Nonlinear Programming without a penalty function
Mathematical Programming, 2000
Abstract

Cited by 244 (30 self)
In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a "filter" is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP.
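The acceptance rule described above can be sketched as a dominance test over (objective, constraint-violation) pairs. This is a simplified illustration of the filter idea, not Fletcher and Leyffer's actual algorithm (which adds envelopes and sufficient-reduction margins); the function names are hypothetical.

```python
def dominated(point, filter_set):
    """A trial (f, h) pair is dominated if some filter entry is
    at least as good in both the objective f and the violation h."""
    f, h = point
    return any(fi <= f and hi <= h for fi, hi in filter_set)

def try_accept(point, filter_set):
    """Accept the trial point if no filter entry dominates it;
    on acceptance, drop entries the new point dominates and add it."""
    if dominated(point, filter_set):
        return False
    filter_set[:] = [(fi, hi) for fi, hi in filter_set
                     if not (point[0] <= fi and point[1] <= hi)]
    filter_set.append(point)
    return True
```

A step is thus accepted as long as it improves either measure relative to the filter, which is what removes the need for a penalty parameter weighting the two.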
Detecting Concept Drift with Support Vector Machines
In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), 2000
Abstract

Cited by 120 (8 self)
For many learning tasks where data is collected over an extended period of time, its underlying distribution is likely to change. A typical example is information filtering, i.e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and the document content change over time. A filtering system should be able to adapt to such concept changes. This paper proposes a new method to recognize and handle concept changes with support vector machines. The method maintains a window on the training data. The key idea is to automatically adjust the window size so that the estimated generalization error is minimized. The new approach is both theoretically well-founded and effective and efficient in practice. Since it does not require complicated parameterization, it is simpler to use and more robust than comparable heuristics. Experiments with simulated concept drift scenarios based on real-world text data com...
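The window-adjustment idea can be sketched as a search over candidate windows that end at the newest batch of data, keeping the size with the smallest estimated error. The paper's estimator is SVM-specific; here `train_and_estimate` is a hypothetical callable standing in for training an SVM on the window and estimating its generalization error.

```python
def select_window(batches, train_and_estimate):
    """Try windows that end at the newest batch and grow backwards in
    time; return the window size (in batches) whose estimated error
    is smallest. `batches` is oldest-first; each batch is a list of
    training examples."""
    best_size, best_err = 1, float("inf")
    for size in range(1, len(batches) + 1):
        window = [x for batch in batches[-size:] for x in batch]
        err = train_and_estimate(window)
        if err < best_err:
            best_size, best_err = size, err
    return best_size
```

After a concept change, older batches inflate the estimated error, so the selected window shrinks to cover only post-change data.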
A taxonomy for multiagent robotics
Autonomous Robots, 1996
Abstract

Cited by 97 (6 self)
A key difficulty in the design of multiagent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system configurations is essential. To this end, we present a taxonomy that classifies multiagent systems according to communication, computational and other capabilities. We survey existing efforts involving multiagent systems according to their positions in the taxonomy. We also present additional results concerning multiagent systems, with the dual purposes of illustrating the usefulness of the taxonomy in simplifying discourse about robot collective properties, and of showing that a collective can be demonstrably more powerful than a single unit of the collective.
Numerical experience with lower bounds for MIQP branch-and-bound
, 1995
Abstract

Cited by 65 (0 self)
The solution of convex Mixed Integer Quadratic Programming (MIQP) problems with a general branch-and-bound framework is considered. It is shown how lower bounds can be computed efficiently during the branch-and-bound process. Improved lower bounds such as the ones derived in this paper can reduce the number of QP problems that have to be solved. The branch-and-bound approach is also shown to be superior to other approaches to solving MIQP problems. Numerical experience is presented which supports these conclusions.
Key words: Integer Programming, Mixed Integer Quadratic Programming, Branch-and-Bound
AMS subject classification: 90C10, 90C11, 90C20
1 Introduction
One of the most successful methods for solving mixed-integer nonlinear problems is branch-and-bound. Land and Doig [16] first introduced a branch-and-bound algorithm for the travelling salesman problem. Dakin [3] introduced the now common branching dichotomy and was the first to realize that it is possible to so...
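The pruning role of lower bounds can be illustrated with a generic branch-and-bound skeleton on a toy one-dimensional problem: minimize (x - 2.4)^2 over the integers in [0, 5]. The bound callable here is a stand-in for the QP-based bounds the paper studies; a node is discarded whenever its relaxation bound cannot beat the incumbent, which is exactly how better bounds translate into fewer subproblems solved.

```python
def branch_and_bound(lo, hi, f, relax_min):
    """Minimize f over the integers in [lo, hi]. `relax_min(a, b)` must
    return a valid lower bound on f over [a, b] (e.g. the value of a
    continuous relaxation); nodes whose bound cannot beat the incumbent
    are pruned without further branching."""
    best_x, best_val = None, float("inf")
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if a > b or relax_min(a, b) >= best_val:
            continue                      # infeasible or fathomed by the bound
        if a == b:                        # leaf node: a single integer point
            if f(a) < best_val:
                best_x, best_val = a, f(a)
            continue
        mid = (a + b) // 2                # branch into two child intervals
        stack += [(a, mid), (mid + 1, b)]
    return best_x, best_val
```

With the exact relaxation bound, the subtree [0, 1] is pruned without ever evaluating its leaves; a weaker bound would force those extra evaluations.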
A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
SIAM J. Optim., 1999
Abstract

Cited by 56 (9 self)
Conjugate gradient methods are widely used for unconstrained optimization, especially large-scale problems. However, the strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method which converges globally provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, similar to those required by the Zoutendijk condition.
Key words: unconstrained optimization, new conjugate gradient method, Wolfe conditions, global convergence
AMS subject classifications: 65K, 90C
1 Introduction
Our problem is to minimize a function of $n$ variables,
$$\min f(x), \qquad (1.1)$$
where $f$ is smooth and its gradient $g(x)$ is available. Conjugate gradient methods for solving (1.1) are iterative methods of the form
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (1.2)$$
where $\alpha_k > 0$ is a steplength and $d_k$ is a search direction. Normally the search direction at...
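A minimal sketch of the iteration (1.2) with the Dai-Yuan choice of beta, $\beta_k = \|g_{k+1}\|^2 / d_k^T(g_{k+1} - g_k)$, which is the formula associated with this paper. For brevity an Armijo backtracking search replaces a proper Wolfe-condition line search, so a steepest-descent restart is added as a safeguard; all names are illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dai_yuan_cg(f, grad, x, iters=200):
    """Nonlinear CG with the Dai-Yuan beta. Armijo backtracking stands
    in for a Wolfe line search, so we restart with steepest descent
    whenever the current direction fails to be a descent direction."""
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        slope = dot(g, d)
        if slope >= 0:                       # safeguard: restart
            d = [-gi for gi in g]
            slope = dot(g, d)
        alpha, fx = 1.0, f(x)
        while alpha > 1e-12 and \
                f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5                     # backtrack until Armijo holds
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if dot(g_new, g_new) < 1e-18:
            return x
        denom = dot(d, [a - b for a, b in zip(g_new, g)])
        beta = dot(g_new, g_new) / denom if denom != 0 else 0.0
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x
```

The paper's point is that with a genuine standard-Wolfe line search this beta yields descent directions automatically, making the restart safeguard unnecessary in theory.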
Integrating SQP and branch-and-bound for Mixed Integer Nonlinear Programming
Computational Optimization and Applications, 1998
Abstract

Cited by 45 (1 self)
This paper considers the solution of Mixed Integer Nonlinear Programming (MINLP) problems. Classical methods for the solution of MINLP problems decompose the problem by separating the nonlinear part from the integer part. This approach is largely due to the existence of packaged software for solving Nonlinear Programming (NLP) and Mixed Integer Linear Programming problems. In contrast, an integrated approach to solving MINLP problems is considered here. This new algorithm is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality. Instead, branching is allowed after each iteration of the NLP solver. In this way, the nonlinear part of the MINLP problem is solved whilst searching the tree. The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver. A numerical comparison of the new method with nonlinear branch-and-bound is presented and a factor of about 3 improvement over branch-and-bound is observed...
OPTIMALITY, COMPUTATION, AND INTERPRETATION OF NONNEGATIVE MATRIX FACTORIZATIONS
SIAM Journal on Matrix Analysis, 2004
Abstract

Cited by 45 (5 self)
The notion of low rank approximations arises from many important applications. When the low rank data are further required to comprise nonnegative values only, the approach by nonnegative matrix factorization is particularly appealing. This paper intends to bring about three points. First, the theoretical Kuhn-Tucker optimality condition is described in explicit form. Secondly, a number of numerical techniques, old and new, are suggested for the nonnegative matrix factorization problems. Thirdly, the techniques are applied to two real-world applications to demonstrate the difficulty in interpreting the factorizations.
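Among the numerical techniques for this problem, the best-known baseline is the Lee-Seung multiplicative update for the Frobenius-norm objective, which preserves the nonnegativity constraints (the constraints that make the Kuhn-Tucker conditions nontrivial) by construction. A compact pure-Python sketch, not necessarily one of the paper's own methods:

```python
def matmul(A, B):
    """Dense matrix product for lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, W, H, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W H under the Frobenius
    norm. Each entry is rescaled by a ratio of nonnegative quantities,
    so nonnegative factors stay nonnegative; eps avoids division by zero."""
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(WT, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(len(H[0]))] for i in range(len(H))]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(matmul(W, H), HT)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(len(W[0]))] for i in range(len(W))]
    return W, H
```

On an exactly rank-one nonnegative matrix the iteration recovers a factorization quickly; interpreting the resulting factors is, as the abstract notes, a separate matter.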
Automatic preconditioning by limited memory quasi-Newton updating
SIAM J. Optim., 1999
Abstract

Cited by 44 (2 self)
The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = b_i with different right hand side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute, and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
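The matrix-free setting can be sketched with a standard preconditioned CG loop in which A appears only through a `matvec` callable and the preconditioner only through an apply callable. The limited-memory quasi-Newton construction itself is omitted here; the example uses a Jacobi-style stand-in, so this illustrates the interface rather than the paper's preconditioner.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(matvec, b, precond, x=None, tol=1e-10, maxiter=100):
    """Preconditioned conjugate gradients where A is available only via
    matvec(v), and precond(r) applies an approximate inverse of A.
    Solves A x = b for symmetric positive definite A."""
    n = len(b)
    x = x if x is not None else [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(x))]
    z = precond(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = precond(r)
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Because only `matvec` and `precond` are called, the same loop works whether A is a stored matrix, a finite-element operator, or a Hessian-vector product inside a Newton method.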
On The Maximization Of A Concave Quadratic Function With Box Constraints
, 1994
Abstract

Cited by 40 (12 self)
We introduce a new method for maximizing a concave quadratic function with bounds on the variables. The new algorithm combines conjugate gradients with gradient projection techniques, as the algorithm of Moré and Toraldo (SIAM J. on Optimization 1, pp. 93-113) and other well-known methods do. A new strategy for the decision of leaving the current face is introduced, that makes it possible to obtain finite convergence even for a singular Hessian and in the presence of dual degeneracy. We present numerical experiments.
November 4, 1992. Work supported by FAPESP (Grant 90/3724/6), FINEP, CNPq and FAEP-UNICAMP. This paper appeared in SIAM Journal on Optimization 4 (1994) 177-192.
1 Introduction
In this paper, we consider the problem of maximizing a concave quadratic function subject to bounds on the variables. This problem (or its equivalent one: minimizing a convex quadratic function on a box) appears frequently in applications, for instance in finite difference discretization ...
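The gradient-projection ingredient can be sketched on its own: step along the gradient (ascent, since this is a maximization) and project the result back onto the box. The paper's method is considerably more elaborate, running conjugate gradients within the current face and applying its new rule for deciding when to leave a face; this sketch shows only the projection idea, with illustrative names and a fixed step length.

```python
def clip(v, lo, hi):
    """Project a point onto the box [lo, hi] componentwise."""
    return [min(max(x, l), h) for x, l, h in zip(v, lo, hi)]

def projected_gradient(grad, x, lo, hi, step=0.1, iters=500):
    """Fixed-step projected gradient ascent for maximizing a concave
    function on a box: move along the gradient, then project back."""
    for _ in range(iters):
        g = grad(x)
        x = clip([xi + step * gi for xi, gi in zip(x, g)], lo, hi)
    return x
```

For a concave quadratic the iterates settle on the correct face of the box; the combination with conjugate gradients in the paper is what accelerates convergence within that face.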