Results 1-10 of 77
Nonlinear Programming without a penalty function
 Mathematical Programming
, 2000
"... In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trustregion algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a "filter" is introduced w ..."
Abstract

Cited by 164 (27 self)
In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a "filter" is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP.
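The dominance idea behind the filter can be sketched in a few lines. This is a minimal illustration of the acceptance test only, with hypothetical function names; the paper's full algorithm adds trust-region machinery and safeguards (envelopes, sufficient-reduction margins) that are omitted here.

```python
def dominates(a, b):
    """Pair a = (f, h) dominates pair b if a is no worse in both the
    objective value f and the constraint violation h."""
    return a[0] <= b[0] and a[1] <= b[1]

def acceptable(point, filter_set):
    """A trial point is acceptable to the filter if no stored pair
    dominates it, i.e. it improves either f or h over every entry."""
    return not any(dominates(entry, point) for entry in filter_set)

def add_to_filter(point, filter_set):
    """Store an accepted point and discard any entries it dominates."""
    kept = [e for e in filter_set if not dominates(point, e)]
    kept.append(point)
    return kept
```

A step that worsens the objective can still be accepted if it reduces the constraint violation, which is exactly how the filter replaces a penalty function.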
Detecting Concept Drift with Support Vector Machines
 In Proceedings of the Seventeenth International Conference on Machine Learning (ICML)
, 2000
"... For many learning tasks where data is collected over an extended period of time, its underlying distribution is likely to change. A typical example is information filtering, i.e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and th ..."
Abstract

Cited by 91 (8 self)
For many learning tasks where data is collected over an extended period of time, its underlying distribution is likely to change. A typical example is information filtering, i.e. the adaptive classification of documents with respect to a particular user interest. Both the interest of the user and the document content change over time. A filtering system should be able to adapt to such concept changes. This paper proposes a new method to recognize and handle concept changes with support vector machines. The method maintains a window on the training data. The key idea is to automatically adjust the window size so that the estimated generalization error is minimized. The new approach is theoretically well-founded as well as effective and efficient in practice. Since it does not require complicated parameterization, it is simpler to use and more robust than comparable heuristics. Experiments with simulated concept drift scenarios based on real-world text data com...
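The window-adjustment idea can be sketched generically: for each candidate window ending at the newest batch, train a model and keep the window whose estimated generalization error is smallest. The `train` and `estimate_error` callables here are placeholders; the paper uses an SVM together with an error estimate computed from the trained machine.

```python
def select_window(batches, train, estimate_error):
    """Adaptive window selection for concept drift: try every window
    that ends at the most recent batch, train on it, and return the
    window size whose estimated generalization error is minimal."""
    best_size, best_err, best_model = None, float("inf"), None
    for size in range(1, len(batches) + 1):
        window = [ex for batch in batches[-size:] for ex in batch]
        model = train(window)
        err = estimate_error(model, window)
        if err < best_err:
            best_size, best_err, best_model = size, err, model
    return best_size, best_model
```

After a drift, old batches raise the estimated error, so the selected window shrinks to exclude them automatically.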
A taxonomy for multiagent robotics
 AUTONOMOUS ROBOTS
, 1996
"... A key difficulty in the design of multiagent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system configurations is essential. To this end, we present a taxonomy that classifies multia ..."
Abstract

Cited by 81 (6 self)
A key difficulty in the design of multiagent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system configurations is essential. To this end, we present a taxonomy that classifies multiagent systems according to communication, computational and other capabilities. We survey existing efforts involving multiagent systems according to their positions in the taxonomy. We also present additional results concerning multiagent systems, with the dual purpose of illustrating the usefulness of the taxonomy in simplifying discourse about robot collective properties, and of demonstrating that a collective can be more powerful than a single unit of the collective.
Numerical experience with lower bounds for MIQP branch-and-bound
, 1995
"... The solution of convex Mixed Integer Quadratic Programming (MIQP) problems with a general branchandbound framework is considered. It is shown how lower bounds can be computed efficiently during the branchandbound process. Improved lower bounds such as the ones derived in this paper can reduc ..."
Abstract

Cited by 46 (0 self)
The solution of convex Mixed Integer Quadratic Programming (MIQP) problems with a general branch-and-bound framework is considered. It is shown how lower bounds can be computed efficiently during the branch-and-bound process. Improved lower bounds such as the ones derived in this paper can reduce the number of QP problems that have to be solved. The branch-and-bound approach is also shown to be superior to other approaches to solving MIQP problems. Numerical experience is presented which supports these conclusions.
Key words: Integer Programming, Mixed Integer Quadratic Programming, Branch-and-Bound
AMS subject classification: 90C10, 90C11, 90C20
1 Introduction
One of the most successful methods for solving mixed-integer nonlinear problems is branch-and-bound. Land and Doig [16] first introduced a branch-and-bound algorithm for the travelling salesman problem. Dakin [3] introduced the now common branching dichotomy and was the first to realize that it is possible to so...
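The role of lower bounds in branch-and-bound can be illustrated on a toy separable MIQP, min Σ(z_i − c_i)² with z integer in a box. This sketch is hypothetical and much simpler than the paper's setting: the box relaxation of a separable quadratic is solved by clamping, which gives the cheap lower bound used for pruning.

```python
import math

def solve_miqp_bb(c, bounds):
    """Toy branch-and-bound for min sum((z_i - c_i)^2), z_i integer in
    [lo_i, hi_i].  Each node's relaxation clamps the unconstrained
    minimizer c into the box; nodes whose lower bound is no better
    than the incumbent are pruned."""
    def relax(box):
        z = [min(max(ci, lo), hi) for ci, (lo, hi) in zip(c, box)]
        return z, sum((zi - ci) ** 2 for zi, ci in zip(z, c))
    best_val, best_sol = float("inf"), None
    stack = [tuple(bounds)]
    while stack:
        box = stack.pop()
        z, lb = relax(box)
        if lb >= best_val:          # prune by lower bound
            continue
        frac = [i for i, zi in enumerate(z) if abs(zi - round(zi)) > 1e-9]
        if not frac:                # relaxation integral: feasible point
            best_val, best_sol = lb, [int(round(zi)) for zi in z]
            continue
        i = frac[0]                 # branch on first fractional variable
        lo, hi = box[i]
        down, up = list(box), list(box)
        down[i] = (lo, math.floor(z[i]))
        up[i] = (math.ceil(z[i]), hi)
        stack += [tuple(down), tuple(up)]
    return best_sol, best_val
```

A tighter lower bound prunes more nodes, so fewer QP relaxations need to be solved, which is the point the abstract makes.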
OPTIMALITY, COMPUTATION, AND INTERPRETATION OF NONNEGATIVE MATRIX FACTORIZATIONS
 SIAM JOURNAL ON MATRIX ANALYSIS
, 2004
"... The notion of low rank approximations arises from many important applications. When the low rank data are further required to comprise nonnegative values only, the approach by nonnegative matrix factorization is particularly appealing. This paper intends to bring about three points. First, the theor ..."
Abstract

Cited by 31 (5 self)
The notion of low-rank approximation arises in many important applications. When the low-rank data are further required to comprise nonnegative values only, the approach by nonnegative matrix factorization is particularly appealing. This paper intends to bring about three points. First, the theoretical Kuhn-Tucker optimality condition is described in explicit form. Second, a number of numerical techniques, old and new, are suggested for the nonnegative matrix factorization problem. Third, the techniques are applied to two real-world applications to demonstrate the difficulty in interpreting the factorizations.
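For the common Frobenius-norm formulation, one standard way to write the Kuhn-Tucker conditions the abstract refers to is the following (this assumes the objective $f(W,H)=\tfrac12\|V-WH\|_F^2$; the paper's exact statement may differ):

```latex
\min_{W \ge 0,\; H \ge 0} \ f(W,H) = \tfrac{1}{2}\,\|V - WH\|_F^2,
\qquad
\nabla_W f = (WH - V)H^{\mathsf T}, \quad
\nabla_H f = W^{\mathsf T}(WH - V).
% First-order (Kuhn-Tucker) optimality conditions:
%   feasibility:        W \ge 0, \quad H \ge 0,
%   stationarity:       \nabla_W f \ge 0, \quad \nabla_H f \ge 0,
%   complementarity:    W \circ \nabla_W f = 0, \quad H \circ \nabla_H f = 0,
% where \circ denotes the Hadamard (elementwise) product.
```

The complementarity conditions say that at a stationary point each entry of $W$ (and $H$) is either zero or has a vanishing partial derivative, which is what "explicit form" makes checkable in practice.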
On The Maximization Of A Concave Quadratic Function With Box Constraints
, 1994
"... . We introduce a new method for maximizing a concave quadratic function with bounds on the variables. The new algorithm combines conjugate gradients with gradient projection techniques, as the algorithm of Mor'e and Toraldo (SIAM J. on Optimization 1, pp. 93113) and other wellknown methods do. A n ..."
Abstract

Cited by 31 (11 self)
We introduce a new method for maximizing a concave quadratic function with bounds on the variables. The new algorithm combines conjugate gradients with gradient projection techniques, as the algorithm of Moré and Toraldo (SIAM J. on Optimization 1, pp. 93-113) and other well-known methods do. A new strategy for deciding when to leave the current face is introduced, which makes it possible to obtain finite convergence even for a singular Hessian and in the presence of dual degeneracy. We present numerical experiments.
November 4, 1992. Work supported by FAPESP (Grant 90/3724/6), FINEP, CNPq and FAEP-UNICAMP. This paper appeared in SIAM Journal on Optimization 4 (1994) 177-192.
1. Introduction
In this paper, we consider the problem of maximizing a concave quadratic function subject to bounds on the variables. This problem (or its equivalent one: minimizing a convex quadratic function on a box) appears frequently in applications, for instance in finite difference discretization ...
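The gradient-projection half of such methods is simple to sketch. This hypothetical fragment shows only the projection mechanism (ascent step, then projection onto the box); the paper's algorithm additionally runs conjugate gradients inside each face and uses its own face-leaving test, none of which appears here.

```python
def project(x, lo, hi):
    """Project x onto the box [lo, hi], componentwise."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient_max(grad, x, lo, hi, step=0.25, iters=200):
    """Gradient projection for maximizing a concave function on a box:
    take a gradient ascent step of fixed size, then project back onto
    the bounds.  A step below 1/L (L the gradient's Lipschitz constant)
    keeps the iteration convergent for a concave quadratic."""
    for _ in range(iters):
        x = project([xi + step * gi for xi, gi in zip(x, grad(x))], lo, hi)
    return x
```

Components that hit a bound identify the active face; the CG phase then optimizes over the remaining free variables.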
Automatic preconditioning by limited memory quasi-Newton updating
 SIAM J. Optim
"... The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = bi with di erent right hand side vectors, or for solving a sequence of slowly varying systems Akx = bk. The preconditioner has the form of a limited memory quasiNewton m ..."
Abstract

Cited by 31 (2 self)
The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = b_i with different right-hand-side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute, and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
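The matrix-free setting the abstract describes is easy to make concrete: preconditioned CG touches A and the preconditioner only through products. The sketch below is standard PCG, not the paper's method; a limited-memory quasi-Newton approximation of A^{-1} would be one choice of `apply_M_inv`, but here it can be any symmetric positive definite map.

```python
def pcg(apply_A, b, apply_M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients, matrix-free: the coefficient
    matrix and the preconditioner enter only through the callables
    apply_A(v) = A v and apply_M_inv(v) ~= A^{-1} v."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                         # residual r = b - A x, x = 0
    z = apply_M_inv(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = apply_M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Because each iteration costs one product with A, a preconditioner that cuts the iteration count pays off most when those products are expensive, exactly the regime the abstract identifies.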
Stable Numerical Algorithms for Equilibrium Systems
 SIAM J. Matrix Anal. Appl
, 1992
"... An equilibrium system (also known as a KKT system, a saddlepoint system, or a sparse tableau) is a square linear system with a certain structure. G. Strang has observed that equilibrium systems arise in optimization, finite elements, structural analysis, and electrical networks. Recently, G. W. Stew ..."
Abstract

Cited by 28 (3 self)
An equilibrium system (also known as a KKT system, a saddle-point system, or a sparse tableau) is a square linear system with a certain structure. G. Strang has observed that equilibrium systems arise in optimization, finite elements, structural analysis, and electrical networks. Recently, G. W. Stewart established a norm bound for a type of equilibrium system in the case that the "stiffness" portion of the system is very ill-conditioned. In this paper we investigate the algorithmic implications of Stewart's result. We show that all standard textbook algorithms for equilibrium systems are unstable. Then we show that a certain hybrid method has the right stability property.
1 Equilibrium systems
Recently, Strang [1986] has observed that the problem of solving the structured linear system

    [ D   -A ] [ x ]   [ b ]
    [ A^T  0 ] [ y ] = [ c ]     (1)

This work supported by an NSF Presidential Young Investigator grant, with matching funds received from Xerox Corp. Department of Computer Scie...
A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
 SIAM J. Optim
, 1999
"... . Conjugate gradient methods are widely used for unconstrained optimization, especially large scale problems. However, the strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, w ..."
Abstract

Cited by 25 (5 self)
Conjugate gradient methods are widely used for unconstrained optimization, especially for large-scale problems. However, the strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method which converges globally provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, similar to those required by the Zoutendijk condition.
Key words: unconstrained optimization, new conjugate gradient method, Wolfe conditions, global convergence.
AMS subject classifications. 65k, 90c
1. Introduction
Our problem is to minimize a function of n variables, min f(x), (1.1) where f is smooth and its gradient g(x) is available. Conjugate gradient methods for solving (1.1) are iterative methods of the form x_{k+1} = x_k + α_k d_k, (1.2) where α_k > 0 is a steplength and d_k is a search direction. Normally the search direction at...
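The iteration (1.2) can be sketched with the β formula from this line of work, β_k = ‖g_{k+1}‖² / (d_kᵀ(g_{k+1} − g_k)). Note the hedges: the paper's convergence theory assumes a Wolfe line search, whereas this sketch substitutes a simple Armijo backtracking search plus a steepest-descent restart safeguard, so it illustrates the update rule rather than reproducing the paper's guarantees.

```python
def nonlinear_cg(f, grad, x, iters=200):
    """Nonlinear CG iteration x_{k+1} = x_k + alpha_k d_k with
    beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k))."""
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                  # safeguard: restart along -g
            d = [-gi for gi in g]
            slope = sum(gi * di for gi, di in zip(g, d))
        # Armijo backtracking line search (a Wolfe search in the paper)
        alpha, fx = 1.0, f(x)
        while alpha > 1e-12 and \
                f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        denom = sum(di * (gn - go) for di, gn, go in zip(d, g_new, g))
        beta = (sum(gn * gn for gn in g_new) / denom) if abs(denom) > 1e-12 else 0.0
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On a strongly convex quadratic the denominator d_kᵀ(g_{k+1} − g_k) = α_k d_kᵀH d_k is positive, so β_k is well defined along the whole run.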