Results 1–10 of 106
Large-Scale Active-Set Box-Constrained Optimization Method with Spectral Projected Gradients
Computational Optimization and Applications, 2001
Cited by 60 (11 self)
A new active-set method for smooth box-constrained minimization is introduced. The algorithm combines an unconstrained method, including a new line search which aims to add many constraints to the working set at a single iteration, with a recently introduced technique (spectral projected gradient) for dropping constraints from the working set. Global convergence is proved. A computer implementation is fully described and a numerical comparison assesses the reliability of the new algorithm. Keywords: box-constrained minimization, numerical methods, active-set strategies, spectral projected gradient.
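For illustration, a minimal sketch of the spectral projected gradient idea on a box (Barzilai–Borwein step lengths combined with projection; the function name, safeguards, and constants are ours, not the paper's implementation, which additionally manages a working set and a line search):

```python
import numpy as np

def spg_box(grad, x0, lo, hi, max_iter=200, tol=1e-8):
    """Sketch of a spectral projected gradient iteration on the box [lo, hi]."""
    proj = lambda z: np.clip(z, lo, hi)   # projection onto the box
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = 1.0                           # initial spectral step length
    for _ in range(max_iter):
        x_new = proj(x - alpha * g)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # Barzilai-Borwein ("spectral") step length, safeguarded.
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0
        alpha = min(max(alpha, 1e-10), 1e10)
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic this converges to the projection of the unconstrained minimizer when that projection is the constrained solution; for general smooth objectives a nonmonotone line search (as in the SPG literature) is needed for global convergence.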
An interior point algorithm for large-scale nonlinear . . .
2002
Cited by 59 (3 self)
Nonlinear programming (NLP) has become an essential tool in process engineering, leading to profit gains through improved plant designs and better control strategies. The rapid advance in computer technology enables engineers to consider increasingly complex systems, where existing optimization codes reach their practical limits. The objective of this dissertation is the design, analysis, implementation, and evaluation of a new NLP algorithm that is able to overcome the current bottlenecks, particularly in the area of process engineering. The proposed algorithm follows an interior point approach, thereby avoiding the combinatorial complexity of identifying the active constraints. Emphasis is laid on flexibility in the computation of search directions, which allows the tailoring of the method to individual applications and is mandatory for the solution of very large problems. In a full-space version the method can be used as a general-purpose NLP solver, for example in modeling environments such as AMPL. The reduced-space version, based on coordinate decomposition, makes it possible to tailor linear algebra
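The interior-point idea of sidestepping combinatorial active-set identification can be sketched in its simplest form, bound constraints x ≥ 0 handled by Newton steps on a log-barrier with a decreasing barrier parameter (the function `barrier_solve` and all its parameters are our illustrative choices, not the dissertation's algorithm):

```python
import numpy as np

def barrier_solve(grad_f, hess_f, x0, mu0=1.0, shrink=0.2, outer=8, inner=20):
    """Log-barrier sketch for min f(x) s.t. x >= 0:
    Newton on f(x) - mu * sum(log x), driving mu toward zero."""
    x, mu = x0.copy(), mu0
    for _ in range(outer):
        for _ in range(inner):
            g = grad_f(x) - mu / x                 # barrier gradient
            H = hess_f(x) + np.diag(mu / x**2)     # barrier Hessian
            d = np.linalg.solve(H, -g)
            # Backtrack to keep the iterate strictly positive.
            t = 1.0
            while np.any(x + t * d <= 0):
                t *= 0.5
            x = x + t * d
        mu *= shrink                               # tighten the barrier
    return x
```

No guess about which bounds are active is ever made; active bounds emerge as the corresponding variables are driven toward zero along with mu.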
An Algorithm for Nonlinear Optimization Using Linear Programming and Equality Constrained Subproblems
2003
Cited by 44 (13 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program.
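As a sketch of the second stage only, here is the equality-constrained QP step on a toy instance, assuming the first-stage LP has already flagged the single inequality as active (the instance, variable names, and the assumption itself are ours, not the paper's):

```python
import numpy as np

# Toy instance: minimize (x1-2)^2 + (x2-2)^2 subject to x1 + x2 <= 2.
# Stage 1 would solve an l1-penalty LP in a trust region to guess the
# active set; we assume it flagged x1 + x2 = 2 as active and show only
# the stage-2 equality-constrained QP (EQP) step.
x = np.array([0.0, 0.0])            # current iterate
H = 2.0 * np.eye(2)                 # Hessian of the objective
g = 2.0 * (x - 2.0)                 # gradient at x
A = np.array([[1.0, 1.0]])          # Jacobian row of the active constraint
r = np.array([x[0] + x[1] - 2.0])   # residual of the active constraint

# KKT system for the EQP step d and multiplier lam:
#   [H  A^T] [ d ]   [-g]
#   [A   0 ] [lam] = [-r]
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-g, -r]))
d, lam = sol[:2], sol[2]
x_new = x + d                       # the full step lands on the solution (1, 1)
```

Because only the active constraints enter the KKT system, the EQP is much cheaper than a full inequality-constrained QP subproblem.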
A new active set algorithm for box constrained optimization
 SIAM J. Optim
A Nonnegatively Constrained Convex Programming Method for Image Reconstruction
 SIAM Journal on Scientific Computing
Cited by 27 (15 self)
We consider a large-scale convex minimization problem with nonnegativity constraints that arises in astronomical imaging. We develop a cost functional which incorporates the statistics of the noise in the image data and Tikhonov regularization to induce stability. We introduce an efficient hybrid gradient projection-reduced Newton (active set) method. By “reduced Newton” we mean taking Newton steps only in the inactive variables. Due to the large size of our problem, we compute approximate reduced Newton steps using conjugate gradient (CG) iteration. We also introduce a highly effective sparse preconditioner that dramatically speeds up CG convergence. A numerical comparison between our method and other standard large-scale constrained minimization algorithms is presented.
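A bare-bones sketch of the hybrid idea for a quadratic objective with nonnegativity constraints: a gradient projection step updates the active set, then a CG-computed Newton step is taken in the free (inactive) variables. The fixed projection step length, tolerances, and function names are our simplifications; the paper's method, including its sparse preconditioner, is considerably more refined.

```python
import numpy as np

def cg(A, rhs, tol=1e-10, max_iter=50):
    """Plain conjugate gradient for symmetric positive definite systems."""
    x = np.zeros_like(rhs)
    r = rhs - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def gpcg_nonneg(Q, b, x0, iters=20):
    """Hybrid gradient-projection / reduced-Newton sketch for
    min 0.5 x^T Q x - b^T x subject to x >= 0."""
    x = np.maximum(x0, 0.0)
    for _ in range(iters):
        g = Q @ x - b
        # Gradient projection step to update the active set.
        x = np.maximum(x - 0.1 * g, 0.0)
        g = Q @ x - b
        free = (x > 0) | (g < 0)          # inactive ("free") variables
        if not free.any():
            break
        # Reduced Newton step: solve Q_ff d = -g_f by CG.
        d = cg(Q[np.ix_(free, free)], -g[free])
        step = np.zeros_like(x)
        step[free] = d
        x = np.maximum(x + step, 0.0)     # project back onto x >= 0
    return x
```

The "reduced" system involves only the rows and columns of the Hessian indexed by free variables, which is what makes CG attractive at imaging scales.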
PETSc users manual
Tech. Rep. ANL-95/11, Revision 2.1.5, Argonne National Laboratory, 2004
Cited by 27 (8 self)
This work was supported by the Mathematical, Information, and Computational Sciences
Optimality measures for performance profiles
Preprint ANL/MCS-P1155-0504, Mathematics and Computer Science Division, Argonne National Laboratory, 2004
Cited by 26 (0 self)
We examine the influence of optimality measures on the benchmarking process, and show that scaling requirements lead to a convergence test for nonlinearly constrained solvers that uses a mixture of absolute and relative error measures. We show that this convergence test is well behaved at any point where the constraints satisfy the Mangasarian–Fromovitz constraint qualification and also avoids the explicit use of a complementarity measure. Our computational experiments explore the impact of this convergence test on the benchmarking process with performance profiles.
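The performance profiles themselves (in the sense of Dolan and Moré) are simple to compute once each solver's cost per problem is recorded; a minimal sketch, where the array layout is our assumption:

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s]: cost of solver s on problem p (np.inf if it failed).
    Returns rho[s, k] = fraction of problems solved by solver s within
    a factor taus[k] of the best solver on each problem."""
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

The paper's point is that the *cost* entries only mean something if every solver is stopped by a comparable convergence test, which is what motivates their mixed absolute/relative measure.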
Pairwise Ranking Aggregation in a Crowdsourced Setting
Cited by 22 (1 self)
Inferring rankings over elements of a set of objects, such as documents or images, is a key learning problem for such important applications as Web search and recommender systems. Crowdsourcing services provide an inexpensive and efficient means to acquire preferences over objects via labeling by sets of annotators. We propose a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing. In contrast to traditional ranking aggregation methods, the approach learns about and folds into consideration the quality of contributions of each annotator. In addition, we minimize the cost of assessment by introducing a generalization of the traditional active learning scenario to jointly select the annotator and pair to assess while taking into account the annotator quality, the uncertainty over ordering of the pair, and the current model uncertainty. We formalize this as an active learning strategy that incorporates an exploration–exploitation trade-off and implement it using an efficient online Bayesian updating scheme. Using simulated and real-world data, we demonstrate that the active learning strategy achieves significant reductions in labeling cost while maintaining accuracy.
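The classical baseline that such models extend is Bradley–Terry aggregation of pairwise outcomes, with no annotator modeling; a small sketch (the paper's model additionally learns per-annotator quality and an active querying policy, which this omits):

```python
import numpy as np

def fit_bradley_terry(wins, n_items, iters=200, lr=0.5):
    """wins: list of (winner, loser) pairwise comparisons.
    Fits item scores s by gradient ascent on the Bradley-Terry
    log-likelihood, P(i beats j) = 1 / (1 + exp(s_j - s_i))."""
    s = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for i, j in wins:
            p = 1.0 / (1.0 + np.exp(s[j] - s[i]))  # P(i beats j)
            grad[i] += 1.0 - p
            grad[j] -= 1.0 - p
        s += lr * grad / len(wins)
        s -= s.mean()                              # fix the additive gauge
    return s
```

Scores are identifiable only up to an additive constant, hence the mean-centering step.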
Sample Size Selection in Optimization Methods for Machine Learning
2012
Cited by 21 (3 self)
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient. We establish an O(1/ε) complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The focus of the paper shifts in the third part to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
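The flavor of such a variance-based criterion can be sketched as a "norm test" on per-example gradients (our simplified rendering; the threshold `theta` and the exact statistic are illustrative, not the paper's):

```python
import numpy as np

def needs_larger_sample(grads, theta=0.9):
    """grads: array of shape (batch_size, dim), one per-example
    gradient per row. Signals a sample-size increase when the
    estimated gradient noise dominates the gradient signal."""
    n = grads.shape[0]
    g = grads.mean(axis=0)                      # batch gradient estimate
    # Estimated variance of the batch-mean gradient (trace), scaled by 1/n.
    var = grads.var(axis=0, ddof=1).sum() / n
    return np.sqrt(var) > theta * np.linalg.norm(g)
```

Because the per-example gradients are already in hand after a batch evaluation, the test adds essentially no cost per iteration.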
Superlinear and Quadratic Convergence of Affine-Scaling Interior-Point Newton Methods for Problems with Simple Bounds without Strict Complementarity Assumption
1998
Cited by 20 (3 self)
A class of affine-scaling interior-point methods for bound constrained optimization problems is introduced which are locally q-superlinearly or q-quadratically convergent. It is assumed that the strong...