Results 1 – 10 of 21
A survey of nonlinear conjugate gradient methods
Pacific Journal of Optimization, 2006
Cited by 26 (3 self)
Abstract. This paper reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to global convergence properties.
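To make the family of methods the survey covers concrete, here is a minimal Python sketch of a nonlinear conjugate gradient iteration with two classical conjugacy formulas, Fletcher-Reeves and the non-negative (truncated) Polak-Ribière variant; the backtracking line search, the restart safeguard, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="PR+", tol=1e-6, max_iter=1000):
    """Nonlinear conjugate gradient with a backtracking (Armijo) line search.

    beta_rule selects the conjugacy formula: "FR" (Fletcher-Reeves) or
    "PR+" (Polak-Ribiere, truncated at zero).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:          # safeguard: restart with steepest descent
            d = -g
        t, fx = 1.0, f(x)          # backtrack until the Armijo decrease holds
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if beta_rule == "FR":
            beta = g_new.dot(g_new) / g.dot(g)
        else:                       # "PR+"
            beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

The truncation of β at zero in the PR+ rule is one of the global-convergence devices this kind of survey analyzes: it prevents the direction from accumulating bad conjugacy information after poor steps.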
A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation
SIAM Journal on Scientific Computing, 2010
Cited by 21 (7 self)
Abstract. We propose a fast algorithm for solving the ℓ1-regularized minimization problem min_{x∈R^n} µ‖x‖1 + ‖Ax − b‖2^2 for recovering sparse solutions to an underdetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called “shrinkage” yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the ℓ1-norm ‖x‖1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller, smooth quadratic function, is solved in the second stage. Our code FPC_AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to µ. The code exhibits state-of-the-art performance in terms of both its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.
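The first-stage “shrinkage” iteration the abstract describes can be sketched in a few lines of Python. This is a generic iterative shrinkage (proximal gradient) step for the stated objective; the step size, iteration count, and function names are illustrative and are not the FPC_AS implementation.

```python
import numpy as np

def shrink(y, tau):
    """Soft-thresholding: argmin_x tau*||x||_1 + 0.5*||x - y||_2^2."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def shrinkage_stage(A, b, mu, x0, step, n_iter=200):
    """First-order shrinkage iterations for mu*||x||_1 + ||Ax - b||_2^2.

    Returns the final iterate and the estimated support (indices of
    nonzero components), which the second stage would optimize over.
    """
    x = x0.copy()
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)            # gradient of the smooth term
        x = shrink(x - step * grad, step * mu)    # gradient step + shrinkage
    support = np.flatnonzero(x)
    return x, support
```

The key point, matching the abstract, is that shrinkage drives many components exactly to zero, so the support estimate falls out of the iterate itself rather than from a separate thresholding heuristic.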
Second-order negative-curvature methods for box-constrained and general constrained optimization
2009
Cited by 6 (0 self)
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples show that the new method converges to second-order stationary points in situations in which first-order methods fail.
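For readers unfamiliar with the PHR form, the following Python sketch shows the basic outer loop on a one-dimensional toy problem. The crude finite-difference gradient descent used as the inner solver, and all constants, are illustrative assumptions; in the paper the inner solver is the second-order negative-curvature method itself.

```python
def phr_step(f, g, x, lam, rho, inner=500, lr=0.01):
    """One outer iteration of a PHR Augmented Lagrangian method for
    min f(x) s.t. g(x) <= 0 (scalar x and scalar constraint, for clarity).

    The PHR function is L(x) = f(x) + (rho/2) * max(0, lam/rho + g(x))**2
    (the constant -lam**2/(2*rho) is dropped; it does not move the argmin).
    """
    def L(z):
        return f(z) + 0.5 * rho * max(0.0, lam / rho + g(z)) ** 2

    h = 1e-6
    for _ in range(inner):                 # crude inner minimization of L
        x -= lr * (L(x + h) - L(x - h)) / (2.0 * h)
    lam = max(0.0, lam + rho * g(x))       # PHR multiplier update
    return x, lam

# Toy problem: min (x - 2)^2  s.t.  x - 1 <= 0.
# The solution is x* = 1 with multiplier lam* = 2.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0
x, lam = 0.0, 0.0
for _ in range(10):
    x, lam = phr_step(f, g, x, lam, rho=10.0)
```

Each outer iteration pushes the iterate toward feasibility while the multiplier estimate converges to the Lagrange multiplier of the active constraint.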
Improving ultimate convergence of an Augmented Lagrangian method
2007
Cited by 4 (0 self)
Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased over the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its “pure” counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page:
An Algorithm for the Fast Solution of Symmetric Linear Complementarity Problems
2008
Cited by 4 (1 self)
This paper studies algorithms for the solution of mixed symmetric linear complementarity problems. The goal is to compute approximate solutions quickly for medium- to large-sized problems, such as those arising in computer game simulations and American options pricing. The paper proposes an improvement of a method described by Kocvara and Zowe [19] that combines projected Gauss-Seidel iterations with subspace minimization steps. The proposed algorithm employs a recursive subspace minimization designed to handle severely ill-conditioned problems. Numerical tests indicate that the approach is more efficient than interior-point and gradient projection methods on some physical simulation problems that arise in computer game scenarios.
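The projected Gauss-Seidel building block the abstract refers to is easy to state; here is a minimal Python sketch for the symmetric LCP (find x ≥ 0 with Mx + q ≥ 0 and xᵀ(Mx + q) = 0). Sweep counts and the stopping rule are illustrative; the paper's contribution, the recursive subspace minimization, is not shown.

```python
import numpy as np

def projected_gauss_seidel(M, q, x0=None, sweeps=100):
    """Projected Gauss-Seidel for the LCP: find x >= 0 with
    Mx + q >= 0 and x^T (Mx + q) = 0, for symmetric positive definite M.
    """
    n = len(q)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(sweeps):
        for i in range(n):
            # Residual of row i with x_i removed, then project onto x_i >= 0.
            r = q[i] + M[i] @ x - M[i, i] * x[i]
            x[i] = max(0.0, -r / M[i, i])
    return x
```

Each inner update is the unconstrained Gauss-Seidel step for component i clipped at zero, which is what makes the iteration cheap enough for the real-time settings (games, option pricing) the abstract mentions.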
Sample Size Selection in Optimization Methods for Machine Learning
2012
Cited by 4 (1 self)
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient. We establish an O(1/ε) complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. In the third part, the focus shifts to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
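A variance-based sample-size test in the spirit of the one the abstract describes can be sketched as follows. The tolerance theta and the exact form of the test are illustrative assumptions, not the paper's precise criterion.

```python
import numpy as np

def should_grow_sample(per_example_grads, theta=0.5):
    """Variance test for dynamic sample sizes: trust the batch gradient
    g_S as a proxy for the true gradient only if the variance of the
    sample mean is small relative to ||g_S||; otherwise grow |S|.

    per_example_grads: array of shape (|S|, d), one gradient per example.
    theta: tolerance of the test (illustrative default).
    """
    g = per_example_grads.mean(axis=0)                      # batch gradient g_S
    n = per_example_grads.shape[0]
    var_of_mean = per_example_grads.var(axis=0, ddof=1).sum() / n
    return bool(np.sqrt(var_of_mean) > theta * np.linalg.norm(g))
```

The appeal of this kind of test is that the per-example gradients are already available during the batch gradient computation, so the variance estimate comes essentially for free.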
Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization
2010
Cited by 2 (0 self)
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable, so the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one might not be able to solve the subproblem to the required precision. This may happen for different reasons; one is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in such situations is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which handles numerical difficulties in a satisfactory way while preserving a suitable convergence theory. Numerical experiments are presented.
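The nonmonotone decision logic the abstract alludes to can be sketched as a simple update rule. The constants and the exact tests are illustrative assumptions; the paper develops the strategy together with its convergence theory.

```python
def update_penalty(rho, infeas, infeas_prev, solver_failed,
                   gamma_up=10.0, gamma_down=0.5, tau=0.5):
    """Nonmonotone penalty update (sketch): increase rho when infeasibility
    is not shrinking fast enough, but *decrease* it when the inner
    box-constrained solver failed, on the premise that an oversized rho
    made the subproblem too ill-conditioned to solve to tolerance.
    """
    if solver_failed:
        return gamma_down * rho   # back off: subproblem was too hard
    if infeas > tau * infeas_prev:
        return gamma_up * rho     # feasibility stalled: penalize harder
    return rho                    # sufficient progress: keep rho
```

The departure from the classical scheme is the first branch: standard theory only ever increases (or holds) the penalty parameter, whereas here a failed inner solve is treated as evidence that rho should shrink.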
ON THE CONVERGENCE OF AN ACTIVE SET METHOD FOR ℓ1 MINIMIZATION
Cited by 2 (1 self)
Abstract. We analyze an abridged version of the active-set algorithm FPC_AS proposed in [18] for solving the ℓ1-regularized problem, i.e., a weighted sum of the ℓ1-norm ‖x‖1 and a smooth function f(x). The active-set algorithm alternates between two stages. In the first, “nonmonotone line search (NMLS)”, stage, an iterative first-order method based on “shrinkage” is used to estimate the support of the solution. In the second, “subspace optimization”, stage, a smaller smooth problem is solved to recover the magnitudes of the nonzero components of x. We show that NMLS itself is globally convergent and that its convergence rate is at least R-linear. In particular, NMLS identifies the zero components of a stationary point after a finite number of steps under some mild conditions. The global convergence of FPC_AS is established based on these properties.
Proximal Methods for Nonlinear Programming: Double Regularization and Inexact Subproblems
2008
AN ELLIPSOIDAL BRANCH AND BOUND ALGORITHM FOR GLOBAL OPTIMIZATION
Cited by 1 (1 self)
Abstract. A branch and bound algorithm is developed for global optimization. Branching in the algorithm is accomplished by subdividing the feasible set using ellipses. Lower bounds are obtained by replacing the concave part of the objective function by an affine underestimate. A ball approximation algorithm, obtained by generalizing a scheme of Lin and Han, is used to solve the convex relaxation of the original problem. The ball approximation algorithm is compared to SeDuMi as well as to gradient projection algorithms on randomly generated test problems with a quadratic objective and ellipsoidal constraints.