Sparse Reconstruction by Separable Approximation
, 2008
Abstract

Cited by 168 (27 self)
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularization term. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (which is therefore separable in the unknowns) plus the original sparsity-inducing regularizer. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2 − ℓ1 case, our framework yields an efficient solution technique for other regularizers, such as an ℓ∞-norm regularizer and group-separable (GS) regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2 − ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
 Optimization Methods and Software
, 1995
Abstract

Cited by 149 (33 self)
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step-length acceptance criterion and a nonmonotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
Nonmonotone spectral projected gradient methods on convex sets
 SIAM Journal on Optimization
, 2000
Abstract

Cited by 133 (25 self)
Nonmonotone projected gradient techniques are considered for the minimization of differentiable functions on closed convex sets. The classical projected gradient schemes are extended to include a nonmonotone step-length strategy that is based on the Grippo–Lampariello–Lucidi nonmonotone line search. In particular, the nonmonotone strategy is combined with the spectral gradient choice of step length to accelerate the convergence process. In addition to the classical projected gradient nonlinear path, the feasible spectral projected gradient is used as a search direction to avoid additional trial projections during the one-dimensional search process. Convergence properties and extensive numerical results are presented.
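The three ingredients named in this abstract — the spectral (Barzilai–Borwein) step length, the feasible spectral projected gradient direction, and the Grippo–Lampariello–Lucidi nonmonotone acceptance test — can be sketched for the special case of box constraints (the paper treats general closed convex sets). `spg_box` and its defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spg_box(grad_f, f, x, lo, hi, iters=200, M=10, gamma=1e-4):
    """Nonmonotone spectral projected gradient on the box [lo, hi]."""
    proj = lambda z: np.clip(z, lo, hi)
    alpha = 1.0
    hist = [f(x)]                       # recent values for the GLL reference
    for _ in range(iters):
        g = grad_f(x)
        d = proj(x - alpha * g) - x     # feasible spectral direction
        lam, fref = 1.0, max(hist[-M:])
        while f(x + lam * d) > fref + gamma * lam * (g @ d):
            lam *= 0.5                  # nonmonotone (GLL) backtracking
        s = lam * d
        x_new = x + s
        yvec = grad_f(x_new) - g
        sy = s @ yvec
        alpha = (s @ s) / sy if sy > 0 else 1.0   # Barzilai-Borwein step length
        x = x_new
        hist.append(f(x))
    return x

# demo: minimize ||x - c||^2 over the box [-1, 1]^3
c = np.array([2.0, -3.0, 0.5])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)
sol = spg_box(grad, f, np.zeros(3), -1.0, 1.0)
```

On this quadratic the solution is simply `c` clipped to the box, which the sketch reaches in a handful of iterations.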
Sequential Quadratic Programming
, 1995
Abstract

Cited by 114 (2 self)
... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
A Semismooth Equation Approach To The Solution Of Nonlinear Complementarity Problems
, 1995
Abstract

Cited by 79 (9 self)
In this paper we present a new algorithm for the solution of nonlinear complementarity problems. The algorithm is based on a semismooth equation reformulation of the complementarity problem. We exploit the recent extension of Newton's method to semismooth systems of equations and the fact that the natural merit function associated with the equation reformulation is continuously differentiable to develop an algorithm whose global and quadratic convergence properties can be established under very mild assumptions. Other interesting features of the new algorithm are its extreme simplicity and low computational burden per iteration. We include numerical tests which show the viability of the approach.
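Semismooth equation reformulations of this kind are typically built from an NCP-function such as the Fischer–Burmeister function (the abstract does not name the specific function used, so this is a sketch of the general construction): φ(a, b) = √(a² + b²) − a − b vanishes exactly when a ≥ 0, b ≥ 0, and ab = 0, so the complementarity conditions become a semismooth system Φ(x) = 0.

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b, elementwise.
    phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def ncp_residual(x, F):
    """Semismooth-equation residual Phi(x) for NCP(F):
    find x >= 0 with F(x) >= 0 and x^T F(x) = 0."""
    return fischer_burmeister(x, F(x))

# hypothetical one-dimensional map F with NCP solution x = 1
F = lambda x: x - 1.0
r_at_solution = ncp_residual(np.array([1.0]), F)   # zero residual
r_elsewhere = ncp_residual(np.array([0.0]), F)     # nonzero residual
```

A Newton-type method for the NCP then drives `ncp_residual` to zero; the point of the reformulation is that the associated merit function ½‖Φ(x)‖² is continuously differentiable even though Φ itself is only semismooth.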
User's Guide for CFSQP Version 2.5: A C Code for Solving (Large Scale) Constrained Nonlinear (Minimax) Optimization Problems, Generating Iterates Satisfying All Inequality Constraints
, 1997
Abstract

Cited by 55 (1 self)
CFSQP is a set of C functions for the minimization of the maximum of a set of smooth objective functions (possibly a single one, or even none at all) subject to general smooth constraints (if there is no objective function, the goal is to simply find a point satisfying the constraints). If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, CFSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by CFSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. When solving problems with many sequentially related constraints (or objectives), such as discretized semi-infinite programming (SIP) problems, CFSQP gives the user the option to use an algo...
A Penalized Fischer-Burmeister NCP-Function: Theoretical Investigation and Numerical Results
, 1997
Abstract

Cited by 43 (12 self)
We introduce a new NCP-function that reformulates a nonlinear complementarity problem as a system of semismooth equations Φ(x) = 0. The new NCP-function possesses all the nice properties of the Fischer-Burmeister function for local convergence. In addition, its natural merit function Ψ(x) = ½ Φ(x)ᵀΦ(x) has all the nice features of the Kanzow-Yamashita-Fukushima merit function for global convergence. In particular, the merit function has bounded level sets for a monotone complementarity problem with a strictly feasible point. This property allows the existing semismooth Newton methods to solve this important class of complementarity problems without additional assumptions. We investigate the properties of a semismooth Newton-type method based on the new NCP-function and apply the method to a large class of complementarity problems. The numerical results indicate that the new algorithm is extremely promising.
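One common way to write such a penalized NCP-function combines the Fischer-Burmeister term with a penalty on the positive parts of both arguments; the sketch below shows that construction under the assumption of a fixed penalty weight `lam` (the paper's exact constants are not reproduced here). Like the plain Fischer-Burmeister function, it vanishes exactly at complementary pairs, while the extra term improves the growth of the induced merit function.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def penalized_fb(a, b, lam=0.95):
    """A sketch of a penalized Fischer-Burmeister NCP-function,
    lam * fb(a, b) - (1 - lam) * max(a, 0) * max(b, 0) with lam in (0, 1).
    The penalty term is active only where both arguments are positive,
    i.e. exactly where plain fb under-penalizes lack of complementarity."""
    return lam * fb(a, b) - (1 - lam) * np.maximum(a, 0.0) * np.maximum(b, 0.0)

r_complementary = penalized_fb(1.0, 0.0)   # zero at a complementary pair
r_both_positive = penalized_fb(2.0, 3.0)   # strictly negative
r_negative_arg = penalized_fb(-1.0, 2.0)   # strictly positive
```

The sign pattern in the demo is the point: the function is zero only at complementary pairs, so stacking it componentwise over x and F(x) yields the semismooth system Φ(x) = 0.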
Inexact Spectral Projected Gradient methods on convex sets
 IMA Journal of Numerical Analysis
, 2003
Abstract

Cited by 35 (9 self)
A new method is introduced for large-scale convex constrained optimization. The general model algorithm involves, at each iteration, the approximate minimization of a convex quadratic on the feasible set of the original problem, and global convergence is obtained by means of nonmonotone line searches. A specific algorithm, the Inexact Spectral Projected Gradient method (ISPG), is implemented using inexact projections computed by Dykstra's alternating projection method and generates interior iterates. The ISPG method is a generalization of the Spectral Projected Gradient method (SPG), but can be used when projections are difficult to compute. Numerical results for constrained least-squares rectangular matrix problems are presented. Key words: convex constrained optimization, projected gradient, nonmonotone line search, spectral gradient, Dykstra's algorithm. AMS Subject Classification: 49M07, 49M10, 65K, 90C06, 90C20.
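Since the abstract leans on Dykstra's alternating projection method for the inexact projections, a minimal sketch may help (illustrative only; `dykstra` and the two example sets are assumptions, not the authors' code). Unlike plain alternating projection, Dykstra's correction terms make the iterates converge to the exact projection of the starting point onto the intersection, not merely to some point in it.

```python
import numpy as np

def dykstra(x0, projections, iters=200):
    """Dykstra's alternating projection method: cycles through the
    individual-set projections, carrying a correction term per set, and
    converges to the projection of x0 onto the intersection."""
    x = x0.astype(float)
    incs = [np.zeros_like(x) for _ in projections]   # per-set corrections
    for _ in range(iters):
        for i, P in enumerate(projections):
            y = P(x + incs[i])          # project the corrected point
            incs[i] = x + incs[i] - y   # update this set's correction
            x = y
    return x

# demo: project (2, 0.5) onto {0 <= x <= 1} intersected with {x1 + x2 <= 1}
box = lambda z: np.clip(z, 0.0, 1.0)
a, b = np.ones(2), 1.0
half = lambda z: z - max(0.0, a @ z - b) / (a @ a) * a
x = dykstra(np.array([2.0, 0.5]), [box, half])
```

For this pair of polyhedral sets the exact projection is (1, 0), which the iterates approach linearly; plain alternating projection from the same start would stop at a feasible but non-closest point.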
A Semismooth Newton Method For Variational Inequalities: Theoretical Results And Preliminary Numerical Experience
, 1997
Abstract

Cited by 34 (11 self)
Variational inequalities over sets defined by systems of equalities and inequalities are considered. A continuously differentiable merit function is proposed whose unconstrained minima coincide with the KKT points of the variational inequality. A detailed study of its properties is carried out, showing that under mild assumptions this reformulation possesses many desirable features. A simple algorithm is proposed for which it is possible to prove global convergence and a fast local convergence rate. Preliminary numerical results showing the viability of the approach are reported.
On the convergence of a sequential quadratic programming method with an augmented Lagrangian line search function
 Math. Operationsforschung und Statistik, Ser. Optimization
, 1983
Abstract

Cited by 32 (0 self)
Sequential quadratic programming (SQP) methods are widely used for solving practical optimization problems, especially in structural mechanics. The general structure of SQP methods is briefly introduced and it is shown how these methods can be adapted to distributed computing. However, SQP methods are sensitive to errors in function and gradient evaluations. Typically they break down with an error message reporting that the line search cannot be terminated successfully. In these cases, a new nonmonotone line search is activated. In the case of noisy function values, a drastic improvement in performance is achieved compared to the version with monotone line search. Numerical results are presented for a set of more than 300 standard test examples.