Results 1 - 6 of 6
Nonmonotone spectral projected gradient methods on convex sets
SIAM Journal on Optimization, 2000
Cited by 135 (25 self)
Abstract. Nonmonotone projected gradient techniques are considered for the minimization of differentiable functions on closed convex sets. The classical projected gradient schemes are extended to include a nonmonotone steplength strategy that is based on the Grippo–Lampariello–Lucidi nonmonotone line search. In particular, the nonmonotone strategy is combined with the spectral gradient choice of steplength to accelerate the convergence process. In addition to the classical projected gradient nonlinear path, the feasible spectral projected gradient is used as a search direction to avoid additional trial projections during the one-dimensional search process. Convergence properties and extensive numerical results are presented.
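As a rough illustration of the method this abstract describes, here is a minimal Python sketch of a nonmonotone spectral projected gradient iteration: a Barzilai-Borwein spectral steplength, a Grippo-Lampariello-Lucidi-style memory of the last M function values, and backtracking along the feasible SPG direction (so the line search needs no extra projections). Function names, parameter values, and safeguards are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spg(f, grad, project, x0, max_iter=1000, M=10, gamma=1e-4, tol=1e-6):
    """Sketch of a nonmonotone spectral projected gradient method."""
    x = project(np.asarray(x0, float))
    g = grad(x)
    alpha = 1.0                      # initial spectral steplength
    f_hist = [f(x)]                  # last M function values (GLL memory)
    for _ in range(max_iter):
        d = project(x - alpha * g) - x          # feasible SPG direction
        if np.linalg.norm(d, np.inf) < tol:
            break
        f_ref = max(f_hist)                     # nonmonotone reference value
        lam = 1.0
        while f(x + lam * d) > f_ref + gamma * lam * g.dot(d):
            lam *= 0.5                          # backtrack along d: no projections here
        s = lam * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 0 else 1.0  # Barzilai-Borwein spectral step
        alpha = min(max(alpha, 1e-10), 1e10)      # safeguard the steplength
        x, g = x_new, g_new
        f_hist = (f_hist + [f(x)])[-M:]
    return x
```

For example, minimizing the distance to a point outside the box [0, 1]^2 with `project` set to coordinatewise clipping converges to the nearest boundary point in a couple of iterations.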
SPG: Software for Convex-Constrained Optimization, 2001
Cited by 12 (4 self)
In this paper we describe Fortran 77 software that implements the nonmonotone spectral projected gradient (SPG) algorithm. The SPG method applies to problems of the form min f(x) subject to x ∈ Ω, where Ω is a closed convex set in R^n. It is assumed that f is defined and has continuous partial derivatives on an open set that contains Ω. Users of the software must supply subroutines to compute the function f(x), the gradient ∇f(x), and projections of an arbitrary point x onto Ω. Information about the Hessian matrix is not required and the storage requirements are minimal. Therefore, the algorithm is appropriate for large-scale convex-constrained optimization problems with affordable projections onto the feasible set. Notice that the algorithm is also suitable for unconstrained optimization problems simply by setting Ω = R^n.
On the convergence properties of the projected gradient method for convex optimization
Comput. Appl. Math.
Cited by 6 (0 self)
Abstract. When applied to an unconstrained minimization problem with a convex objective, the steepest descent method has stronger convergence properties than in the nonconvex case: the whole sequence converges to an optimal solution under the only hypothesis of existence of minimizers (i.e. without assuming e.g. boundedness of the level sets). In this paper we look at the projected gradient method for constrained convex minimization. Convergence of the whole sequence to a minimizer assuming only existence of solutions has already been established for the variant in which the stepsizes are exogenously given and square summable. In this paper, we prove the result for the more standard (and also more efficient) variant, namely the one in which the stepsizes are determined through an Armijo search. Mathematical subject classification: 90C25, 90C30. Key words: projected gradient method, convex optimization, quasi-Fejér convergence.
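For contrast with the spectral variant above, the classical projected gradient method with an Armijo search along the projection arc can be sketched as follows. Note that every trial steplength requires its own projection, which is exactly the cost the feasible-direction SPG search avoids. The stationarity test and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def projected_gradient_armijo(f, grad, project, x0, beta=0.5, gamma=1e-4,
                              max_iter=1000, tol=1e-8):
    """Projected gradient with an Armijo search along the projection arc."""
    x = project(np.asarray(x0, float))
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(project(x - g) - x) < tol:   # stationarity measure
            break
        lam = 1.0
        while True:
            x_trial = project(x - lam * g)             # one projection per trial step
            if f(x_trial) <= f(x) + gamma * g.dot(x_trial - x):
                break                                  # Armijo condition satisfied
            lam *= beta
        x = x_trial
    return x
```

On a convex quadratic over a box this recovers the same constrained minimizer as the SPG sketch, in line with the full-sequence convergence result the abstract states for the convex case.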
Partial Spectral Projected Gradient Method with Active-Set Strategy for Linearly Constrained Optimization, 2009
Cited by 4 (0 self)
A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithms. Convergence proofs are given and numerical experiments are included and commented. Software supporting this paper is available through the Tango
Gradient method with dynamical retards for large-scale optimization problems
Electronic Transactions on Numerical Analysis (ETNA), 2003
Cited by 3 (1 self)
Abstract. We consider a generalization of the gradient method with retards for the solution of large-scale unconstrained optimization problems. Recently, the gradient method with retards was introduced to find global minimizers of large-scale quadratic functions. The most interesting feature of this method is that it does not require a decrease in the objective function, which allows fast local convergence. On the other hand, nonmonotone globalization strategies, which preserve local behavior for the nonquadratic case, have proved to be very effective when associated with low-storage methods. In this work, the gradient method with retards is generalized and combined in a dynamical way with nonmonotone globalization strategies to obtain a new method for minimizing nonquadratic functions that can deal efficiently with large problems. Encouraging numerical experiments on well-known test problems are presented. Key words. spectral gradient method, nonmonotone line search, Barzilai-Borwein method, Polak-Ribière method, Rayleigh quotient.
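The retard idea described above can be sketched in a few lines: the spectral (Barzilai-Borwein) steplength applied at iteration k is taken from an earlier, "retarded" iteration. The cyclic delay schedule and the memory length m below are illustrative choices, not the paper's actual retard rule, and no globalization strategy is included.

```python
import numpy as np

def gradient_with_retards(grad, x0, m=4, max_iter=5000, tol=1e-8):
    """Sketch of a gradient iteration whose spectral (Barzilai-Borwein)
    steplength may come from a retarded (earlier) iteration."""
    x = np.asarray(x0, float)
    g = grad(x)
    steps = [1.0]                        # memory of past spectral steplengths
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        delay = k % m                    # cyclic retard: 0, 1, ..., m-1
        alpha = steps[-1 - delay] if delay < len(steps) else steps[0]
        x_new = x - alpha * g            # plain gradient step, no line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        steps.append(s.dot(s) / sy if sy > 0 else 1.0)  # BB steplength
        x, g = x_new, g_new
    return x
```

On a well-conditioned convex quadratic every spectral steplength lies between the reciprocals of the extreme eigenvalues, so the iteration contracts regardless of the delay pattern; for harder nonquadratic problems this is where the nonmonotone globalization the abstract mentions becomes necessary.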
Minimization Subproblems and Heuristics for an Applied Clustering Problem, 2001
Cited by 2 (1 self)
A practical problem that requires the classification of a set of points of R^n using a criterion not sensitive to bounded outliers is studied in this paper. A fixed-point (k-means) algorithm is defined that uses an arbitrary distance function. Finite convergence is proved. A robust distance defined by Boente, Fraiman and Yohai is selected for applications. Smooth approximations of this distance are defined and suitable heuristics are introduced to enhance the probability of finding global optimizers. A real-life example is presented and commented.
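A fixed-point (k-means) iteration with a pluggable distance function, as described above, can be sketched as follows. The mean-based center update is exact only for the squared Euclidean distance; the robust Boente-Fraiman-Yohai distance used in the paper would need its own inner minimization, so this is a sketch under that assumption. Stopping when the labels stop changing reflects the finite-convergence property the abstract states.

```python
import numpy as np

def kmeans_fixed_point(points, centers, dist, max_iter=100):
    """Fixed-point (k-means) iteration with an arbitrary distance function.
    dist(p, c) -> scalar. The mean update assumes squared Euclidean distance."""
    points = np.asarray(points, float)
    centers = np.array(centers, float)
    labels = None
    for _ in range(max_iter):
        # Assignment step: each point goes to its nearest center.
        d = np.array([[dist(p, c) for c in centers] for p in points])
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                       # fixed point reached: finite convergence
        labels = new_labels
        # Update step: recenter each nonempty cluster.
        for j in range(len(centers)):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels
```

With two well-separated groups of points the iteration reaches its fixed point after a single reassignment; swapping in a robust distance changes only the `dist` argument and the update step.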