Results 1 – 7 of 7
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization, 2008
Abstract

Cited by 2 (1 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may make it possible to solve problems with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
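As a rough illustration of the ingredients involved (not the paper's IMSG specification: the toy problem, the sign convention, the tolerance schedule, and the update constants below are all assumptions of this sketch), an inexact dual ascent on a sharp augmented Lagrangian might look like:

```python
import numpy as np

# Toy equality-constrained problem (illustrative only, not the paper's test set):
#   minimize  f(x) = x**2  over the compact set [-2, 2]
#   subject to  h(x) = x - 1 = 0        (constrained optimum at x = 1).
f = lambda x: x ** 2
h = lambda x: x - 1.0

def sharp_lagrangian(x, u, c):
    # A sharp augmented Lagrangian in the sign convention of this sketch:
    # objective + multiplier term + nonsmooth penalty c * |h(x)|.
    return f(x) + u * h(x) + c * abs(h(x))

def inexact_argmin(u, c, eps):
    # Inexact inner minimization: evaluate on a grid of spacing eps, so the
    # returned point minimizes the Lagrangian only up to O(eps).
    grid = np.arange(-2.0, 2.0 + eps, eps)
    return grid[np.argmin(sharp_lagrangian(grid, u, c))]

u, c = 0.0, 0.0
for k in range(200):
    eps = 1.0 / (k + 1)              # tighten the inner tolerance over time
    x = inexact_argmin(u, c, eps)
    if abs(h(x)) < 1e-6:             # (near-)feasible: stop
        break
    s = 0.5 / (k + 1)                # diminishing dual step size
    u += s * h(x)                    # multiplier moves along the subgradient
    c += 2 * s * abs(h(x))           # penalty parameter grows monotonically

print(round(x, 2))  # approaches the constrained optimum at x = 1
```

The point of the inexactness is visible in `inexact_argmin`: the inner problem is never solved to optimality, only to a tolerance that shrinks as the outer iterations proceed.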
Improving traditional subgradient scheme for Lagrangean relaxation: an application to location problems
International Journal of Mathematical Algorithms, 1999
Abstract

Cited by 2 (2 self)
Lagrangean relaxation is widely used to solve combinatorial optimization problems. A known difficulty in applying Lagrangean relaxation is the definition of a convenient step-size control in subgradient-like methods. Even when theoretical convergence properties are preserved, a poorly defined control can degrade performance and increase computational times, a critical point in large-scale instances. We show in this work how to accelerate a classical subgradient method using the local information of the surrogate constraints relaxed in the Lagrangean relaxation. The result is a one-dimensional search that corrects the step size and is independent of the step-size control used. The application to Capacitated and Uncapacitated Facility Location problems is shown, and several computational tests confirm the superiority of this scheme. Key words: Location problems, Lagrangean relaxation, Subgradient method. 1. Introduction Facility location is the problem of locating a number of facilities from a s...
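For readers unfamiliar with the baseline being accelerated, here is a minimal sketch of a classical subgradient scheme with the usual step-size control (Polyak-type step, halving the parameter when the bound fails to improve). The two-variable instance is an illustrative assumption, not a facility-location model, and the paper's one-dimensional surrogate correction is not reproduced here:

```python
import numpy as np

# Toy problem:  min x1 + 2*x2   s.t.  x1 + x2 >= 1,  x binary.
c = np.array([1.0, 2.0])

def lagrangean_subproblem(u):
    # Relax the covering constraint with multiplier u >= 0: the subproblem
    # separates, and x_j = 1 exactly when its reduced cost c_j - u is negative.
    x = (c - u < 0).astype(float)
    value = float((c - u) @ x + u)    # dual function q(u)
    subgrad = 1.0 - x.sum()           # subgradient of q at u
    return value, subgrad

best_q, u, lam = -np.inf, 0.0, 2.0
upper_bound = 1.0                     # cost of the feasible point (1, 0)
for k in range(100):
    q, g = lagrangean_subproblem(u)
    if q > best_q + 1e-12:
        best_q = q                    # dual bound improved
    else:
        lam *= 0.5                    # step-size control: halve on failure
    if abs(g) < 1e-12 or lam < 1e-8:
        break
    step = lam * (upper_bound - q) / g ** 2   # Polyak-type step length
    u = max(0.0, u + step * g)

print(round(best_q, 4))  # best Lagrangean bound found
```

The `lam`-halving rule is exactly the kind of control whose misbehavior the abstract warns about: too aggressive and the steps collapse early, too loose and the multipliers oscillate.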
The volume algorithm: producing . . .
Math. Program., Ser. A 87: 385–399, 2000
Abstract
We present an extension to the subgradient algorithm that produces primal as well as dual solutions. It can be seen as a fast way to carry out an approximation of Dantzig–Wolfe decomposition, giving a fast method for producing approximations for large-scale linear programs. It is based on a new theorem in linear programming duality. We present successful experience with linear programs coming from set partitioning, set covering, max-cut and plant location. Key words: subgradient algorithm – Dantzig–Wolfe decomposition – large-scale linear programming
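A hedged sketch of the primal-averaging ingredient on an assumed one-variable LP (this shows only the idea of keeping a convex combination of subproblem solutions as a primal estimate; it is not the full volume algorithm with its specific dual update rules):

```python
# Toy LP (illustrative):  min x   s.t.  x >= 0.7,  0 <= x <= 1,
# whose primal optimum is x* = 0.7.

def subproblem(u):
    # Relaxing x >= 0.7 with multiplier u >= 0 gives
    #   min_{x in [0, 1]} (1 - u) * x + 0.7 * u,
    # solved at x = 1 when u > 1 and at x = 0 otherwise.
    x = 1.0 if u > 1.0 else 0.0
    return x, 0.7 - x                 # primal point, dual subgradient

u, alpha = 0.0, 0.01
x_bar = subproblem(u)[0]              # running primal estimate
for k in range(2000):
    x, g = subproblem(u)
    x_bar = alpha * x + (1 - alpha) * x_bar    # volume-style averaging
    u = max(0.0, u + g / (k + 1))              # diminishing-step dual ascent

print(round(x_bar, 1))  # the average approximates the primal optimum 0.7
```

Note that the raw subproblem solutions only ever take the extreme values 0 and 1; it is the convex combination `x_bar` that recovers an approximate primal solution, which is the phenomenon the abstract's duality theorem formalizes.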
ON GENERAL AUGMENTED LAGRANGIANS AND A MODIFIED SUBGRADIENT ALGORITHM, 2009
Abstract
In this thesis we study a modified subgradient algorithm applied to the dual problem generated by augmented Lagrangians. We consider an optimization problem with equality constraints and study an exact version of the algorithm with a sharp Lagrangian in finite-dimensional spaces. An inexact version of the algorithm is extended to infinite-dimensional spaces, and we apply it to a dual problem of an extended real-valued optimization problem. The dual problem is constructed via augmented Lagrangians which include the sharp Lagrangian as a particular case. The sequences generated by these algorithms converge to a dual solution when the dual optimal solution set is nonempty. They have the property that all accumulation points of a primal sequence, obtained without extra cost, are primal solutions. We relate the convergence properties of these modified subgradient algorithms to differentiability of the dual function at a dual solution and to the exact penalty property of these augmented Lagrangians. In the second part of this thesis, we propose and analyze a general augmented Lagrangian function, which includes several augmented Lagrangians considered in the literature. In this more general setting, we study a zero duality gap property, exact penalization, and convergence of a suboptimal path related to the dual problem.
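For orientation, the sharp Lagrangian mentioned here is commonly written as follows for a problem min f(x) s.t. h(x) = 0, x ∈ X (a standard form, given as a sketch; the thesis's more general family may differ in detail):

```latex
L(x, u, c) \;=\; f(x) \;+\; c\,\lVert h(x)\rVert \;-\; \langle u,\, h(x)\rangle,
\qquad
H(u, c) \;=\; \min_{x \in X} L(x, u, c),
```

and the dual problem maximizes H(u, c) over multipliers u and penalty parameters c ≥ 0. The nonsmooth term c‖h(x)‖ is what makes exact penalization possible for finite c.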
Using Logical Surrogate Information In Lagrangean Relaxation: An Application To Symmetric Traveling Salesman Problems
Abstract
The Traveling Salesman Problem (TSP) is a classical combinatorial optimization problem which has been intensively studied. Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound for new heuristics, mainly for very large-scale instances, where the use of exact methods is prohibitive. A known difficulty in applying Lagrangean relaxation is the definition of a convenient step-size control in subgradient-like methods. Even when theoretical convergence properties are preserved, a poorly defined control can hurt performance and increase computational times. We show in this work how to accelerate a classical subgradient method while conserving good approximations to the HK bounds. The surrogate and Lagrangean relaxations are combined using the local information of the relaxed constraints. The result is a one-dimensional search that corrects a possibly wrong step size and is independent of the step-size control used. Compared with the ordinary subgradient method, starting from the same initial multiplier, the computational times are almost twice as fast for medium instances and greatly improved for some large-scale TSPLIB instances. Key words: Lagrangean/surrogate relaxation, Traveling Salesman Problem, Subgradient method. 1.
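The one-dimensional step-length search described above can be sketched on a generic Lagrangean dual. The instance below is an assumed covering toy with a scalar multiplier, not the TSP 1-tree relaxation, and the search grid is an arbitrary choice for illustration:

```python
import numpy as np

# Toy problem:  min x1 + 2*x2 + 3*x3   s.t.  x1 + x2 + x3 >= 2,  x binary.
# (Integer optimum: x1 = x2 = 1, cost 3.)
c = np.array([1.0, 2.0, 3.0])

def q(u):
    # Lagrangean dual function for multiplier u >= 0, plus a subgradient.
    x = (c - u < 0).astype(float)
    return float((c - u) @ x + 2.0 * u), 2.0 - x.sum()

u = 0.0
for k in range(50):
    val, g = q(u)
    if abs(g) < 1e-12:
        break
    # Instead of trusting a fixed step-size rule, search the step length
    # along the subgradient direction: a one-dimensional dual maximization
    # that corrects a possibly bad step, whatever control produced it.
    ts = np.linspace(0.0, 2.0, 201)
    vals = [q(max(0.0, u + t * g))[0] for t in ts]
    u = max(0.0, u + ts[int(np.argmax(vals))] * g)

print(round(q(u)[0], 6))  # dual bound; equals the integer optimum 3 here
```

The extra subproblem evaluations per iteration are the price of the search; the abstract's claim is that, with cheap local information, the trade pays off on large instances.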
Using Local Surrogate Information In Lagrangean Relaxation: An Application To Symmetric Traveling Salesman Problems
Abstract
The Traveling Salesman Problem (TSP) is a classical combinatorial optimization problem that has been intensively studied. Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound for new heuristics, mainly for very large-scale instances, where the use of exact methods is prohibitive. A known difficulty in applying Lagrangean relaxation is the definition of a convenient step-size control in subgradient-like methods. Even when theoretical convergence properties are preserved, a poorly defined control can degrade performance and increase computational times, a critical point for large-scale instances. We show in this work how to accelerate a classical subgradient method while conserving good approximations to the HK bounds. The surrogate and Lagrangean relaxations are combined using ...
Using Local Surrogate Information In Lagrangean Relaxation: An Application To Symmetric Traveling Salesman Problems
Abstract
This paper studies the effects of local search on Lagrangean relaxation applied to the symmetric TSP. The local search is justified by viewing the Lagrangean multipliers as surrogate multipliers, adjusted through a local one-dimensional Lagrangean dual. The local search can be a straightforward one, giving in a few iterations a better one-dimensional multiplier than the usual Lagrangean multiplier (fixed at one). We hope that the Lagrangean/surrogate approach can be useful even for large-scale TSP instances, considering the importance of HK bounds for heuristic performance comparison [20, 21]. It is also important to note that the approach is independent of the step size and subgradient direction used (provided the convergence conditions are observed).