Results 1 - 3 of 3
Multiplier Methods: A Survey
, 1976
"... The purpose of this paper is to provide a survey of convergence and rate of convergence aspects of a cltass of recently proposed methods for constrained nfinimizationthe, socalled, multiplier methods. The results discussed highlight the operational aspects of multiplier methods and demonstrate th ..."
Abstract

Cited by 11 (0 self)
The purpose of this paper is to provide a survey of convergence and rate-of-convergence aspects of a class of recently proposed methods for constrained minimization: the so-called multiplier methods. The results discussed highlight the operational aspects of multiplier methods and demonstrate their significant advantages over ordinary penalty methods.
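The multiplier-method iteration surveyed above can be sketched in a few lines: minimize the augmented Lagrangian in x, then update the multiplier using the constraint residual. This is a minimal sketch on a toy equality-constrained problem; the problem, penalty parameter, stepsize, and iteration counts are illustrative assumptions, not taken from the survey.

```python
import numpy as np

# Toy problem (an assumption for illustration, not from the survey):
#   minimize f(x) = x1^2 + x2^2   subject to  h(x) = x1 + x2 - 1 = 0
# Solution: x* = (0.5, 0.5), multiplier lam* = -1.

def grad_f(x):
    return 2.0 * x

def h(x):
    return x[0] + x[1] - 1.0

grad_h = np.ones(2)  # gradient of the (linear) constraint

def method_of_multipliers(c=10.0, outer=20, inner=500, lr=0.01):
    x, lam = np.zeros(2), 0.0
    for _ in range(outer):
        # inner loop: approximately minimize the augmented Lagrangian
        #   L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2   over x
        for _ in range(inner):
            g = grad_f(x) + (lam + c * h(x)) * grad_h
            x = x - lr * g
        # multiplier update: lam <- lam + c * h(x)
        lam = lam + c * h(x)
    return x, lam

x, lam = method_of_multipliers()
```

Unlike a pure penalty method, the penalty parameter c can stay fixed here; the multiplier update absorbs the constraint violation, which is the advantage the survey highlights.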
On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
 Journal of Global Optimization
, 2006
"... We study convergence properties of a modified subgradient algorithm, applied to the dual problem defined by the sharp augmented Lagrangian. The primal problem we consider is nonconvex and nondifferentiable, with equality constraints. We obtain primal and dual convergence results, as well as a condit ..."
Abstract

Cited by 5 (2 self)
We study convergence properties of a modified subgradient algorithm, applied to the dual problem defined by the sharp augmented Lagrangian. The primal problem we consider is nonconvex and nondifferentiable, with equality constraints. We obtain primal and dual convergence results, as well as a condition for existence of a dual solution. Using a practical selection of the stepsize parameters, we demonstrate the algorithm and its advantages on test problems, including an integer programming and an optimal control problem. Key words: Nonconvex programming; nonsmooth optimization; augmented Lagrangian; sharp Lagrangian; subgradient optimization.
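The primal-dual structure described above can be illustrated with ordinary dual subgradient ascent: minimize the Lagrangian in x, then step the multiplier along the constraint residual, which is a subgradient of the dual function. This is a deliberately simplified sketch on a convex toy problem with a closed-form inner minimizer; the paper's modified algorithm, its sharp augmented Lagrangian, and its nonconvex setting are not reproduced here.

```python
import numpy as np

# Toy convex problem (an illustrative assumption):
#   minimize f(x) = x1^2 + x2^2   subject to  h(x) = x1 + x2 - 1 = 0

def x_of(lam):
    # closed-form minimizer of the ordinary Lagrangian
    #   L(x, lam) = f(x) + lam * h(x)
    return np.full(2, -lam / 2.0)

def dual_subgradient(iters=100, s0=1.0):
    lam = 0.0
    for k in range(1, iters + 1):
        x = x_of(lam)
        g = x[0] + x[1] - 1.0   # h(x(lam)) is a subgradient of the dual
        lam += (s0 / k) * g     # diminishing stepsize rule
    return lam, x_of(lam)

lam, x = dual_subgradient()
```

The dual iterate converges to lam* = -1 and the auxiliary primal sequence to x* = (0.5, 0.5); in the paper's nonconvex setting the sharp augmented Lagrangian plays the role that convexity plays here in closing the duality gap.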
A Deflected Subgradient Method Using a General Augmented Lagrangian Duality With Implications on Penalty Methods
, 2009
"... We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems. Our approach is to use a new variant of the deflected subgradient method for solving the dual problem. Our augmented Lagrangian function induces a primaldual method with strong duality, i.e., with z ..."
Abstract
We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems. Our approach is to use a new variant of the deflected subgradient method for solving the dual problem. Our augmented Lagrangian function induces a primal-dual method with strong duality, i.e., with zero duality gap. We prove that our method converges to a dual solution if and only if a dual solution exists. We also prove that all accumulation points of an auxiliary primal sequence are primal solutions. Our results apply, in particular, to classical penalty methods, since the penalty functions associated with these methods can be recovered as a special case of our augmented Lagrangians. Besides the classical augmenting terms given by the ℓ1- or ℓ2-norm forms, terms of many other forms can be used in our Lagrangian function. Using a practical selection of the stepsize parameters, as well as various choices of the augmenting term, we demonstrate the method on test problems. Our numerical experiments indicate that it is more favourable to use an augmenting term of an exponential form rather than the classical ℓ1- or ℓ2-norm forms.
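The choice of augmenting term discussed above can be illustrated with a pure penalty scheme, where the classical penalty functions arise as special cases. The sketch below compares ℓ1, ℓ2, and an exponential-style augmenting term on a one-dimensional toy problem; the toy problem, the grid search, and the exact shape of the exponential term are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Toy problem (assumed for illustration):
#   minimize f(x) = x^2   subject to  h(x) = x - 1 = 0,   solution x* = 1
f = lambda x: x ** 2
h = lambda x: x - 1.0

# Candidate augmenting terms sigma(t), each with sigma(0) = 0
augmenting_terms = {
    "l1":  lambda t: abs(t),
    "l2":  lambda t: t ** 2,
    "exp": lambda t: np.exp(abs(t)) - 1.0,  # assumed exponential shape
}

grid = np.linspace(-2.0, 3.0, 5001)  # crude grid search keeps this short

def penalized_argmin(sigma, c):
    # minimizer of the penalized objective f(x) + c * sigma(h(x))
    vals = f(grid) + c * np.array([sigma(h(x)) for x in grid])
    return grid[int(np.argmin(vals))]

minimizers = {name: penalized_argmin(sigma, c=100.0)
              for name, sigma in augmenting_terms.items()}
```

All three terms drive the minimizer toward x* = 1 as c grows; the nonsmooth ℓ1 and exponential terms are exact at finite c (their kink at 0 penalizes any violation at linear rate), while the smooth ℓ2 term leaves a residual violation of order 1/c.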