Results 1–6 of 6
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization
, 2008
Abstract

Cited by 4 (1 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
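The idea in the abstract can be illustrated on a toy problem: a minimal sketch (not the paper's implementation) of an inexact modified subgradient step on the sharp augmented Lagrangian. The step-size rule and the grid-search inner solver below are assumptions made for the sketch.

```python
# Toy problem:  minimize f(x) = x^2  subject to h(x) = x - 1 = 0  over [-2, 2].
# The step-size rule and grid-search inner solver are illustrative assumptions.

def f(x):
    return x * x

def h(x):
    return x - 1.0

def sharp_lagrangian(x, u, c):
    # Sharp augmented Lagrangian: f(x) + c*|h(x)| - u*h(x)
    return f(x) + c * abs(h(x)) - u * h(x)

def inexact_argmin(u, c, grid=2001):
    # Inexact inner minimization over the compact set [-2, 2]:
    # a finite grid search stands in for an approximate solver.
    pts = [-2.0 + 4.0 * i / (grid - 1) for i in range(grid)]
    return min(pts, key=lambda x: sharp_lagrangian(x, u, c))

def imsg(iters=50, step=0.5):
    u, c = 0.0, 0.0
    x = inexact_argmin(u, c)
    for _ in range(iters):
        if abs(h(x)) < 1e-9:                # feasible: stop
            break
        u = u - step * h(x)                 # subgradient step in the multiplier
        c = c + 2.0 * step * abs(h(x))      # grow the penalty parameter
        x = inexact_argmin(u, c)
    return x, u, c

x_best, u_best, c_best = imsg()
print(x_best)   # approaches the constrained minimizer x = 1
```

Even with the coarse inner solver, the iterates reach the feasible minimizer, which is the behavior the abstract's convergence result describes.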
The Performance of the Modified Subgradient Algorithm on Solving the 0–1 Quadratic Knapsack Problem
, 2008
Abstract

Cited by 2 (0 self)
In this study, the performance of the modified subgradient algorithm (MSG) in solving the 0–1 quadratic knapsack problem (QKP) was examined. The MSG was proposed by Gasimov for solving dual problems constructed with respect to the sharp augmented Lagrangian function. The MSG has some important proven properties: for example, it is convergent, and it guarantees zero duality gap for problems whose objective and constraint functions are all Lipschitz. Additionally, since it was first proposed, the MSG has been successfully used for solving nonconvex continuous problems and some combinatorial problems with equality constraints. In this study, the MSG was used to solve the QKP, which has an inequality constraint. The first step in solving the problem was converting the zero-one nonlinear QKP into a continuous nonlinear problem by adding only one constraint and no new variables. Second, in order to solve the continuous QKP, a dual problem with zero duality gap was constructed by using the sharp augmented Lagrangian function. Finally, the MSG was used to solve the dual problem, with the equality constraint taken into account in the computation of the norm. To compare the performance of the MSG with other methods, test instances from the relevant literature were solved both with the MSG and with three different MINLP solvers of the GAMS software. The results obtained are presented and discussed.
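The reformulation step described above can be sketched in a few lines: on the box 0 ≤ x_i ≤ 1, the single equality Σ x_i(1 − x_i) = 0 holds exactly when every x_i is 0 or 1, so the binary QKP becomes a continuous problem with one added constraint and no new variables. The 3-item instance (P, w, cap) below is made up for illustration.

```python
from itertools import product

def binarity_residual(x):
    # Each term x*(1 - x) is >= 0 on [0, 1] and zero only at x in {0, 1}.
    return sum(xi * (1.0 - xi) for xi in x)

def qkp_objective(x, P):
    # Quadratic knapsack profit x^T P x.
    n = len(x)
    return sum(P[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

P = [[3, 1, 0], [1, 2, 1], [0, 1, 4]]   # made-up profit matrix
w, cap = [2, 3, 4], 5                   # made-up weights and capacity

# Brute-force the binary problem; these are exactly the points the added
# equality keeps feasible in the continuous reformulation.
best_x, best_val = None, float("-inf")
for x in product((0, 1), repeat=3):
    if sum(wi * xi for wi, xi in zip(w, x)) <= cap:
        assert binarity_residual(x) == 0.0  # binary points satisfy the equality
        val = qkp_objective(x, P)
        if val > best_val:
            best_x, best_val = list(x), val

assert binarity_residual([0.5, 1.0, 0.0]) > 0  # fractional points do not
print(best_x, best_val)                        # -> [1, 1, 0] 7
```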
A Deflected Subgradient Method Using a General Augmented Lagrangian Duality With Implications on Penalty Methods
, 2009
Abstract
We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems. Our approach is to use a new variant of the deflected subgradient method for solving the dual problem. Our augmented Lagrangian function induces a primal–dual method with strong duality, i.e., with zero duality gap. We prove that our method converges to a dual solution if and only if a dual solution exists. We also prove that all accumulation points of an auxiliary primal sequence are primal solutions. Our results apply, in particular, to classical penalty methods, since the penalty functions associated with these methods can be recovered as a special case of our augmented Lagrangians. Besides the classical augmenting terms given by the ℓ1- or ℓ2-norm forms, terms of many other forms can be used in our Lagrangian function. Using a practical selection of the stepsize parameters, as well as various choices of the augmenting term, we demonstrate the method on test problems. Our numerical experiments indicate that it is more favourable to use an augmenting term of an exponential form rather than the classical ℓ1- or ℓ2-norm forms.
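The pluggable augmenting term can be sketched as follows: the Lagrangian has the shape L(x, u, c) = f(x) + c·σ(h(x)) − ⟨u, h(x)⟩, where σ is the augmenting term. The ℓ1 and ℓ2 choices below recover the classical forms; exp(‖h‖₁) − 1 is one possible exponential form, an assumption here since the abstract does not spell out the exact expression used.

```python
import math

def sigma_l1(h):
    return sum(abs(t) for t in h)

def sigma_l2(h):
    return math.sqrt(sum(t * t for t in h))

def sigma_exp(h):
    # One illustrative exponential augmenting term (an assumption).
    return math.exp(sum(abs(t) for t in h)) - 1.0

def augmented_lagrangian(f_x, h_x, u, c, sigma):
    # f(x) + c*sigma(h(x)) - <u, h(x)>, from precomputed f(x) and h(x).
    return f_x + c * sigma(h_x) - sum(ui * ti for ui, ti in zip(u, h_x))

# All three terms vanish at feasibility, so the Lagrangians agree there ...
print(sigma_l1([0.0]), sigma_l2([0.0]), sigma_exp([0.0]))  # -> 0.0 0.0 0.0
# ... but they penalize infeasibility at different rates.
print(sigma_l1([2.0]) < sigma_exp([2.0]))  # -> True
```

The faster growth of the exponential term away from feasibility is one plausible reading of why the experiments favour it over the norm forms.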
Optimization Over the Efficient Set of Multiobjective Convex Optimal Control Problems
, 2010
Abstract
We consider multiobjective convex optimal control problems. First we state a relationship between the (weakly or properly) efficient set of the multiobjective problem and the solution of the problem scalarized via a convex combination of objectives through a vector of parameters (or weights). Then we establish that (i) the solution of the scalarized (parametric) problem for any given parameter vector is unique and (weakly or properly) efficient and (ii) for each solution in the (weakly or properly) efficient set, there exists at least one corresponding parameter vector for the scalarized problem yielding the same solution. Therefore the set of all parametric solutions (obtained by solving the scalarized problem) is equal to the efficient set. Next we consider an additional objective over the efficient set. Based on the main result, the new objective can instead be considered over the (parametric) solution set of the scalarized problem. For the purpose of constructing numerical methods, we point to existing solution differentiability results for parametric optimal control problems. We propose numerical methods and give an example application to illustrate our approach.
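The scalarization result can be sketched on a convex bi-objective toy problem: sweeping the weight vector of the scalarized problem traces the efficient set, and an additional objective over that set becomes an optimization over the weights. The quadratics f1(x) = (x − a)², f2(x) = (x − b)² are made-up stand-ins for the control objectives; the scalarized problem then has a closed-form minimizer.

```python
def scalarized_minimizer(w1, w2, a=0.0, b=1.0):
    # argmin_x of w1*(x - a)**2 + w2*(x - b)**2 (set the derivative to zero).
    return (w1 * a + w2 * b) / (w1 + w2)

# Sweep the weight simplex; the parametric solutions cover the efficient
# segment [a, b] = [0, 1].
weights = [0.1 * k for k in range(11)]
efficient_points = [scalarized_minimizer(w, 1.0 - w) for w in weights]

# An additional objective g over the efficient set, optimized by searching
# over the parametric solutions (g here is a hypothetical example).
best = min(efficient_points, key=lambda x: (x - 0.3) ** 2)
print(round(min(efficient_points), 6), round(max(efficient_points), 6))  # -> 0.0 1.0
```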
ON GENERAL AUGMENTED LAGRANGIANS AND A MODIFIED SUBGRADIENT ALGORITHM
, 2009
Abstract
In this thesis we study a modified subgradient algorithm applied to the dual problem generated by augmented Lagrangians. We consider an optimization problem with equality constraints and study an exact version of the algorithm with a sharp Lagrangian in finite dimensional spaces. An inexact version of the algorithm is extended to infinite dimensional spaces, and we apply it to a dual problem of an extended real-valued optimization problem. The dual problem is constructed via augmented Lagrangians which include the sharp Lagrangian as a particular case. The sequences generated by these algorithms converge to a dual solution when the dual optimal solution set is nonempty. They have the property that all accumulation points of a primal sequence, obtained without extra cost, are primal solutions. We relate the convergence properties of these modified subgradient algorithms to differentiability of the dual function at a dual solution, and to the exact penalty property of these augmented Lagrangians. In the second part of this thesis, we propose and analyze a general augmented Lagrangian function, which includes several augmented Lagrangians considered in the literature. In this more general setting, we study a zero duality gap property, exact penalization, and convergence of a suboptimal path related to the dual problem.