Results 1–5 of 5
CONVERGENCE ANALYSIS OF DEFLECTED CONDITIONAL APPROXIMATE SUBGRADIENT METHODS
, 2009
"... Subgradient methods for nondifferentiable optimization benefit from deflection, i.e., defining the search direction as a combination of the previous direction and the current subgradient. In the constrained case they also benefit from projection of the search direction onto the feasible set prior to ..."
Abstract

Cited by 7 (4 self)
Subgradient methods for nondifferentiable optimization benefit from deflection, i.e., defining the search direction as a combination of the previous direction and the current subgradient. In the constrained case they also benefit from projection of the search direction onto the feasible set prior to computing the steplength, that is, from the use of conditional subgradient techniques. However, combining the two techniques is not straightforward, especially if only an inexact oracle is available, which can compute just approximate function values and subgradients. We present a convergence analysis of several different variants, both conceptual and implementable, of approximate conditional deflected subgradient methods. Our analysis extends the available results in the literature by using the main stepsize rules presented so far, while allowing deflection in a more flexible way. Furthermore, to allow for (diminishing/square-summable) rules where the stepsize is tightly controlled a priori, we propose a new class of deflection-restricted approaches where it is the deflection parameter, rather than the stepsize, which is dynamically adjusted using the “target value” of the optimization sequence. For both Polyak-type and diminishing/square-summable stepsizes, we propose a “correction” of the standard formula which shows that, in the inexact case, knowledge about the error committed by the oracle (which is available in several practical applications) can be exploited in order to strengthen the convergence properties of the method. The analysis allows for several variants of the algorithm; at least one of them is likely to show numerical performance similar to that of “heavy ball” subgradient methods, popular within backpropagation approaches to training neural networks, while possessing stronger convergence properties.
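The deflection idea described in the abstract can be sketched in a few lines. The code below is a minimal illustration and not any of the paper's analyzed variants: it projects the iterate (rather than the search direction, as the conditional variants do), and the test objective, the fixed deflection weight `alpha`, and the 1/k stepsize are my own illustrative choices.

```python
# Minimal sketch of a deflected projected subgradient method (illustrative only).
# Minimize f(x) = |x[0]| + 2*|x[1]| over the box [-1, 1]^2; the minimizer is (0, 0).

def subgrad(x):
    # a subgradient of f at x (the sign convention at 0 is chosen arbitrarily)
    return [1.0 if x[0] >= 0 else -1.0, 2.0 if x[1] >= 0 else -2.0]

def project(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^2
    return [min(max(v, lo), hi) for v in x]

def deflected_subgradient(x0, alpha=0.7, iters=200):
    x, d = list(x0), [0.0, 0.0]
    for k in range(1, iters + 1):
        g = subgrad(x)
        # deflection: combine the current subgradient with the previous direction
        d = [alpha * gi + (1 - alpha) * di for gi, di in zip(g, d)]
        step = 1.0 / k  # diminishing, square-summable stepsize rule
        x = project([xi - step * di for xi, di in zip(x, d)])
    return x

x = deflected_subgradient([0.9, -0.8])
```

With the diminishing stepsize the iterates oscillate around the minimizer with shrinking amplitude, so after 200 iterations the point is close to the origin.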
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization
, 2008
"... We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence ..."
Abstract

Cited by 2 (1 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may make it possible to solve problems with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
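An MSG-style dual iteration with the sharp augmented Lagrangian can be sketched on a toy problem. Everything concrete below is an illustrative assumption of mine, not taken from the paper: the one-dimensional test problem, the step constants `s` and `eps`, and the grid search, which plays the role of an inexact (discretized) inner minimization.

```python
# Toy sketch of a modified-subgradient-style dual update with the sharp
# augmented Lagrangian L(x, u, c) = f(x) + c*|h(x)| - u*h(x) (illustrative only).
# Problem: minimize f(x) = x^2 subject to h(x) = x - 1 = 0 on X = [-2, 2].

def sharp_lagrangian(x, u, c):
    h = x - 1.0
    return x * x + c * abs(h) - u * h

def msg(iters=25, s=0.5, eps=0.1):
    grid = [i / 100.0 for i in range(-200, 201)]  # compact X, discretized
    u, c = 0.0, 0.0
    x = 0.0
    for _ in range(iters):
        # inexact inner minimization: grid search over X
        x = min(grid, key=lambda t: sharp_lagrangian(t, u, c))
        h = x - 1.0
        u -= s * h               # multiplier step along -h(x_k)
        c += (s + eps) * abs(h)  # penalty parameter increases while infeasible
    return x, u, c

x, u, c = msg()
```

On this example the primal iterates approach the constrained minimizer x = 1 while the penalty parameter c grows past the exact-penalty threshold, after which the inner minimizer becomes feasible.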
An Update Rule and a Convergence Result for a Penalty Function Method
, 2007
"... We use a primaldual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain ..."
Abstract
We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under the differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results. Key words: penalty function method, penalty parameter update, least exact penalty parameter, duality, nonsmooth optimization, nonconvex optimization. Mathematics Subject Classification: 49M30; 49M29; 49M37; 90C26; 90C30.
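For context, the "classical" update rule the abstract compares against can be sketched as follows: increase the penalty parameter by a fixed multiplicative factor until the unconstrained minimizer of the penalized function is feasible. The paper's new dual-information rule is not reproduced here; the test problem, factor, and tolerance below are my own illustrative choices.

```python
# Classical penalty-parameter update (illustrative baseline, not the paper's rule).
# Problem: minimize f(x) = x subject to g(x) = 1 - x <= 0, searched over [-2, 2].

def penalized(x, c):
    # exact (l1-type) penalty function f(x) + c * max(0, g(x))
    return x + c * max(0.0, 1.0 - x)

def classical_penalty(c0=0.25, factor=10.0, tol=1e-6, max_rounds=20):
    grid = [i / 100.0 for i in range(-200, 201)]
    c = c0
    for _ in range(max_rounds):
        x = min(grid, key=lambda t: penalized(t, c))
        if max(0.0, 1.0 - x) <= tol:  # unconstrained minimizer is feasible: stop
            return x, c
        c *= factor                   # classical rule: fixed multiplicative increase
    return x, c

x, c = classical_penalty()
```

Once c exceeds the least exact penalty parameter (here 1), the penalized minimizer coincides with the constrained solution x = 1; the multiplicative rule typically overshoots that threshold, which is one motivation for smarter, dual-based updates.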
ON GENERAL AUGMENTED LAGRANGIANS AND A MODIFIED SUBGRADIENT ALGORITHM
, 2009
"... ii To my wife Leila, my daughter Yasmim and all my family iii iv In this thesis we study a modified subgradient algorithm applied to the dual problem generated by augmented Lagrangians. We consider an optimization problem with equality constraints and study an exact version of the algorithm with a ..."
Abstract
In this thesis we study a modified subgradient algorithm applied to the dual problem generated by augmented Lagrangians. We consider an optimization problem with equality constraints and study an exact version of the algorithm with a sharp Lagrangian in finite-dimensional spaces. An inexact version of the algorithm is extended to infinite-dimensional spaces, and we apply it to a dual problem of an extended real-valued optimization problem. The dual problem is constructed via augmented Lagrangians which include the sharp Lagrangian as a particular case. The sequences generated by these algorithms converge to a dual solution when the dual optimal solution set is nonempty. They have the property that all accumulation points of a primal sequence, obtained without extra cost, are primal solutions. We relate the convergence properties of these modified subgradient algorithms to differentiability of the dual function at a dual solution, and to the exact penalty property of these augmented Lagrangians. In the second part of this thesis, we propose and analyze a general augmented Lagrangian function, which includes several augmented Lagrangians considered in the literature. In this more general setting, we study a zero duality gap property, exact penalization, and convergence of a suboptimal path related to the dual problem.
The Performance of the Modified Subgradient Algorithm on Solving the 0–1 Quadratic Knapsack Problem
, 2008
"... Abstract. In this study, the performance of the modified subgradient algorithm (MSG) to solve the 0–1 quadratic knapsack problem (QKP) was examined. The MSG was proposed by Gasimov for solving dual problems constructed with respect to sharp Augmented Lagrangian function. The MSG has some important p ..."
Abstract
Abstract. In this study, the performance of the modified subgradient algorithm (MSG) in solving the 0–1 quadratic knapsack problem (QKP) was examined. The MSG was proposed by Gasimov for solving dual problems constructed with respect to the sharp augmented Lagrangian function. The MSG has some important proven properties: for example, it is convergent, and it guarantees a zero duality gap for problems whose objective and constraint functions are all Lipschitz. Additionally, the MSG has been successfully used for solving nonconvex continuous and some combinatorial problems with equality constraints since it was first proposed. In this study, the MSG was used to solve the QKP, which has an inequality constraint. The first step in solving the problem was converting the zero–one nonlinear QKP into a continuous nonlinear problem by adding only one constraint and no new variables. Second, in order to solve the continuous QKP, a dual problem with a “zero duality gap” was constructed by using the sharp augmented Lagrangian function. Finally, the MSG was used to solve the dual problem, by considering the equality constraint in the computation of the norm. To compare the performance of the MSG with some other methods, test instances from the relevant literature were solved both by using the MSG and by using three different MINLP solvers of the GAMS software. The results obtained were presented and discussed.
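The single-constraint conversion mentioned in the abstract can be illustrated with one standard device: relax x from {0,1}^n to [0,1]^n and add the equality constraint sum_j x_j(1 - x_j) = 0, which holds exactly when every component is binary. The specific constraint form and the toy objective below are my own assumptions, not taken from the paper.

```python
# Sketch of converting a 0-1 problem to a continuous one with a single
# equality constraint (a standard device, assumed here for illustration).

def binarity_constraint(x):
    # equals 0 exactly when x is a 0-1 vector; for x in [0,1]^n each term is >= 0
    return sum(v * (1.0 - v) for v in x)

def qkp_objective(x, profit, weight, capacity, big_m=1e6):
    # quadratic knapsack profit, with a crude penalty on the capacity
    # (inequality) constraint -- illustrative only, not the paper's dual scheme
    n = len(x)
    val = sum(profit[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    over = max(0.0, sum(w * v for w, v in zip(weight, x)) - capacity)
    return val - big_m * over

feasible = binarity_constraint([0, 1, 1, 0])      # binary point: constraint is 0
fractional = binarity_constraint([0.5, 1.0])      # fractional point: constraint > 0
value = qkp_objective([1, 1], [[1, 2], [2, 3]], [1, 1], 2)
```

The continuous problem with this extra equality constraint is then amenable to duals built from the sharp augmented Lagrangian, as the abstract describes.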