Results 1–8 of 8
CONVERGENCE ANALYSIS OF DEFLECTED CONDITIONAL APPROXIMATE SUBGRADIENT METHODS
, 2009
Abstract

Cited by 9 (4 self)
Subgradient methods for nondifferentiable optimization benefit from deflection, i.e., defining the search direction as a combination of the previous direction and the current subgradient. In the constrained case they also benefit from projection of the search direction onto the feasible set prior to computing the steplength, that is, from the use of conditional subgradient techniques. However, combining the two techniques is not straightforward, especially if only an inexact oracle is available, which computes approximate function values and subgradients. We present a convergence analysis of several different variants, both conceptual and implementable, of approximate conditional deflected subgradient methods. Our analysis extends the available results in the literature by using the main stepsize rules presented so far, while allowing deflection in a more flexible way. Furthermore, to allow for (diminishing/square-summable) rules where the stepsize is tightly controlled a priori, we propose a new class of deflection-restricted approaches where it is the deflection parameter, rather than the stepsize, that is dynamically adjusted using the “target value” of the optimization sequence. For both Polyak-type and diminishing/square-summable stepsizes, we propose a “correction” of the standard formula which shows that, in the inexact case, knowledge about the error committed by the oracle (which is available in several practical applications) can be exploited to strengthen the convergence properties of the method. The analysis allows for several variants of the algorithm; at least one of them is likely to show numerical performance similar to that of “heavy ball” subgradient methods, popular within backpropagation approaches to training neural networks, while possessing stronger convergence properties.
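The interplay of deflection and a target-value stepsize described in the abstract can be sketched in a few lines. Everything below (the fixed deflection parameter `gamma`, the stopping rule, and the unconstrained setting that omits the conditional projection step) is an illustrative simplification, not the paper's actual update rules.

```python
import numpy as np

def deflected_subgradient(f, subgrad, x0, f_target, gamma=0.5, max_iter=200):
    """Minimal sketch of a deflected subgradient method with a
    Polyak-type stepsize driven by a known target value f_target.
    The fixed gamma and the stopping test are illustrative choices."""
    x = np.asarray(x0, dtype=float)
    d = np.zeros_like(x)                      # previous search direction
    best = f(x)
    for _ in range(max_iter):
        g = np.asarray(subgrad(x), dtype=float)
        d = gamma * d + (1.0 - gamma) * g     # deflection: blend old direction with new subgradient
        dd = float(d @ d)
        if dd < 1e-16:
            break
        step = (f(x) - f_target) / dd         # Polyak-type stepsize using the target value
        x = x - step * d
        best = min(best, f(x))
    return x, best
```

On a smooth convex test function with the exact optimal value as target, this converges quickly; the paper's point is precisely what can still be guaranteed when `f`, `subgrad`, and `f_target` are only approximate.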
An Inexact Modified Subgradient Algorithm for Nonconvex Optimization
, 2008
Abstract

Cited by 4 (1 self)
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
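A rough sketch of an IMSG-style iteration for a single equality constraint h(x) = 0 over a compact set: the grid-based inner minimizer (inexact by construction) and the stepsize rule `s = 1/(c + 1)` are illustrative stand-ins, not the authors' exact scheme.

```python
import numpy as np

def imsg(f, h, grid, eps=0.1, max_iter=50, tol=1e-6):
    """Sketch of an inexact modified subgradient (IMSG) loop on the
    dual of a sharp augmented Lagrangian; `grid` discretizes the
    compact feasible box, so the inner minimization is inexact."""
    u, c = 0.0, 0.0
    x = grid[0]
    for _ in range(max_iter):
        # inexactly minimize the sharp augmented Lagrangian over the grid
        L = f(grid) + c * np.abs(h(grid)) - u * h(grid)
        x = grid[int(np.argmin(L))]
        hx = h(x)
        if abs(hx) <= tol:                 # near-feasible: stop
            break
        s = 1.0 / (c + 1.0)                # illustrative stepsize
        u = u - s * hx                     # multiplier update
        c = c + (s + eps) * abs(hx)        # penalty parameter increase
    return x, u, c
```

On the toy problem min x² subject to x = 0.5 over [-1, 1], the loop reaches the constrained minimizer in two iterations.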
The Performance of the Modified Subgradient Algorithm on Solving the 0–1 Quadratic Knapsack Problem
, 2008
Abstract

Cited by 2 (0 self)
In this study, the performance of the modified subgradient algorithm (MSG) in solving the 0–1 quadratic knapsack problem (QKP) was examined. The MSG was proposed by Gasimov for solving dual problems constructed with respect to the sharp augmented Lagrangian function. The MSG has some important proven properties: for example, it is convergent, and it guarantees a zero duality gap for problems whose objective and constraint functions are all Lipschitz. Additionally, the MSG has been successfully used for solving nonconvex continuous and some combinatorial problems with equality constraints since it was first proposed. In this study, the MSG was used to solve the QKP, which has an inequality constraint. The first step in solving the problem was converting the zero-one nonlinear QKP into a continuous nonlinear problem by adding only one constraint and no new variables. Second, in order to solve the continuous QKP, a dual problem with “zero duality gap” was constructed by using the sharp augmented Lagrangian function. Finally, the MSG was used to solve the dual problem, by considering the equality constraint in the computation of the norm. To compare the performance of the MSG with some other methods, test instances from the relevant literature were solved both with the MSG and with three different MINLP solvers of the GAMS software. The results obtained are presented and discussed.
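The reformulation step the abstract mentions, turning the 0–1 requirement into a single equality constraint with no new variables, is commonly realized as below; this function is an illustrative guess at that construction, not necessarily the authors' exact one.

```python
import numpy as np

def binarity_constraint(x):
    """h(x) = sum_i x_i * (1 - x_i): on the box [0, 1]^n every term is
    nonnegative, so h(x) = 0 holds exactly when each x_i is 0 or 1.
    One equality constraint thus replaces all the integrality
    conditions, with no extra variables."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * (1.0 - x)))
```

With this constraint in place, the continuous problem can be dualized via the sharp augmented Lagrangian and handed to the MSG, as the abstract describes.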
An Update Rule and a Convergence Result for a Penalty Function Method
, 2007
Abstract
We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under the differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results. Key words: Penalty function method, penalty parameter update, least exact penalty parameter, duality, nonsmooth optimization, nonconvex optimization. Mathematics Subject Classification: 49M30; 49M29; 49M37; 90C26; 90C30.
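For context, the classical sequential penalty framework that the proposed update rule modifies can be sketched as follows. The multiplicative update `c *= beta` shown here is the classical rule (the paper's dual-information-based rule is different), and `minimize_penalized` is a placeholder for an inner solver.

```python
def penalty_method(h, minimize_penalized, c0=1.0, beta=10.0,
                   tol=1e-6, max_iter=20):
    """Sketch of a sequential exact-penalty method for one equality
    constraint h(x) = 0: minimize f(x) + c*|h(x)| for increasing c
    until the minimizer is (nearly) feasible."""
    c = c0
    x = None
    for _ in range(max_iter):
        x = minimize_penalized(c)    # inner solve of the penalized problem
        if abs(h(x)) <= tol:         # feasible enough: exact penalty reached
            return x, c
        c *= beta                    # classical penalty increase
    return x, c
```

On min x² subject to x = 0.5, any c ≥ 1 is an exact penalty parameter, so a single inner solve at c = 1 already returns the constrained minimizer.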
A Deflected Subgradient Method Using a General Augmented Lagrangian Duality With Implications on Penalty Methods
, 2009
Abstract
We propose a duality scheme for solving constrained nonsmooth and nonconvex optimization problems. Our approach is to use a new variant of the deflected subgradient method for solving the dual problem. Our augmented Lagrangian function induces a primal-dual method with strong duality, i.e., with zero duality gap. We prove that our method converges to a dual solution if and only if a dual solution exists. We also prove that all accumulation points of an auxiliary primal sequence are primal solutions. Our results apply, in particular, to classical penalty methods, since the penalty functions associated with these methods can be recovered as a special case of our augmented Lagrangians. Besides the classical augmenting terms given by the 1- or 2-norm forms, terms of many other forms can be used in our Lagrangian function. Using a practical selection of the stepsize parameters, as well as various choices of the augmenting term, we demonstrate the method on test problems. Our numerical experiments indicate that it is more favourable to use an augmenting term of an exponential form rather than the classical 1- or 2-norm forms.
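The abstract contrasts classical 1- and 2-norm augmenting terms with exponential ones. The functions below illustrate the property such terms share, vanishing exactly on the feasible set h = 0 and being positive elsewhere; the exponential form shown is only a guessed shape, not necessarily the authors' formula.

```python
import numpy as np

# Candidate augmenting terms sigma(h) for an augmented Lagrangian of the
# general form L(x, u, c) = f(x) + c * sigma(h(x)) - u . h(x).

def sigma_l1(h):
    return float(np.sum(np.abs(h)))                  # classical 1-norm term

def sigma_l2(h):
    return float(np.sqrt(np.sum(np.square(h))))      # classical 2-norm (sharp) term

def sigma_exp(h):
    return float(np.sum(np.exp(np.abs(h)) - 1.0))    # exponential-type term (illustrative)
```

All three penalize infeasibility, but they grow at different rates away from h = 0, which is what the paper's numerical comparison of augmenting terms probes.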
Optimization Over the Efficient Set of Multiobjective Convex Optimal Control Problems
, 2010
Abstract
We consider multiobjective convex optimal control problems. First we state a relationship between the (weakly or properly) efficient set of the multiobjective problem and the solution of the problem scalarized via a convex combination of objectives through a vector of parameters (or weights). Then we establish that (i) the solution of the scalarized (parametric) problem for any given parameter vector is unique and (weakly or properly) efficient and (ii) for each solution in the (weakly or properly) efficient set, there exists at least one corresponding parameter vector for the scalarized problem yielding the same solution. Therefore the set of all parametric solutions (obtained by solving the scalarized problem) is equal to the efficient set. Next we consider an additional objective over the efficient set. Based on the main result, the new objective can instead be considered over the (parametric) solution set of the scalarized problem. For the purpose of constructing numerical methods, we point to existing solution differentiability results for parametric optimal control problems. We propose numerical methods and give an example application to illustrate our approach.
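The scalarization described in the abstract, a convex combination of the objectives through a weight vector, can be sketched as follows; the toy objectives and weights are made up for illustration.

```python
import numpy as np

def scalarize(objectives, weights):
    """Weighted-sum scalarization: builds the single objective
    w_1 f_1 + ... + w_m f_m for strictly positive weights summing
    to one. For convex objectives its minimizer is a (properly)
    efficient point of the multiobjective problem."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w > 0) and abs(w.sum() - 1.0) < 1e-12
    def F(x):
        return sum(wi * fi(x) for wi, fi in zip(w, objectives))
    return F

# Toy illustration: two convex criteria on the real line.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 1.0) ** 2
F = scalarize([f1, f2], [0.25, 0.75])   # minimizer x = 0.75 is efficient
```

Sweeping the weight vector over the simplex traces out the efficient set, which is the parametric solution set the paper then optimizes a further objective over.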
ON GENERAL AUGMENTED LAGRANGIANS AND A MODIFIED SUBGRADIENT ALGORITHM
, 2009
Abstract
In this thesis we study a modified subgradient algorithm applied to the dual problem generated by augmented Lagrangians. We consider an optimization problem with equality constraints and study an exact version of the algorithm with a sharp Lagrangian in finite-dimensional spaces. An inexact version of the algorithm is extended to infinite-dimensional spaces, and we apply it to a dual problem of an extended real-valued optimization problem. The dual problem is constructed via augmented Lagrangians which include the sharp Lagrangian as a particular case. The sequences generated by these algorithms converge to a dual solution when the dual optimal solution set is nonempty. They have the property that all accumulation points of a primal sequence, obtained without extra cost, are primal solutions. We relate the convergence properties of these modified subgradient algorithms to the differentiability of the dual function at a dual solution, and to the exact penalty property of these augmented Lagrangians. In the second part of this thesis, we propose and analyze a general augmented Lagrangian function, which includes several augmented Lagrangians considered in the literature. In this more general setting, we study a zero duality gap property, exact penalization, and convergence of a suboptimal path related to the dual problem.
Desirability Functions in Multiresponse Optimization
Abstract
Desirability functions (DFs) play an increasing role in optimizing process or product quality problems with several quality characteristics, the aim being a good compromise between these characteristics. Many alternative formulations of these functions, and solution strategies for handling their weaknesses and improving their strengths, have been suggested. Although the DFs of Derringer and Suich are the most popular ones in the multiple-response optimization literature, there is only a limited number of solution strategies for their optimization, and these need to be updated with new research results obtained in the area of nonlinear optimization. In this study, we elaborate on the piecewise differentiable structure of the DFs of Derringer and Suich.
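A minimal sketch of the Derringer–Suich one-sided desirability and the usual geometric-mean composite; the power-ramp form is the standard textbook version, and the parameter names (`low`, `target`, `r`) are illustrative. The kinks at `low` and `target` are exactly the piecewise differentiable structure the study analyzes.

```python
import numpy as np

def desirability_ltb(y, low, target, r=1.0):
    """Derringer-Suich one-sided ('larger the better') desirability:
    0 below `low`, 1 above `target`, a power ramp with exponent r
    in between; nondifferentiable at the two breakpoints."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** r

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities, the usual
    composite objective in multiresponse optimization; it is zero
    whenever any single response is fully undesirable."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))
```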