## An Inexact Modified Subgradient Algorithm for Nonconvex Optimization (2008)

Citations: 1 (0 self)

### BibTeX

@MISC{Burachik08aninexact,

author = {Regina S. Burachik and C. Yalçın Kaya and Musa Mammadov},

title = {An Inexact Modified Subgradient Algorithm for Nonconvex Optimization},

year = {2008}

}


### Abstract

We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang–bang control problem, under several different inexactness schemes.
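As a rough illustration of the scheme the abstract describes (dual updates driven by an inexactly minimized sharp augmented Lagrangian), here is a minimal self-contained sketch on a toy nonconvex problem. The toy problem, the grid-search inner solver, and the precise (u, c) update formulas are illustrative assumptions, not the paper's algorithm:

```python
# Sketch of a dual subgradient loop with inexact inner minimization,
# in the spirit of the IMSG idea. All names and the toy problem are
# illustrative, not the paper's code.
#
# Toy problem: min f0(x) = -x^2 over X = [-1, 1]
#              subject to f1(x) = x - 0.5 = 0
# (nonconvex objective; the solution is x* = 0.5 with f0(x*) = -0.25).

def sharp_aug_lagrangian(x, u, c):
    """Sharp augmented Lagrangian L(x, (u, c)) = f0(x) + c|f1(x)| - u f1(x)."""
    f0 = -x * x
    f1 = x - 0.5
    return f0 + c * abs(f1) - u * f1

def inexact_inner_min(u, c, n=201):
    """Approximate min of L(., (u, c)) over X = [-1, 1] by grid search.

    The inexactness level is on the order of the grid spacing, playing
    the role of the tolerance r_k in an inexact scheme.
    """
    xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    x_best = min(xs, key=lambda x: sharp_aug_lagrangian(x, u, c))
    return x_best, sharp_aug_lagrangian(x_best, u, c)

def imsg_sketch(eta=1.0, eps=0.1, max_iter=50, tol=1e-6):
    u, c = 0.0, 0.0
    H_hat = -0.25  # upper-bound estimate of the dual optimum
                   # (here: the cost at the known feasible point x = 0.5)
    x = 0.0
    for _ in range(max_iter):
        x, H = inexact_inner_min(u, c)
        f1 = x - 0.5
        if abs(f1) < tol:                    # (near-)feasible: stop
            break
        s = eta * (H_hat - H) / (f1 * f1)    # Polyak-type dynamic step
        u -= s * f1                          # multiplier update
        c += (s + eps) * abs(f1)             # penalty-parameter update
    return x, -x * x

x_star, f0_star = imsg_sketch()  # converges to (0.5, -0.25) on this toy problem
```

Despite the concave objective, the sharp augmented Lagrangian closes the duality gap on this compact set, and the loop reaches the feasible minimizer in a handful of iterations even though the inner solve is only grid-accurate.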

### Citations

3280 | Variational Analysis
- Rockafellar, Wets
- 1998
Citation Context ... A fundamental tool for tackling problem (P) is Lagrangian duality. Under certain classes of augmented Lagrangian schemes, the dual problem is nonsmooth and convex, and zero duality gap holds [20, 6]. In order to solve the dual problem one can typically use nonsmooth convex techniques such as subgradient methods and their extensions. One such extension is the modified subgradient (MSG) algorithm ... |

364 |
Iterative Methods for Linear and Nonlinear Equations
- Kelley
- 1995
Citation Context ... systems of equations arise from very important applications in many areas (see [25] and the references therein). Iterative methods have been developed for solving nonlinear systems of equations (see [11]). However, the most common and efficient methods for solving these equations involve optimization problems, in particular those in the form of (P) (see [23]). Each serious step in the MSG algorithm r... |

274 |
Minimization Methods for Nondifferentiable Functions
- Shor
- 1985
Citation Context ... (A1) sk ≥ (η(Ĥk − Hk) + θ rk)/‖fk‖², for some fixed η, θ > 0. (A2) The sequence {sk‖fk‖} is bounded. Assumption (A1) is in the spirit of the classical dynamic step-size rule for subgradient methods (see, e.g. [19, 22, 17, 15]). Assumption (A2) is used in [12, Theorem 4.1] in the context of approximate subgradient methods for coercive problems. Assumption (A2) ensures that the step-size sk remains small enough to guarantee... |
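The dynamic step-size condition (A1) quoted in this excerpt can be sketched as a one-line helper. The function name is hypothetical; η, θ, and the inexactness level r_k are as in the excerpt, Ĥ_k is an upper-bound estimate of the dual optimum, H_k the computed dual value, and ‖f_k‖² the squared norm of the current subgradient (deviation):

```python
def dynamic_step(H_hat_k, H_k, fk_norm_sq, eta=1.0, theta=1.0, r_k=0.0):
    """Polyak-type dynamic step-size with an inexactness allowance r_k.

    Returns (eta * (H_hat_k - H_k) + theta * r_k) / ||f_k||^2, the smallest
    step permitted by an (A1)-style condition; r_k = 0 recovers the
    classical exact-oracle rule.
    """
    return (eta * (H_hat_k - H_k) + theta * r_k) / fk_norm_sq

# Example: dual value H_k = -1.0, target estimate H_hat_k = -0.25,
# squared subgradient norm 2.25, exact oracle (r_k = 0):
s_k = dynamic_step(-0.25, -1.0, 2.25)  # 0.75 / 2.25
```

The θ r_k term enlarges the step to compensate for the inner solve being accurate only to within r_k, which is what lets the outer iteration tolerate an inexact oracle.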

254 | Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions
- Wright
- 1998
Citation Context ...ces where fminsearch would terminate immediately even for very small values of tolfun, but these instances could rather be related with the convergence properties of the Nelder-Mead simplex algorithm [13, 14, 18], which forms the basis of fminsearch. In practice both rk and tolfun provide a degree of accuracy of Hk, the optimum value of L(x, uk, ck), in the subproblem. So in our experiments we use tolfun in pl... |

61 | Incremental subgradient methods for nondifferentiable optimization
- Nedić, Bertsekas
- 2001
Citation Context ... a highly demanding task. In the IMSG algorithm, we consider a dynamic step-size in the spirit of the one introduced by Polyak [19], and further studied, e.g., by Brännlund [3] and Nedić and Bertsekas [17]. For a broad choice of the dynamic step-size, we prove dual and primal convergence (see Theorems 4.2 and 4.3). Moreover, we establish equivalence between existence of dual solutions and boundedness o... |

54 | Convergence of the Nelder-Mead simplex method to a nonstationary point
- McKinnon
- 1999
Citation Context ...ces where fminsearch would terminate immediately even for very small values of tolfun, but these instances could rather be related with the convergence properties of the Nelder-Mead simplex algorithm [13, 14, 18], which forms the basis of fminsearch. In practice both rk and tolfun provide a degree of accuracy of Hk, the optimum value of L(x, uk, ck), in the subproblem. So in our experiments we use tolfun in pl... |

39 |
Minimization of unsmooth functionals
- Polyak
- 1969
Citation Context ... the fact that finding a global minimum, even approximately, can still be a highly demanding task. In the IMSG algorithm, we consider a dynamic step-size in the spirit of the one introduced by Polyak [19], and further studied, e.g., by Brännlund [3] and Nedić and Bertsekas [17]. For a broad choice of the dynamic step-size, we prove dual and primal convergence (see Theorems 4.2 and 4.3). Moreover, we es... |

27 | Convergence of approximate and incremental subgradient methods for convex optimization - Kiwiel - 2006 |

21 | Global minimization using an augmented Lagrangian method with variable lower-level constraints, Mathematical Programming, 125 (2010), pp. 139–162
- Birgin, Floudas, et al.
Citation Context ... Therefore, it is convenient to develop a scheme which accepts approximate solutions of (5). Recently, methods for problem (P) which use approximate solutions of the Lagrangian dual were introduced in [24, 2, 15]. The Lagrangians studied in those papers, however, do not include the sharp augmented Lagrangian. Let r ≥ 0 and define the set Xr(u, c) := {x ∈ X : L(x, (u, c)) ≤ H(u, c) + r}. (6) In other words, x ∈ X... |

17 |
On the choice of step size in subgradient optimization
- Bazaraa, Sherali
- 1981
Citation Context ...e when H is not known, common practice is to use an estimate Ĥ of H. Typically, one uses an upper bound of H as the estimate Ĥ, which can be obtained by evaluating the cost at a feasible point. In [1, 21], a dynamic approach is taken: Ĥ is updated in each iteration. In our study, we consider a sequence {Ĥk} of upper bound estimates of H (see Proposition 5.1). The step-size parameter εk of the IMS... |
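The dynamic target update described in this excerpt (tightening the upper-bound estimate Ĥ whenever a feasible point is encountered) might be sketched as follows; the helper name and the numerical feasibility test are illustrative assumptions:

```python
def update_target(H_hat, f0_val, infeasibility, feas_tol=1e-8):
    """Tighten the upper-bound estimate H_hat of the dual optimum.

    If the current point is (numerically) feasible, its cost f0_val is a
    valid upper bound, so keep the smaller of the two estimates.
    """
    if infeasibility <= feas_tol:
        return min(H_hat, f0_val)
    return H_hat  # infeasible point: its cost proves nothing, keep the old bound

# A feasible point with cost 3.0 tightens a loose estimate of 10.0:
H_hat = update_target(10.0, 3.0, 0.0)    # -> 3.0
# An infeasible point leaves the estimate unchanged:
H_hat2 = update_target(10.0, -5.0, 0.3)  # -> 10.0
```

Monotonically tightening Ĥ in this way keeps the dynamic step-size rule well defined while the estimate converges toward the true optimal value from above.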

14 |
MINOS 5.4 User's Guide, Systems Optimization Laboratory
- Murtagh, Saunders
Citation Context ... MATLAB running on (single user) Windows XP Professional (Version 5.1) operating system with a 2.00 GHz Intel Pentium M processor with 1 GB of RAM. Problem 1 We consider a problem by Murtagh and Saunders [16, 7], which has also been solved using the MSG algorithm in [4]. min f0(x) = (x1 − 1)² + (x1 − x2)² + (x2 − x3)³ + (x3 − x4)⁴ + (x4 − x5)⁴ subject to f1(x) = x1 + x2² + x3³ − 3√2 − 2 = 0, f2(x) = x2 − x... |

11 |
Computational methods for time-optimal switching controls
- Kaya, Noakes
Citation Context ... r̂k much more closely. Problem 3 This problem concerns finding a time-optimal concatenation of bang–bang arcs, which takes a control system from an initial state to a terminal state (in minimum time) [10, 4]. min_ξ f0(ξ) = ξ1 + ξ2 + ξ3 + ξ4 subject to fi(ξ) = zi(ξ1 + ξ2 + ξ3 + ξ4) = 0, i = 1, 2, f3(ξ) = Σ_{k=1}^{4} min{0, ξk} = 0, where zi(ξ1 + ξ2 + ξ3 + ξ4), i = 1, 2, are the solution components of the ordinary diffe... |

11 |
A convergent variant of the Nelder-Mead algorithm
- Price, Coope, et al.
Citation Context ...ces where fminsearch would terminate immediately even for very small values of tolfun, but these instances could rather be related with the convergence properties of the Nelder-Mead simplex algorithm [13, 14, 18], which forms the basis of fminsearch. In practice both rk and tolfun provide a degree of accuracy of Hk, the optimum value of L(x, uk, ck), in the subproblem. So in our experiments we use tolfun in pl... |

9 |
A variable target value method for nondifferentiable optimization
- Sherali, Choi, et al.
Citation Context ...e when H is not known, common practice is to use an estimate Ĥ of H. Typically, one uses an upper bound of H as the estimate Ĥ, which can be obtained by evaluating the cost at a feasible point. In [1, 21], a dynamic approach is taken: Ĥ is updated in each iteration. In our study, we consider a sequence {Ĥk} of upper bound estimates of H (see Proposition 5.1). The step-size parameter εk of the IMS... |

7 |
Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming
- Gasimov
- 2002
Citation Context ... In order to solve the dual problem one can typically use nonsmooth convex techniques such as subgradient methods and their extensions. One such extension is the modified subgradient (MSG) algorithm [8, 9, 4, 5], which uses the sharp augmented Lagrangian [20]. In [4], it is shown that using the MSG algorithm, dual convergence is achieved, and under some additional conditions, convergence to a primal solution... |

6 |
A generalized subgradient method with relaxation step
- Brännlund
- 1995
Citation Context ...approximately, can still be a highly demanding task. In the IMSG algorithm, we consider a dynamic step-size in the spirit of the one introduced by Polyak [19], and further studied, e.g., by Brännlund [3] and Nedić and Bertsekas [17]. For a broad choice of the dynamic step-size, we prove dual and primal convergence (see Theorems 4.2 and 4.3). Moreover, we establish equivalence between existence of dual... |

3 | On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
- Burachik, Gasimov, et al.
- 2006
Citation Context ... In order to solve the dual problem one can typically use nonsmooth convex techniques such as subgradient methods and their extensions. One such extension is the modified subgradient (MSG) algorithm [8, 9, 4, 5], which uses the sharp augmented Lagrangian [20]. In [4], it is shown that using the MSG algorithm, dual convergence is achieved, and under some additional conditions, convergence to a primal solution... |

3 |
On the absence of duality gap for Lagrange-type functions
- Burachik, Rubinov
- 2005
Citation Context ... A fundamental tool for tackling problem (P) is Lagrangian duality. Under certain classes of augmented Lagrangian schemes, the dual problem is nonsmooth and convex, and zero duality gap holds [20, 6]. In order to solve the dual problem one can typically use nonsmooth convex techniques such as subgradient methods and their extensions. One such extension is the modified subgradient (MSG) algorithm ... |

2 |
Nonlinear Lagrange duality theorems and penalty function methods in continuous optimization
- Wang, Yang, et al.
- 2003
Citation Context ... Therefore, it is convenient to develop a scheme which accepts approximate solutions of (5). Recently, methods for problem (P) which use approximate solutions of the Lagrangian dual were introduced in [24, 2, 15]. The Lagrangians studied in those papers, however, do not include the sharp augmented Lagrangian. Let r ≥ 0 and define the set Xr(u, c) := {x ∈ X : L(x, (u, c)) ≤ H(u, c) + r}. (6) In other words, x ∈ X... |

2 |
Power Generation, Operation, and Control
- Wood, Wollenberg
- 1996
Citation Context ...orithm more clearly. Note that systems of equations arise from very important applications in many areas (see [25] and the references therein). Iterative methods have been developed for solving nonlinear systems of equations (see [11]). However, the most common and efficient methods for solving these equations in... |

1 | An update rule and a convergence result for a penalty function method - Burachik, Kaya - 2007 |

1 |
The modified subgradient method for equality constrained nonconvex optimization problems
- Gasimov, Ismayilova
- 2005
Citation Context ... In order to solve the dual problem one can typically use nonsmooth convex techniques such as subgradient methods and their extensions. One such extension is the modified subgradient (MSG) algorithm [8, 9, 4, 5], which uses the sharp augmented Lagrangian [20]. In [4], it is shown that using the MSG algorithm, dual convergence is achieved, and under some additional conditions, convergence to a primal solution... |

1 |
Approximate subgradient methods for nonlinearly constrained network flow problems
- Mijangos
- 2006
Citation Context ... Therefore, it is convenient to develop a scheme which accepts approximate solutions of (5). Recently, methods for problem (P) which use approximate solutions of the Lagrangian dual were introduced in [24, 2, 15]. The Lagrangians studied in those papers, however, do not include the sharp augmented Lagrangian. Let r ≥ 0 and define the set Xr(u, c) := {x ∈ X : L(x, (u, c)) ≤ H(u, c) + r}. (6) In other words, x ∈ X... |

1 |
The Lagrangian globalization method for nonsmooth constrained equations
- Tong, Qi, et al.
- 2006
Citation Context ...olving nonlinear systems of equations (see [11]). However, the most common and efficient methods for solving these equations involve optimization problems, in particular those in the form of (P) (see [23]). Each serious step in the MSG algorithm requires a solution of the global optimization problem (5) which is in general quite expensive to obtain (and in some cases unsuccessful). The IMSG algorithm ... |