## An Update Rule and a Convergence Result for a Penalty Function Method (2007)

### BibTeX

```bibtex
@MISC{Professor07anupdate,
  author = {Dedicated Professor and Alexander M. Rubinov},
  title  = {An Update Rule and a Convergence Result for a Penalty Function Method},
  year   = {2007}
}
```

### Abstract

We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results.

**Key words:** penalty function method, penalty parameter update, least exact penalty parameter, duality, nonsmooth optimization, nonconvex optimization.

**Mathematics Subject Classification:** 49M30; 49M29; 49M37; 90C26; 90C30.
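The abstract's core idea — penalizing constraint violation and re-solving an unconstrained problem while updating the penalty parameter between solves — can be illustrated with a minimal sketch of the *classical* scheme it compares against. The problem instance, the brute-force grid minimizer, and the geometric growth factor below are illustrative assumptions, not the authors' dual-information update rule:

```python
# Minimal sketch of a classical penalty method (illustrative only; the paper's
# dual-information-based update rule is not reproduced here).

def penalty_method(f0, f_plus, x_grid, d0=1.0, growth=10.0, tol=1e-8, max_iter=50):
    """Minimize f0 subject to f_plus(x) == 0 by solving a sequence of
    unconstrained penalized problems q(x, d) = f0(x) + d * f_plus(x)."""
    d = d0
    x = x_grid[0]
    for _ in range(max_iter):
        # Unconstrained solve: brute force over a grid (stand-in for a real solver).
        x = min(x_grid, key=lambda x: f0(x) + d * f_plus(x))
        if f_plus(x) <= tol:     # violation small enough: accept the minimizer
            return x, d
        d *= growth              # classical update: simply grow the parameter
    return x, d

# Example: minimize |x| over [-1, 1] subject to max(0, x) = 0 (solution x = 0).
grid = [i / 1000 - 1.0 for i in range(2001)]
x_star, d_star = penalty_method(abs, lambda x: max(0.0, x), grid)
```

The classical rule only ever increases `d`, which is exactly the ill-conditioning risk the paper's dual-information update is designed to mitigate.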

### Citations

3257 | Convex Analysis
- Rockafellar
- 1970
Citation Context: ...lds. By Theorem 3(a) we have that H(d) = M for every d > dmin. So H is constant on (dmin, ∞) and hence ∂H(d) = {0} for every d > dmin. Since the graph of the multifunction ∂H(·) is closed (see Theorem 24.4 in [14]), we must have 0 ∈ ∂H(dmin). Now using (a), we get 0 = ∇H(dmin) = ∂H(dmin). (21) On the other hand, by (13), f+(x̃) ∈ ∂H(dmin) for every x̃ ∈ X(dmin). Combining this with (21) we get f+(x̃) = 0 for ev... |

154 |
Text Examples for Nonlinear Programming Codes
- Hock, Schittkowski
- 1981
Citation Context: ...nce on the function value (TolFun) and the termination tolerance on the optimization variable (TolX) for fminsearch have been chosen as 10^-10. Problem 1: Consider test problem 62 (GLR-P1-1) from [11]: min f0(x) = −32.174 [255 ln((x1 + x2 + x3 + 0.03)/(0.09 x1 + x2 + x3 + 0.03)) + 280 ln((x2 + x3 + 0.03)/(0.07 x2 + x3 + 0.03)) + 290 ln((x3 + 0.03)/(0.13 x3 + 0.03))] subject to x1 + x2 + x3 = 1, 0 ≤ xi ≤... |

26 | An exact penalization viewpoint of constrained optimization - Burke - 1991 |

17 |
On the choice of step size in subgradient optimization
- Bazaraa, Sherali
- 1981
Citation Context: ... minimize |x| over all x in [−1, 1] satisfying f+(x) = max{0, x} = 0, so S(P) = {0} with M = 0. Moreover, it can be easily checked that H(d) = d + 1 for d ≤ −1 and H(d) = 0 otherwise. Therefore, X(d) = {1} for d < −1, X(d) = [0, 1] for d = −1, and X(d) = {0} for d > −1. One has dmin = −1 and clearly X(dmin) ⊄ S(P). The above example suggests that X(dmin) ⊄ S(P) whenever H is not differentiable at dmin. This fact is proved next. Theorem 4 Let H be as ... |
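The piecewise form of H in this excerpt can be checked numerically. The sketch below evaluates the penalized objective |x| + d·max{0, x} over a fine grid on [−1, 1]; the grid minimization is an illustrative stand-in for the exact one-dimensional computation:

```python
# Numerically check the excerpt's claim that
#   H(d) = min over x in [-1, 1] of |x| + d * max(0, x)
# equals d + 1 for d <= -1 and 0 otherwise.

def H(d, n=200001):
    grid = (2 * i / (n - 1) - 1.0 for i in range(n))   # fine grid on [-1, 1]
    return min(abs(x) + d * max(0.0, x) for x in grid)

# Spot checks against the claimed closed form.
for d, expected in [(-3.0, -2.0), (-1.0, 0.0), (0.5, 0.0), (2.0, 0.0)]:
    assert abs(H(d) - expected) < 1e-9, (d, H(d))
```

For d < −1 the penalty rewards violating the constraint (the minimizer sits at x = 1), which is precisely why X(dmin) fails to coincide with S(P) in this example.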

13 | The theory of max-min, with applications - Danskin - 1966 |

12 |
Nonlinear rescaling and proximal-like methods in convex programming
- Polyak, Teboulle
- 1997
Citation Context: ...uce numerical ill-conditioning when the exact penalty parameter reached is too large, which results in inaccuracies in the solution. In order to avoid ill-conditioning, a dynamic update of the penalty parameter based on dual information is proposed in [12, 13, 10]. These works analyse the case of convex and smooth problems. This poses the natural question of whether a penalty update can be de... |

12 | Lagrange-Type Functions in Constrained Non-Convex Optimization - Rubinov, Yang - 2003 |

6 |
Exact Penalty Methods, in: Algorithms for Continuous Optimization: the State-of-the-Art
- Di Pillo
- 1994
Citation Context: ...olves a constrained optimization problem by transforming it into a sequence of unconstrained ones. A detailed survey of penalty methods and their applications to nonlinear programming can be found in [2, 6, 4] and the references therein. In these methods, the original constrained problem is replaced by an unconstrained problem, whose objective function is the sum of a certain “merit” function (which reflec... |

6 |
Nonlinear rescaling vs. smoothing technique in convex optimization
- Polyak
- 2002
Citation Context: ...uce numerical ill-conditioning when the exact penalty parameter reached is too large, which results in inaccuracies in the solution. In order to avoid ill-conditioning, a dynamic update of the penalty parameter based on dual information is proposed in [12, 13, 10]. These works analyse the case of convex and smooth problems. This poses the natural question of whether a penalty update can be de... |

5 |
Survey of penalty, exact-penalty and multiplier methods from
- Boukary, Fiacco
- 1968
Citation Context: ...olves a constrained optimization problem by transforming it into a sequence of unconstrained ones. A detailed survey of penalty methods and their applications to nonlinear programming can be found in [2, 6, 4] and the references therein. In these methods, the original constrained problem is replaced by an unconstrained problem, whose objective function is the sum of a certain “merit” function (which reflec... |

3 | On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
- Burachik, Gasimov, et al.
- 2006
Citation Context: ...bgradient (MSG) algorithm, which was recently introduced in [8, 9] for tackling nonsmooth and nonconvex optimization problems subject to equality constraints. The MSG algorithm was further studied in [3], where convergence of the dual variables to a dual solution was proved. We use the updates of the MSG algorithm for deriving a simple update formula for the penalty parameter. Another aim of this art... |

3 |
Primal-dual nonlinear rescaling method with dynamic scaling parameter update
- Griva, Polyak
- 2006
Citation Context: ...uce numerical ill-conditioning when the exact penalty parameter reached is too large, which results in inaccuracies in the solution. In order to avoid ill-conditioning, a dynamic update of the penalty parameter based on dual information is proposed in [12, 13, 10]. These works analyse the case of convex and smooth problems. This poses the natural question of whether a penalty update can be de... |

2 |
Lagrange-type functions in constrained optimization
- Rubinov, Yang, et al.
- 2003
Citation Context: ...h problems. This poses the natural question of whether a penalty update can be designed that uses dual information successfully in the absence of convexity and/or smoothness assumptions. In [17, 18], Rubinov and his co-workers proposed a new kind of (generalized) penalty function for nonsmooth and nonconvex problems. Their scheme possesses an exact penalty parameter which turns out to be relativ... |

2 |
A variable target value method for nondifferentiable optimization
- Sherali, Choi, et al.
- 2000
Citation Context: ...problem [7]. Choosing the unknown optimal value H in (28) has been an issue in subgradient methods. In [1], H is chosen as a convex combination of a fixed upper bound and the current best dual value; [19] proposes the so-called variable target value method, which assumes no a priori knowledge regarding H. As in [3], we will use an upper bound estimate, denoted by Ĥ, for H. In many problems, Ĥ can be... |

1 |
Handbook of Test Problems in Local and Global Optimization
- Floudas, Pardalos, et al.
- 1999
Citation Context: ...es the knowledge of the optimal value H, which is at hand only in some special cases, for example when the problem of solving a nonlinear system of equations is reformulated as a minimization problem [7]. Choosing the unknown optimal value H in (28) has been an issue in subgradient methods. In [1], H is chosen as a convex combination of a fixed upper bound and the current best dual value; [19] propos... |

1 |
Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming
- Gasimov
- 2002
Citation Context: ... Lagrangian. This duality scheme has zero duality gap thanks to [15, Theorem 11.59]. Then we use a primal-dual scheme called the Modified Subgradient (MSG) algorithm, which was recently introduced in [8, 9] for tackling nonsmooth and nonconvex optimization problems subject to equality constraints. The MSG algorithm was further studied in [3], where convergence of the dual variables to a dual solution wa... |

1 |
The modified subgradient method for equality constrained nonconvex optimization problems
- Gasimov, Ismayilova
- 2005
Citation Context: ... Lagrangian. This duality scheme has zero duality gap thanks to [15, Theorem 11.59]. Then we use a primal-dual scheme called the Modified Subgradient (MSG) algorithm, which was recently introduced in [8, 9] for tackling nonsmooth and nonconvex optimization problems subject to equality constraints. The MSG algorithm was further studied in [3], where convergence of the dual variables to a dual solution wa... |

1 |
Penalty functions with a small penalty parameter: numerical experiments
- Rubinov, Yang, Bagirov
- 2003
Citation Context: ...h problems. This poses the natural question of whether a penalty update can be designed that uses dual information successfully in the absence of convexity and/or smoothness assumptions. In [17, 18], Rubinov and his co-workers proposed a new kind of (generalized) penalty function for nonsmooth and nonconvex problems. Their scheme possesses an exact penalty parameter which turns out to be relativ... |