Results 1–10 of 491
Solution sensitivity for Karush–Kuhn–Tucker systems with nonunique Lagrange multipliers
 Optimization
"... We consider Karush–Kuhn–Tucker (KKT) systems that depend on a parameter. Our contribution concerns the existence of a solution of the directionally perturbed KKT system approximating the given primal–dual base solution. To our knowledge, we give the first explicit result of this kind in the situation where the multiplier associated with the base primal solution may not be unique. The condition we employ can be interpreted as the 2-regularity property of a smooth reformulation of the KKT system. We also give an estimate that is strictly sharper than other statements in the literature ..."
Cited by 5 (4 self)
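The flavor of a parametric KKT system can be illustrated with a one-dimensional sketch (an assumed toy problem, not taken from the paper): minimize (x − t)² subject to x ≥ 0, whose KKT system is 2(x − t) − μ = 0, x ≥ 0, μ ≥ 0, μx = 0, with parameter t.

```python
# Assumed toy problem: min (x - t)^2  s.t.  x >= 0, parameterized by t.
# KKT system: 2(x - t) - mu = 0,  x >= 0,  mu >= 0,  mu * x = 0.

def kkt_residual(x, mu, t):
    """Max violation of the KKT system at the pair (x, mu) for parameter t."""
    stationarity = abs(2.0 * (x - t) - mu)
    primal_feas = max(0.0, -x)          # violation of x >= 0
    dual_feas = max(0.0, -mu)           # violation of mu >= 0
    complementarity = abs(mu * x)
    return max(stationarity, primal_feas, dual_feas, complementarity)

def kkt_solution(t):
    """Closed-form primal-dual solution as the parameter t varies."""
    return (t, 0.0) if t >= 0 else (0.0, -2.0 * t)
```

Note that for t < 0 the constraint is active and the multiplier μ = −2t varies with the parameter, while for t ≥ 0 it is inactive; the base solution at t = 0 is the kink where sensitivity analysis becomes delicate.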
On the Role of the Mangasarian–Fromovitz Constraint Qualification for Penalty, Exact Penalty and Lagrange Multiplier Methods
, 1997
"... In this paper we consider three embeddings (one-parametric optimization problems) motivated by penalty, exact penalty and Lagrange multiplier methods. We give an answer to the question under which conditions these methods are successful with an arbitrarily chosen starting point. Using the theory of ..."
Cited by 5 (0 self)
On attraction of Newton-type iterates to multipliers violating second-order sufficiency conditions
, 2009
"... Assuming that the primal part of the sequence generated by a Newton-type (e.g., SQP) method applied to an equality-constrained problem converges to a solution where the constraints are degenerate, we investigate whether the dual part of the sequence is attracted by those Lagrange multipliers which ..."
Cited by 17 (15 self)
Interior methods for nonlinear optimization
 SIAM Review
, 2002
"... Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization."
Cited by 125 (5 self)
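The classical barrier idea surveyed above can be sketched on an assumed toy instance (not the article's algorithm): minimize x² subject to x ≥ 1 by minimizing the log-barrier function φ_μ(x) = x² − μ ln(x − 1) for a decreasing sequence of barrier parameters μ, so the barrier minimizers trace a path toward the constrained solution x* = 1.

```python
import math

# Minimal log-barrier sketch (assumed toy problem, not from the article):
# minimize x^2 subject to x >= 1, via phi_mu(x) = x^2 - mu*ln(x - 1),
# driving mu -> 0.  Each barrier subproblem is solved by damped 1-D Newton.

def barrier_step(mu, x0, tol=1e-10):
    """Minimize phi_mu by Newton's method, staying strictly feasible."""
    x = x0
    for _ in range(100):
        g = 2.0 * x - mu / (x - 1.0)        # phi_mu'(x)
        if abs(g) < tol:
            break
        h = 2.0 + mu / (x - 1.0) ** 2       # phi_mu''(x) > 0 (convex)
        step = g / h
        while x - step <= 1.0:              # damp to keep x > 1
            step *= 0.5
        x -= step
    return x

def barrier_method(mu=1.0, shrink=0.1, n_outer=8):
    """Outer loop: warm-start each subproblem, shrink mu geometrically."""
    x = 2.0                                  # strictly feasible start
    for _ in range(n_outer):
        x = barrier_step(mu, x)
        mu *= shrink
    return x
```

The barrier minimizer satisfies 2x(x − 1) = μ, i.e. x ≈ 1 + μ/2 for small μ, so `barrier_method()` returns a point within about 5·10⁻⁸ of the solution after eight outer iterations.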
Multiplier convergence in trust-region methods with application to convergence of decomposition methods for MPECs
 Math. Program
"... We study piecewise decomposition methods for mathematical programs with equilibrium constraints (MPECs) for which all constraint functions are linear. At each iteration of a decomposition method, one step of a nonlinear programming scheme is applied to one piece of the MPEC to obtain the next iterate. Our goal is to understand global convergence to B-stationary points of these methods when the embedded nonlinear programming solver is a trust-region scheme, and the selection of pieces is determined using multipliers generated by solving the trust-region subproblem. To this end we study ..."
Cited by 3 (1 self)
Modified Wilson's Method For Nonlinear Programs With Nonunique Multipliers
, 1999
"... In this paper we deal with arbitrary nonlinear constraint functions. We first present a general framework for obtaining superlinear convergence of Newton-type methods for generalized equations with compact solution sets. Then our main aim is to show how this framework can be applied to the Karush–Kuhn–Tucker system and to derive conditions that imply local Q-quadratic convergence of a modified Wilson method but do not require uniqueness of the multiplier vector. This rate of convergence will be shown for the distances of the iterates to the set of KKT points. Josephy [8] proved that Newton's method ..."
Cited by 23 (3 self)
Nondifferentiable multiplier rules for optimization and bilevel optimization problems
 SIAM J. Optim
, 2004
"... In this paper we study optimization problems with equality and inequality constraints on a Banach space where the objective function and the binding constraints are either differentiable at the optimal solution or Lipschitz near the optimal solution. Necessary and sufficient optimality conditions and constraint qualifications in terms of the Michel–Penot subdifferential are given, and the results are applied to bilevel optimization problems."
Cited by 11 (6 self)
An Algorithm for MultiParametric Quadratic Programming and Explicit MPC Solutions
, 2001
"... Explicit solutions to constrained linear MPC problems can be obtained by solving multi-parametric quadratic programs (mpQP) where the parameters are the components of the state vector. We study the properties of the polyhedral partition of the state-space induced by the multi-parametric piecewise linear solution and propose a new mpQP solver. Compared to existing algorithms, our approach adopts a different exploration strategy for subdividing the parameter space, avoiding unnecessary partitioning and QP problem solving, with a significant improvement of efficiency."
Cited by 91 (21 self)
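The piecewise-affine structure of an explicit mpQP solution can be seen in a one-parameter toy instance (an assumed example, not the paper's algorithm or test problem): minimize ½u² subject to x − 1 ≤ u ≤ x + 1, where the scalar state x plays the role of the parameter. The parameter space splits into three critical regions (here, intervals), each carrying an affine control law.

```python
# Assumed toy mpQP: minimize 0.5*u^2  s.t.  x - 1 <= u <= x + 1,
# with the state x as the parameter.  The explicit solution u*(x) is
# piecewise affine over three critical regions of the parameter space.

def explicit_mpqp(x):
    """Explicit piecewise-affine optimizer u*(x) of the toy mpQP."""
    if x < -1.0:
        return x + 1.0      # region 1: upper bound u <= x + 1 active
    if x > 1.0:
        return x - 1.0      # region 3: lower bound u >= x - 1 active
    return 0.0              # region 2: unconstrained minimizer feasible
```

Precomputing such a region/law table offline and evaluating it online is exactly what makes explicit MPC attractive; the paper's contribution concerns how to enumerate the regions efficiently in higher dimensions.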
On The Accurate Identification Of Active Constraints
, 1996
"... We consider nonlinear programs with inequality constraints, and we focus on the problem of identifying those constraints which will be active at an isolated local solution. The correct identification of active constraints is important from both a theoretical and a practical point of view. Such an identification ... neither complementary slackness nor uniqueness of the multipliers. As an example of application of the new technique we present a local active-set Newton-type algorithm for the solution of general inequality-constrained problems for which Q-quadratic convergence of the primal variables can be proved under ..."
Cited by 63 (9 self)
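Threshold-based identification of active constraints can be sketched as follows (an illustrative scheme in the spirit of identification functions; the problem data and threshold choice are assumptions, not the paper's exact method): for min (x₁−1)² + (x₂+1)² subject to x ≥ 0, estimate the active set from an approximate primal–dual pair by comparing each variable against the square root of a KKT residual.

```python
import math

# Illustrative active-set identification for the assumed problem
# min (x1-1)^2 + (x2+1)^2  s.t.  x >= 0, whose solution is x* = (1, 0)
# with the second bound active and multiplier lambda* = (0, 2).

def kkt_residual(x, lam):
    """Max-norm residual of the KKT system at an approximate pair."""
    grad = (2.0 * (x[0] - 1.0) - lam[0],   # stationarity in x1
            2.0 * (x[1] + 1.0) - lam[1])   # stationarity in x2
    res = max(abs(g) for g in grad)
    for xi, li in zip(x, lam):
        # min(x_i, lambda_i) = 0 encodes feasibility + complementarity.
        res = max(res, abs(min(xi, li)))
    return res

def estimate_active_set(x, lam):
    """Indices whose bound is judged active, relative to sqrt(residual)."""
    rho = math.sqrt(kkt_residual(x, lam))
    return {i for i, xi in enumerate(x) if xi <= rho}
```

Because the threshold ρ shrinks more slowly than the residual as the iterates converge, a variable sitting O(residual) from its bound falls below ρ while an inactive variable stays above it, so the estimate stabilizes at the true active set near the solution.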