Results 1–10 of 26
Robust Solutions To Uncertain Semidefinite Programs
 SIAM J. OPTIMIZATION
, 1998
Abstract

Cited by 86 (8 self)
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.
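The setting this abstract describes can be sketched in generic notation (illustrative only, not the paper's own formulation): a robust counterpart of an uncertain SDP takes the form

```latex
\min_{x \in \mathbb{R}^n} \; \sup_{\delta \in \Delta} \; c(\delta)^{T} x
\quad \text{subject to} \quad
F(x,\delta) \;=\; F_0(\delta) + \sum_{i=1}^{n} x_i F_i(\delta) \;\succeq\; 0
\quad \text{for all } \delta \in \Delta,
```

where $\Delta$ is the given bounded perturbation set and the matrices $F_i(\delta)$ depend rationally on $\delta$; a robust solution must remain feasible for every admissible perturbation.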
Robust Solutions To Uncertain Semidefinite Programs
, 1998
Abstract

Cited by 62 (3 self)
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.
Optimization Problems with Perturbations: A Guided Tour
 SIAM REVIEW
, 1996
Abstract

Cited by 48 (10 self)
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow us to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.
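The kind of expansion studied in this survey can be illustrated by a standard first-order formula for the optimal value $v(u)$ of a problem perturbed in direction $u$ (a generic statement, not a claim about this paper's exact assumptions):

```latex
v(t u) \;=\; v(0) \;+\; t \, \min_{\bar{x} \in S(0)} \; \sup_{\lambda \in \Lambda(\bar{x})} \; D_u L(\bar{x}, \lambda, 0) \;+\; o(t),
```

where $S(0)$ is the solution set of the unperturbed problem, $\Lambda(\bar{x})$ its set of Lagrange multipliers, and $L$ the Lagrangian; the upper and lower estimates emphasized in the paper bracket the value function even in cases where $\Lambda(\bar{x})$ is unbounded or empty and such a formula must be modified.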
Second-order analysis of optimal control problems with control and initial-final state constraints
, 2008
Sensitivity Analysis of Optimization Problems Under Second Order Regular Constraints
, 1996
Abstract

Cited by 20 (6 self)
We present a perturbation theory for finite-dimensional optimization problems subject to abstract constraints satisfying a second-order regularity condition. We derive Lipschitz and Hölder expansions of approximate optimal solutions, under a directional constraint qualification hypothesis and various second-order sufficient conditions that take into account the curvature of the set defining the constraints of the problem. We also show how the theory applies to semidefinite optimization and, more generally, to semi-infinite programs in which the contact set is a smooth manifold and the quadratic growth condition in the constraint space holds. As a final application, we provide a result on differentiability of metric projections in finite-dimensional spaces.
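The two regimes named in this abstract can be written schematically (generic notation): for a perturbation of size $t$, an approximate optimal solution $x(t)$ satisfies

```latex
x(t) \;=\; \bar{x} + O(t) \quad \text{(Lipschitz expansion)}
\qquad \text{or} \qquad
x(t) \;=\; \bar{x} + O\!\left(t^{1/2}\right) \quad \text{(H\"older expansion)},
```

where $\bar{x}$ solves the unperturbed problem; the weaker Hölder rate is the typical price paid when the curvature of the constraint set enters the second-order analysis.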
On uniqueness of Lagrange multipliers in optimization problems subject to cone constraints
Abstract

Cited by 11 (4 self)
In this paper we study uniqueness of Lagrange multipliers in optimization problems subject to cone constraints. The main tool in our investigation of this question will be a calculus of dual (polar) cones. We give sufficient, and in some cases necessary, conditions for uniqueness of Lagrange multipliers in general Banach spaces. General results are then applied to two particular examples, the semidefinite and semi-infinite programming problems, respectively.
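For orientation, the cone-constrained setting and its multipliers can be sketched as follows (standard notation, not the paper's): for the problem $\min_x f(x)$ subject to $G(x) \in K$, a Lagrange multiplier at a stationary point $\bar{x}$ is an element $\lambda$ satisfying

```latex
\nabla f(\bar{x}) + DG(\bar{x})^{*}\lambda \;=\; 0,
\qquad
\lambda \in K^{-},
\qquad
\langle \lambda,\, G(\bar{x})\rangle \;=\; 0,
```

where $K^{-} = \{\lambda : \langle \lambda, k\rangle \le 0 \text{ for all } k \in K\}$ is the polar cone of $K$; uniqueness of $\lambda$ then hinges on calculus rules for such polar cones, the paper's main tool.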
Optimality conditions for irregular inequality-constrained problems
 SIAM J. Control Optim
Abstract

Cited by 9 (8 self)
Abstract. We consider feasible sets given by conic constraints, where the cone defining the constraints is convex with nonempty interior. We study the case where the feasible set is not assumed to be regular in the classical sense of Robinson and obtain a constructive description of the tangent cone under a certain new second-order regularity condition. This condition contains classical regularity as a special case, while being weaker when constraints are twice differentiable. Assuming that the cone defining the constraints is finitely generated, we also derive a special form of primal-dual optimality conditions for the corresponding constrained optimization problem. Our results subsume optimality conditions for both the classical regular and second-order regular cases, while still being meaningful in the more general setting in the sense that the multiplier associated with the objective function is nonzero.
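The classical regularity condition of Robinson, which this paper relaxes, can be recalled in its standard form (generic notation): for a constraint $G(x) \in K$ with $G : X \to Y$ and $K \subset Y$ a closed convex cone,

```latex
0 \;\in\; \operatorname{int}\bigl(\, G(\bar{x}) + DG(\bar{x})\,X - K \,\bigr),
```

which guarantees that the tangent cone to the feasible set at $\bar{x}$ is the linearization cone; the second-order condition introduced here describes the tangent cone constructively when this interiority requirement fails.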
INEXACT JOSEPHY–NEWTON FRAMEWORK FOR GENERALIZED EQUATIONS AND ITS APPLICATIONS TO LOCAL ANALYSIS OF NEWTONIAN METHODS FOR CONSTRAINED OPTIMIZATION
, 2008
Abstract

Cited by 8 (6 self)
We propose and analyze a perturbed version of the classical Josephy–Newton method for solving generalized equations. This perturbed framework is convenient for treating in a unified way standard sequential quadratic programming, its stabilized version, sequential quadratically constrained quadratic programming, and linearly constrained Lagrangian methods. For the linearly constrained Lagrangian methods, in particular, we obtain superlinear convergence under the second-order sufficient optimality condition and the strict Mangasarian–Fromovitz constraint qualification, while previous results in the literature assume (in addition to second-order sufficiency) the stronger linear independence constraint qualification as well as the strict complementarity condition. For the sequential quadratically constrained quadratic programming methods, we prove primal-dual superlinear/quadratic convergence under the same assumptions as above, which also gives a new result.
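The Josephy–Newton scheme for a generalized equation can be sketched in its standard form (the inexact framework of this paper adds a perturbation term to the iteration):

```latex
0 \in \Phi(z) + N(z)
\qquad \leadsto \qquad
0 \in \Phi(z_k) + \Phi'(z_k)\,(z_{k+1} - z_k) + N(z_{k+1}),
```

where $\Phi$ is smooth and $N$ is a set-valued mapping (for example, the normal cone map that encodes a KKT system); the SQP-type methods listed in the abstract correspond to particular choices of $\Phi$, $N$, and the perturbation.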
No-gap second-order optimality conditions for optimal control problems with a single state constraint and control
 Math. Program., 117(1–2, Ser. B):21–50
, 2009
Abstract

Cited by 7 (3 self)
The paper deals with optimal control problems with only one control variable and one state constraint, of arbitrary order. We consider the case of finitely many boundary arcs and touch times. We obtain a no-gap theory of second-order conditions, allowing us to characterize second-order quadratic growth.
Strong Duality and Minimal Representations for Cone Optimization
, 2008
Abstract

Cited by 7 (2 self)
The elegant results for strong duality and strict complementarity for linear programming, LP, can fail for cone programming over nonpolyhedral cones. One can have: unattained optimal values; nonzero duality gaps; and no primal-dual optimal pair that satisfies strict complementarity. This failure is tied to the nonclosure of sums of nonpolyhedral closed cones. We take a fresh look at known and new results for duality, optimality, constraint qualifications, and strict complementarity for linear cone optimization problems in finite dimensions. These results include: weakest and universal constraint qualifications, CQs; duality and characterizations of optimality that hold without any CQ; geometry of nice and devious cones; the geometric relationships between zero duality gaps, strict complementarity, and the facial structure of cones; and the connection between theory and empirical evidence for lack of a CQ and failure of strict complementarity. One theme is the notion of minimal representation of the cone and the constraints in order to regularize the problem and avoid both the theoretical and numerical difficulties that arise due to (near) loss of a CQ. We include a discussion on obtaining these representations efficiently.
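The primal-dual pair at issue can be recalled in standard conic form (generic notation):

```latex
\text{(P)}\quad \min_{x} \;\{\, \langle c, x\rangle : A x = b,\; x \in K \,\}
\qquad\qquad
\text{(D)}\quad \max_{y} \;\{\, \langle b, y\rangle : c - A^{*} y \in K^{*} \,\},
```

where $K^{*} = \{s : \langle s, x\rangle \ge 0 \text{ for all } x \in K\}$ is the dual cone. For polyhedral $K$ the optimal values of (P) and (D) agree and are attained whenever finite; for nonpolyhedral $K$ this can fail without a constraint qualification, which is the failure mode the abstract describes.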