Results 1 - 6 of 6
Robust Solutions To Uncertain Semidefinite Programs
SIAM J. Optimization, 1998
Cited by 82 (8 self)

Abstract
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.
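As a minimal illustration of the robust-counterpart idea described in this abstract (a single linear constraint under box uncertainty, not the paper's SDP machinery), here is a hedged NumPy sketch; the names `robust_lhs`, `a`, `x`, and `rho` are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from itertools import product

def robust_lhs(a, x, rho):
    """Worst-case value of (a + u) @ x over the box |u_i| <= rho.

    The maximizing perturbation is u_i = rho * sign(x_i), which gives
    the closed form a @ x + rho * ||x||_1.
    """
    return a @ x + rho * np.abs(x).sum()

a = np.array([1.0, -2.0, 0.5])   # nominal constraint data
x = np.array([0.3, 0.4, -1.0])   # candidate solution
rho = 0.1                        # perturbation bound

# Brute-force check: the worst case is attained at a corner of the box,
# so enumerating all 2^n sign patterns must reproduce the closed form.
worst_corner = max((a + rho * np.array(s)) @ x
                   for s in product([-1.0, 1.0], repeat=len(a)))
assert np.isclose(robust_lhs(a, x, rho), worst_corner)
```

A constraint "(a + u) @ x <= b for all u in the box" thus reduces to the single deterministic constraint `robust_lhs(a, x, rho) <= b`, which is the robust-counterpart reformulation in its simplest linear form.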
Optimization Problems with Perturbations: A Guided Tour
SIAM Review, 1996
Cited by 46 (10 self)

Abstract
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow one to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.
Metric Regularity and Quantitative Stability in Stochastic Programs With Probabilistic Constraints
Cited by 12 (9 self)

Abstract
Introducing probabilistic constraints leads in general to nonconvex, nonsmooth, or even discontinuous optimization models. In this paper, necessary and sufficient conditions for metric regularity of (several joint) probabilistic constraints are derived using recent results from nonsmooth analysis. The conditions apply to fairly general constraints and extend earlier work in this direction. Further, a verifiable sufficient condition for quadratic growth of the objective function in a more specific convex stochastic program is indicated and applied in order to obtain a new result on quantitative stability of solution sets when the underlying probability distribution is subjected to perturbations. This is used to derive bounds for the deviation of solution sets when the probability measure is replaced by empirical estimates.

Keywords: stochastic programming, probabilistic constraints, metric regularity, nonsmooth analysis, quadratic growth, quantitative stability, empirical approximation
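For readers scanning this abstract, the standard definition of metric regularity (textbook form, not quoted from the paper) reads: a set-valued map $\Phi : X \rightrightarrows Y$ is metrically regular at $\bar{x}$ for $\bar{y} \in \Phi(\bar{x})$ if there exist $\kappa > 0$ and neighborhoods $U \ni \bar{x}$, $V \ni \bar{y}$ such that

```latex
\operatorname{dist}\bigl(x,\;\Phi^{-1}(y)\bigr)
  \;\le\; \kappa\,\operatorname{dist}\bigl(y,\;\Phi(x)\bigr)
  \qquad \text{for all } x \in U,\; y \in V.
```

Applied to a constraint system $\Phi(x) \ni 0$, this bounds the distance to the feasible set by a constant times the constraint residual, which is what makes the stability estimates in the abstract possible.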
Quadratic Growth and Stability in Convex Programming Problems With Multiple Solutions
, 1995
Cited by 11 (3 self)

Abstract
Given a convex program with C² functions and a convex set S of solutions to the problem, we give a second order condition which guarantees that the problem does not have solutions outside of S. This condition is interpreted as a characterization for the quadratic growth of the cost function. The crucial role in the proofs is played by a theorem describing a certain uniform regularity property of critical cones in smooth convex programs. We apply these results to the discussion of stability of solutions of a convex program under possibly nonconvex perturbations.
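The quadratic growth property referred to in this abstract is, in its standard form (not quoted from the paper): for the cost function $f$, the solution set $S$, and some constant $c > 0$,

```latex
f(x) \;\ge\; \min_{S} f \;+\; c\,\operatorname{dist}(x, S)^{2}
\qquad \text{for all feasible } x \text{ near } S,
```

so the cost grows at least quadratically with the distance to the solution set, ruling out solutions outside of $S$.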
On uniqueness of Lagrange multipliers in optimization problems subject to cone constraints
Cited by 9 (4 self)

Abstract
In this paper we study uniqueness of Lagrange multipliers in optimization problems subject to cone constraints. The main tool in our investigation of this question will be a calculus of dual (polar) cones. We give sufficient, and in some cases necessary, conditions for uniqueness of Lagrange multipliers in general Banach spaces. General results are then applied to two particular examples, the semidefinite and semi-infinite programming problems, respectively.
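The polar (dual) cone calculus mentioned here rests on the standard definition (textbook form, not quoted from the paper): for a cone $K$ in a Banach space $X$, the polar cone in the dual space $X^{*}$ is

```latex
K^{-} \;=\; \bigl\{\, y \in X^{*} \;:\; \langle y, x \rangle \le 0
  \ \text{for all } x \in K \,\bigr\}.
```

Lagrange multipliers for a cone constraint $G(x) \in K$ live in $K^{-}$, so uniqueness questions reduce to the geometry of this polar cone.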
Quantitative stability in stochastic programming
, 1994
Abstract
In this paper we study stability of optimal solutions of stochastic programming problems with fixed recourse. An upper bound for the rate of convergence is given in terms of the objective functions of the associated deterministic problems. As an example, it is shown how this bound can be applied to the derivation of the law of the iterated logarithm for the optimal solutions. It is also shown that in the case of simple recourse this upper bound implies upper Lipschitz continuity of the optimal solutions with respect to the Kolmogorov-Smirnov distance between the corresponding cumulative probability distribution functions.