On constraint sampling in the linear programming approach to approximate dynamic programming
Mathematics of Operations Research, 2004
doi:10.1287/moor.1040.0094
Convex approximations of chance constrained programs
SIAM Journal on Optimization, 2006
Cited by 72 (7 self)
Abstract. We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given probability close to one, a system of randomly perturbed convex constraints. This problem may be computationally intractable; our goal is to build a computationally tractable approximation, i.e., an efficiently solvable deterministic optimization program whose feasible set is contained in that of the chance constrained problem. We construct a general class of such convex conservative approximations of the corresponding chance constrained problem. Moreover, under the assumptions that the constraints are affine in the perturbations and the entries of the perturbation vector are mutually independent random variables, we build a large-deviation-type approximation, referred to as the "Bernstein approximation," of the chance constrained problem. This approximation is convex and efficiently solvable. We propose a simulation-based scheme for bounding the optimal value of the chance constrained problem and report numerical experiments comparing the Bernstein and the well-known scenario approximation approaches. Finally, we extend our construction to the case of ambiguous chance constrained problems, where the random perturbations are independent with the collection of their distributions known only to belong to a given convex compact set rather than exactly, while the chance constraint must be satisfied for every distribution in this set.
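To make the conservative-approximation idea concrete, here is a minimal sketch using a Hoeffding-type bound, a cruder cousin of the paper's Bernstein approximation (the coefficient vector `a` and risk level `eps` are made up for illustration; this is not the paper's exact construction):

```python
import numpy as np

# Conservative deterministic surrogate for the affine chance constraint
#   P( a0 + sum_i xi_i * a_i <= 0 ) >= 1 - eps,
# with independent mean-zero xi_i in [-1, 1].  Hoeffding's inequality gives
#   P( sum_i xi_i * a_i > t ) <= exp( -t^2 / (2 * sum_i a_i^2) ),
# so requiring  a0 + sqrt(2 * ln(1/eps)) * ||a||_2 <= 0  is sufficient.

def hoeffding_margin(a, eps):
    """Safety margin t such that P(sum_i xi_i * a_i > t) <= eps."""
    return np.sqrt(2.0 * np.log(1.0 / eps) * np.dot(a, a))

rng = np.random.default_rng(0)
a = np.full(20, 0.3)                    # illustrative coefficients
eps = 0.05
a0 = -hoeffding_margin(a, eps)          # tightest a0 the surrogate allows

# Monte Carlo check: the empirical violation probability stays below eps
xi = rng.uniform(-1.0, 1.0, size=(200_000, a.size))
violation = np.mean(a0 + xi @ a > 0.0)
print(violation)  # far below eps: the approximation is conservative
```

The gap between the empirical violation rate and `eps` is exactly the conservativeness that the paper's tighter Bernstein construction aims to reduce.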
The scenario approach to robust control design
IEEE Trans. Autom. Control, 2006
Cited by 48 (6 self)
Abstract—This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed as the minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense. Index Terms—Probabilistic robustness, randomized algorithms, robust control, robust convex optimization, uncertainty.
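A sketch of how such a sample-size bound is used in practice: given risk level `eps`, confidence `delta`, and `d` decision variables, find the smallest N for which the binomial-tail condition holds. Note this uses the later Campi–Garatti refinement of the scenario bound, not necessarily the exact expression in this paper:

```python
from math import comb

# Sufficient condition for the scenario approach: with N sampled constraints,
# a convex problem with d decision variables yields, with confidence
# 1 - delta, a solution violating at most an eps-fraction of constraints if
#   sum_{i=0}^{d-1} C(N, i) * eps^i * (1 - eps)^(N - i) <= delta.

def binomial_tail(N, d, eps):
    return sum(comb(N, i) * eps**i * (1.0 - eps)**(N - i) for i in range(d))

def scenario_sample_size(d, eps, delta):
    """Smallest N satisfying the binomial-tail condition (linear scan)."""
    N = d
    while binomial_tail(N, d, eps) > delta:
        N += 1
    return N

N = scenario_sample_size(d=10, eps=0.05, delta=1e-6)
print(N)  # a few hundred samples already give a strong guarantee
```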
Ambiguous Chance Constrained Problems And Robust Optimization
Mathematical Programming, 2004
Cited by 35 (1 self)
In this paper we study ambiguous chance constrained problems where the distributions of the random parameters in the problem are themselves uncertain. We primarily focus on the special case where the uncertainty set Q of distributions is of the form Q = {Q : ρ_p(Q, Q_0) ≤ β}, where ρ_p denotes the Prohorov metric. The ambiguous chance constrained problem is approximated by a robust sampled problem where each constraint is a robust constraint centered at a sample drawn according to the central measure Q_0. The main contribution of this paper is to show that the robust sampled problem is a good approximation for the ambiguous chance constrained problem with high probability. This result is established using the Strassen-Dudley Representation Theorem, which states that when the distributions of two random variables are close in the Prohorov metric one can construct a coupling of the random variables such that the samples are close with high probability. We also show that the robust sampled problem can be solved efficiently both in theory and in practice.
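A minimal sketch of the "robust constraint centered at a sample" idea, reduced to a single linear constraint with a Euclidean uncertainty ball (an illustrative special case; the paper treats general robust constraints, and all numbers below are made up):

```python
import numpy as np

# Robustified sampled constraint for  (a0 + xi) . x <= b  with uncertain xi.
# Centering a Euclidean ball of radius beta at a sample omega and taking the
# worst case gives  (a0 + omega) . x + beta * ||x||_2 <= b,  since
#   sup_{||xi - omega||_2 <= beta} xi . x = omega . x + beta * ||x||_2.

def robust_ok(x, a0, omega, beta, b):
    return (a0 + omega) @ x + beta * np.linalg.norm(x) <= b

rng = np.random.default_rng(1)
a0 = np.array([1.0, 2.0])
b, beta = 5.0, 0.1
x = np.array([0.5, 1.0])
omega = rng.normal(scale=0.05, size=2)  # a sample from the central measure

ok = robust_ok(x, a0, omega, beta, b)
print(ok)
# If it holds, every xi within beta of omega satisfies the plain constraint:
u = rng.normal(size=2)
u /= np.linalg.norm(u)
print((a0 + omega + beta * u) @ x <= b)
```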
A Robust Optimization Perspective Of Stochastic Programming
2005
Cited by 26 (9 self)
In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for bounded random variables, known as the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. We also propose a tractable robust optimization approach for obtaining robust solutions to a class of stochastic linear optimization problems in which the risk of infeasibility can be tolerated as a trade-off to improve the objective value. An attractive feature of the framework is its computational scalability to multi-period models. We show an application of the framework to a project management problem with uncertain activity completion times.
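The forward and backward deviations can be estimated from a sample by a simple grid search over the exponential-moment parameter. This is an illustration of the definitions, not the paper's computational scheme, and the Bernoulli sample is made up:

```python
import numpy as np

# Empirical forward/backward deviations for a bounded sample z:
#   p_fwd(z) = sup_{theta>0} sqrt( (2/theta^2) * ln E exp( theta*(z - Ez)) )
#   p_bwd(z) = sup_{theta>0} sqrt( (2/theta^2) * ln E exp(-theta*(z - Ez)) )
# Both are at least the standard deviation (the theta -> 0 limit), and they
# differ when the distribution is asymmetric.

def forward_deviation(z, thetas):
    zc = np.asarray(z) - np.mean(z)
    vals = [np.sqrt(2.0 / t**2 * np.log(np.mean(np.exp(t * zc))))
            for t in thetas]
    return max(vals)

def backward_deviation(z, thetas):
    return forward_deviation(-np.asarray(z), thetas)

rng = np.random.default_rng(2)
z = (rng.random(100_000) < 0.2).astype(float)  # right-skewed Bernoulli(0.2)
thetas = np.linspace(0.01, 10.0, 500)

p_fwd = forward_deviation(z, thetas)
p_bwd = backward_deviation(z, thetas)
print(p_fwd, p_bwd, z.std())  # p_fwd exceeds the std; p_bwd stays near it
```

The asymmetry (p_fwd > p_bwd here) is exactly the distributional information that symmetric uncertainty sets discard.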
Theory and applications of Robust Optimization
2007
Cited by 23 (5 self)
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
A sample approximation approach for optimization with probabilistic constraints
IPCO 2007, Lecture Notes in Comput. Sci., 2007
Cited by 19 (5 self)
Abstract. We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with risk level larger than the required risk level will yield a lower bound to the true optimal value with probability approaching one exponentially fast. This leads to an a priori estimate of the sample size required to have high confidence that the sample approximation will yield a lower bound. We then provide conditions under which solving a sample approximation problem with a risk level smaller than the required risk level will yield feasible solutions to the original problem with high probability. Once again, we obtain a priori estimates on the sample size required to obtain high confidence that the sample approximation problem will yield a feasible solution to the original problem. Finally, we present numerical illustrations of how these results can be used to obtain feasible solutions and optimality bounds for optimization problems with probabilistic constraints.
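A toy version of the sample approximation: replace the true distribution by N draws and allow at most a `gamma`-fraction of them to be violated. The one-dimensional constraint `g(x, xi) = xi - x` and all numbers are made up for illustration:

```python
import numpy as np

# Sample approximation of P(g(x, xi) <= 0) >= 1 - eps: require
# g(x, xi_k) <= 0 for all but at most floor(gamma * N) of N samples.
# Choosing gamma > eps tends to give lower bounds on the optimal value;
# gamma < eps tends to give feasible solutions (the paper quantifies both).
# Here g(x, xi) = xi - x, so sample feasibility means x covers the
# (1 - gamma)-sample-quantile.

def sample_feasible(x, samples, gamma):
    violated = np.sum(samples - x > 0.0)
    return violated <= int(np.floor(gamma * len(samples)))

rng = np.random.default_rng(3)
xi = rng.normal(size=10_000)
eps = 0.05

x_hi = np.quantile(xi, 0.96)   # above the (1 - eps) sample quantile
x_lo = np.quantile(xi, 0.90)   # below it

print(sample_feasible(x_hi, xi, eps), sample_feasible(x_lo, xi, eps))
```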
Tetris: A study of randomized constraint sampling
Probabilistic and Randomized Methods for Design Under Uncertainty, 1994
Cited by 19 (5 self)
Randomized constraint sampling has recently been proposed as an approach for approximating solutions to optimization problems when the number of constraints is intractable – say, a googol or even infinity. The idea is to define a probability distribution ψ over the set of constraints and to sample a subset ...
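Constraint sampling in miniature: maximize over the unit disk, described by infinitely many halfplane constraints, keeping only a sampled subset. The specific problem and sample size are made up for illustration; `scipy` is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize x1 + x2 subject to cos(t)*x1 + sin(t)*x2 <= 1 for ALL t in
# [0, 2*pi) -- the unit disk as an intersection of halfplanes.  Sampling t
# uniformly (the distribution psi) and keeping only sampled constraints
# yields an ordinary LP; its solution violates a small measure of the
# remaining constraints.  True optimum: sqrt(2).

rng = np.random.default_rng(4)
N = 2000
t = rng.uniform(0.0, 2.0 * np.pi, size=N)
A = np.column_stack([np.cos(t), np.sin(t)])
b = np.ones(N)

res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
x = res.x
print(-res.fun)  # slightly above sqrt(2) = 1.414..., as a relaxation must be

# Measure of violated original constraints (shrinks as N grows):
tt = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
viol = np.mean(np.cos(tt) * x[0] + np.sin(tt) * x[1] > 1.0)
print(viol)
```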
On tractable approximations of randomly perturbed convex constraints
Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, 2003
Cited by 18 (8 self)
We consider a chance constraint Prob{ξ : A(x, ξ) ∈ K} ≥ 1 − ε (x is the decision vector, ξ is a random perturbation, K is a closed convex cone, and A(·, ·) is bilinear). While important for many applications in Optimization and Control, chance constraints typically are "computationally intractable", which makes it necessary to look for their tractable approximations. We present these approximations for the cases when the underlying conic constraint A(x, ξ) ∈ K is (a) a scalar inequality, (b) a conic quadratic inequality, or (c) a linear matrix inequality, and discuss the level of conservativeness of the approximations.

1. The problem. Consider a randomly perturbed convex constraint in conic form:

A_{ξ,σ}(x) = A_0(x) + σ Σ_{i=1}^{k} ξ_i A_i(x) ∈ K,   (1)

where
• the A_i(·) are affine mappings from R^n to a finite-dimensional real vector space E, and x ∈ R^n is the decision vector;
• the ξ_i are scalar random perturbations satisfying (a) the ξ_i are mutually independent; (b) E{ξ_i} = 0; (c) E{exp{ξ_i^2/4}} ≤ √2.   (2)
Cases of primary interest: ξ_i ∼ N(0, 1) ("Gaussian noise"; the absolute constants in (2.c) come exactly from the desire to make the relation valid for standard Gaussian perturbations), and E{ξ_i} = 0, |ξ_i| ≤ 1 ("bounded random noise");
• σ ≥ 0 is the level of perturbations;
• K is a closed pointed convex cone in E. Cases of primary interest: E = R, K = R_+; here (1) is a scalar linear inequality.
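The constant √2 in condition (2.c) can be checked directly: for ξ ∼ N(0, 1), the expectation E[exp(ξ²/4)] equals √2 exactly, so standard Gaussian perturbations satisfy (2.c) with equality. A quick numerical verification:

```python
import numpy as np
from scipy.integrate import quad

# For xi ~ N(0, 1):
#   E[exp(xi^2/4)] = integral exp(x^2/4) * exp(-x^2/2) / sqrt(2*pi) dx
#                  = integral exp(-x^2/4) / sqrt(2*pi) dx = sqrt(2).

val, err = quad(lambda x: np.exp(-x**2 / 4.0) / np.sqrt(2.0 * np.pi),
                -np.inf, np.inf)
print(val)  # 1.41421356... = sqrt(2)
```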
An Integer Programming Approach for Linear Programs with Probabilistic Constraints
2008
Cited by 18 (3 self)
Linear programs with joint probabilistic constraints (PCLP) are difficult to solve because the feasible region is not convex. We consider a special case of PCLP in which only the right-hand side is random and this random vector has a finite distribution. We give a mixed-integer programming formulation for this special case and study the relaxation corresponding to a single row of the probabilistic constraint. We obtain two strengthened formulations. As a by-product of this analysis, we obtain new results for the previously studied mixing set subject to an additional knapsack inequality. We present computational results which indicate that, using our strengthened formulations, instances considerably larger than those considered before can be solved to optimality.
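The standard big-M reformulation behind such approaches, on a deliberately tiny made-up instance solved by brute force rather than a MIP solver (one variable, ten equiprobable scenarios):

```python
import itertools
import numpy as np

# MIP reformulation of  min x  s.t.  P(x >= xi) >= 1 - eps  with finitely
# many scenarios xi_k of probability p_k: introduce binaries z_k (z_k = 1
# means scenario k may be violated) and require
#   x >= xi_k - M * z_k   for all k,    sum_k p_k * z_k <= eps.
# For one variable, the optimum is the max of the scenarios that are kept.

xi = np.arange(1.0, 11.0)   # scenarios 1..10
p = np.full(10, 0.1)        # equiprobable
eps = 0.2                   # may drop up to 20% of probability mass

best = np.inf
for z in itertools.product([0, 1], repeat=len(xi)):
    z = np.array(z)
    if p @ z <= eps + 1e-12:           # knapsack constraint on the binaries
        kept = xi[z == 0]
        if kept.size:
            best = min(best, kept.max())

print(best)  # 8.0: drop the two largest scenarios, cover the rest
```

The paper's contribution is precisely to strengthen this naive formulation (whose LP relaxation is weak) via the mixing-set-plus-knapsack structure.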