On constraint sampling in the linear programming approach to approximate dynamic programming
Mathematics of Operations Research, 2004. doi:10.1287/moor.1040.0094
Convex approximations of chance constrained programs
SIAM Journal on Optimization, 2006
Cited by 75 (8 self)
Abstract. We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given probability close to one, a system of randomly perturbed convex constraints. This problem may happen to be computationally intractable; our goal is to build a computationally tractable approximation of it, i.e., an efficiently solvable deterministic optimization program whose feasible set is contained in that of the chance constrained problem. We construct a general class of such convex conservative approximations of the corresponding chance constrained problem. Moreover, under the assumptions that the constraints are affine in the perturbations and that the entries of the perturbation vector are mutually independent random variables, we build a large-deviation-type approximation, referred to as the "Bernstein approximation," of the chance constrained problem. This approximation is convex and efficiently solvable. We propose a simulation-based scheme for bounding the optimal value of the chance constrained problem and report numerical experiments comparing the Bernstein and the well-known scenario approximation approaches. Finally, we extend our construction to ambiguous chance constrained problems, where the random perturbations are independent with the collection of their distributions known only to belong to a given convex compact set rather than known exactly, while the chance constraint must be satisfied for every distribution from this set.
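As an illustration of the scenario approximation the abstract compares against, here is a minimal sketch on a toy one-dimensional chance-constrained program. The instance, names, and sample sizes are illustrative, not from the paper: we maximize x subject to P(ξx ≤ 1) ≥ 1 − ε with ξ ~ Uniform(0, 2), enforce the sampled constraints, and then check the violation probability empirically.

```python
import random

# Toy chance-constrained program (illustrative, not from the paper):
#   maximize x  subject to  P(xi * x <= 1) >= 1 - eps,  xi ~ Uniform(0, 2).
# The scenario approximation draws N samples of xi and enforces the
# constraint for every sample, yielding x* = 1 / max_i xi_i.

def scenario_solution(n_samples, seed=0):
    rng = random.Random(seed)
    xis = [rng.uniform(0.0, 2.0) for _ in range(n_samples)]
    return 1.0 / max(xis)

x_star = scenario_solution(1000)

# Empirical check of the chance constraint on fresh samples: the scenario
# solution is feasible for the chance constraint with high probability.
rng = random.Random(1)
violations = sum(rng.uniform(0.0, 2.0) * x_star > 1.0 for _ in range(20000))
violation_prob = violations / 20000
```

With 1000 scenarios the solution sits near x* ≈ 0.5 (the safe value against the worst case ξ = 2), and the out-of-sample violation probability is small, which is the conservatism/feasibility trade-off the paper's comparison is about.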
Ambiguous Chance Constrained Problems And Robust Optimization
Mathematical Programming, 2004
Cited by 35 (1 self)
In this paper we study ambiguous chance constrained problems where the distributions of the random parameters in the problem are themselves uncertain. We primarily focus on the special case where the uncertainty set Q of distributions is of the form Q = {Q : ρ_p(Q, Q_0) ≤ β}, where ρ_p denotes the Prohorov metric. The ambiguous chance constrained problem is approximated by a robust sampled problem in which each constraint is a robust constraint centered at a sample drawn according to the central measure Q_0. The main contribution of this paper is to show that, with high probability, the robust sampled problem is a good approximation of the ambiguous chance constrained problem. This result is established using the Strassen-Dudley representation theorem, which states that when the distributions of two random variables are close in the Prohorov metric, one can construct a coupling of the random variables such that the samples are close with high probability. We also show that the robust sampled problem can be solved efficiently both in theory and in practice.
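A minimal sketch of the robust-sampled-constraint idea on a toy instance (the instance and the interval-shaped robustness set are illustrative simplifications, not the paper's construction): each sampled constraint ξᵢx ≤ 1 is replaced by its robust counterpart over a ball of radius r around the sample, which for x ≥ 0 reduces to (ξᵢ + r)x ≤ 1.

```python
import random

# Illustrative sketch: the ambiguous chance constraint
#   P_Q(xi * x <= 1) >= 1 - eps  for all Q close to Q0 = Uniform(0, 2)
# is approximated by drawing samples from the central measure Q0 and
# enforcing each sampled constraint robustly over a radius-r neighborhood
# of the sample (r playing the role of the Prohorov radius beta).

def robust_sampled_solution(n_samples, r, seed=0):
    rng = random.Random(seed)
    xis = [rng.uniform(0.0, 2.0) for _ in range(n_samples)]
    # For x >= 0 the worst perturbation in [xi - r, xi + r] is xi + r.
    return 1.0 / (max(xis) + r)

x_plain = robust_sampled_solution(500, r=0.0)   # plain scenario solution
x_robust = robust_sampled_solution(500, r=0.1)  # robustified samples
```

The robust solution is strictly more conservative than the plain sampled one, which is exactly what buys feasibility under every distribution in the Prohorov ball rather than under Q_0 alone.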
A Robust Optimization Perspective Of Stochastic Programming
2005
Cited by 28 (10 self)
In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for bounded random variables known as the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. We also propose a tractable robust optimization approach for obtaining robust solutions to a class of stochastic linear optimization problems where the risk of infeasibility can be tolerated as a tradeoff to improve upon the objective value. An attractive feature of the framework is the computational scalability to multiperiod models. We show an application of the framework for solving a project management problem with uncertain activity completion time.
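A small empirical sketch of forward/backward deviations (notation adapted; the grid-search estimator below is a back-of-envelope stand-in, not the paper's definition verbatim): for a zero-mean sample, the forward deviation is sup over θ > 0 of sqrt(2 ln E[exp(θx)])/θ, and the backward deviation is the same quantity for −x. Both dominate the standard deviation, and they differ when the data are skewed, which is the asymmetry the abstract refers to.

```python
import math

# Estimate forward/backward deviations of a sample by a grid search over
# theta. For skewed data the two deviations differ, capturing asymmetry
# that the standard deviation alone misses.

def deviation(data, theta_grid):
    mean = sum(data) / len(data)
    centered = [v - mean for v in data]
    best = 0.0
    for theta in theta_grid:
        # Empirical moment generating function at theta (>= 1 by Jensen).
        mgf = sum(math.exp(theta * v) for v in centered) / len(centered)
        best = max(best, math.sqrt(2.0 * math.log(mgf)) / theta)
    return best

grid = [0.01 * k for k in range(1, 501)]   # theta in (0, 5]
data = [-1.0, -1.0, 2.0]                   # mean 0, skewed to the right
forward = deviation(data, grid)            # penalizes the long right tail
backward = deviation([-v for v in data], grid)
sigma = math.sqrt(sum(v * v for v in data) / len(data))  # = sqrt(2)
```

For this right-skewed sample the forward deviation exceeds both the backward deviation and the standard deviation, so an uncertainty set built from it is wider on the side where constraint violations are more likely.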
Theory and applications of Robust Optimization
2007
Cited by 25 (5 self)
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we will highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
Distributionally Robust Optimization under Moment Uncertainty with Application to Data-Driven Problems
Cited by 21 (2 self)
Stochastic programs can effectively describe the decision-making problem in an uncertain environment. Unfortunately, such programs are often computationally demanding to solve. In addition, their solutions can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model describing one's uncertainty in both the distribution's form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance). We demonstrate that for a wide range of cost functions the associated distributionally robust stochastic program can be solved efficiently. Furthermore, by deriving new confidence regions for the mean and covariance of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. This is confirmed in a practical example of portfolio selection, where our framework leads to better performing policies on the "true" distribution underlying the daily return of assets.
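A one-dimensional sketch of what a moment-based worst case looks like (this is the classical Cantelli bound, a special case used for illustration, not the paper's general conic machinery): over all distributions with mean μ and variance σ², the worst-case tail probability P(X ≥ μ + t) equals σ²/(σ² + t²), so a distributionally robust chance constraint P(X ≥ b) ≤ ε reduces to the deterministic condition b ≥ μ + σ·sqrt((1 − ε)/ε).

```python
import math

# Moment-based worst case in one dimension (Cantelli's inequality, tight):
#   sup over all P with mean mu, variance sigma^2 of P(X >= mu + t)
#     = sigma^2 / (sigma^2 + t^2),   t > 0.
# So P(X >= b) <= eps holds for EVERY such distribution iff
#   b >= mu + sigma * sqrt((1 - eps) / eps).

def worst_case_tail(sigma, t):
    return sigma ** 2 / (sigma ** 2 + t ** 2)

def robust_threshold(mu, sigma, eps):
    return mu + sigma * math.sqrt((1.0 - eps) / eps)

b = robust_threshold(mu=0.0, sigma=1.0, eps=0.05)  # distribution-free 95% level
```

Note how much more conservative this is than a Gaussian assumption (about 4.36σ versus 1.64σ at the 5% level): that gap is precisely the price of ambiguity in the distribution's form that the paper's moment-uncertainty model quantifies.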
A sample approximation approach for optimization with probabilistic constraints
IPCO 2007, Lecture Notes in Computer Science, 2007
Cited by 19 (5 self)
Abstract. We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with risk level larger than the required risk level will yield a lower bound to the true optimal value with probability approaching one exponentially fast. This leads to an a priori estimate of the sample size required to have high confidence that the sample approximation will yield a lower bound. We then provide conditions under which solving a sample approximation problem with a risk level smaller than the required risk level will yield feasible solutions to the original problem with high probability. Once again, we obtain a priori estimates on the sample size required to obtain high confidence that the sample approximation problem will yield a feasible solution to the original problem. Finally, we present numerical illustrations of how these results can be used to obtain feasible solutions and optimality bounds for optimization problems with probabilistic constraints.
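The a priori sample-size estimate mentioned in the abstract can be sketched with a Hoeffding-style back-of-envelope computation (the paper's exact constants differ; this is an illustrative bound, not the paper's theorem): if the true risk level is ε and the sample problem is solved at a stricter level γ < ε, then a point whose true violation probability exceeds ε survives N samples at level γ with probability at most exp(−2N(ε − γ)²).

```python
import math

# Hoeffding-style sample-size estimate (illustrative constants):
#   P(an eps-infeasible point looks gamma-feasible on N samples)
#     <= exp(-2 * N * (eps - gamma)^2),
# so taking N >= ln(1/delta) / (2 * (eps - gamma)^2) makes the sample
# approximation yield a feasible point with confidence at least 1 - delta.

def sample_size(eps, gamma, delta):
    assert 0.0 <= gamma < eps < 1.0 and 0.0 < delta < 1.0
    return math.ceil(math.log(1.0 / delta) / (2.0 * (eps - gamma) ** 2))

n = sample_size(eps=0.05, gamma=0.025, delta=1e-3)
```

The estimate grows only logarithmically in 1/δ but quadratically in 1/(ε − γ), which is why the choice of the gap between the required and the sampled risk level dominates the sample size.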
Tetris: A study of randomized constraint sampling
Probabilistic and Randomized Methods for Design Under Uncertainty, 1994
Cited by 19 (5 self)
Randomized constraint sampling has recently been proposed as an approach for approximating solutions to optimization problems when the number of constraints is intractable – say, a googol or even infinity. The idea is to define a probability distribution ψ over the set of constraints and to sample a subset …
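The idea can be sketched on a toy semi-infinite program (the instance below is illustrative, not the Tetris application): maximize x subject to x ≤ c(t) = 1 + (t − 0.3)² for every t ∈ [0, 1], whose true optimum is x* = 1. Sampling constraints under ψ = Uniform[0, 1] and keeping only the sampled ones gives a relaxation whose optimum converges to x* from above.

```python
import random

# Randomized constraint sampling on a toy semi-infinite program:
#   maximize x  s.t.  x <= 1 + (t - 0.3)^2  for all t in [0, 1].
# The full problem has infinitely many constraints; we solve the relaxation
# defined by a finite sample of constraints drawn from psi = Uniform[0, 1].

def sampled_optimum(n_constraints, seed=0):
    rng = random.Random(seed)
    ts = [rng.uniform(0.0, 1.0) for _ in range(n_constraints)]
    # The relaxed optimum is the tightest sampled bound.
    return min(1.0 + (t - 0.3) ** 2 for t in ts)

x_hat = sampled_optimum(500)
```

With a few hundred sampled constraints some t lands close to the binding value t = 0.3, so the sampled optimum exceeds the true optimum by only a small, quantifiable slack, which is the phenomenon the sampling bounds in this literature formalize.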
Semidefinite relaxation of quadratic optimization problems
IEEE Signal Processing Magazine, 2010
Cited by 17 (0 self)
In recent years, the semidefinite relaxation (SDR) technique has been at the center of some very exciting developments in the area of signal processing and communications, and it has shown great significance and relevance in a variety of applications. Roughly speaking, SDR is a powerful, computationally efficient approximation technique for a host of very difficult optimization problems. In particular, it can be applied to many nonconvex quadratically constrained quadratic programs (QCQPs) in an almost mechanical fashion, including problems of the form min_{x ∈ R^n} x^T …
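The algebraic step behind SDR can be checked numerically in a few lines (this verifies the lifting identity only; actually solving the relaxed SDP requires a semidefinite solver, which is omitted here): a quadratic form x^T C x equals trace(C X) for the rank-one lift X = x x^T, so a QCQP that is nonconvex in x becomes linear in X, and the relaxation drops the rank-one requirement, keeping only X positive semidefinite.

```python
# Verify the SDR lifting identity  x^T C x = trace(C X)  with  X = x x^T,
# using plain nested lists for a small 2x2 example.

def quad_form(C, x):
    n = len(x)
    return sum(x[i] * C[i][j] * x[j] for i in range(n) for j in range(n))

def trace_CX(C, X):
    # trace(C X) = sum_{i,j} C[i][j] * X[j][i]
    n = len(C)
    return sum(C[i][j] * X[j][i] for i in range(n) for j in range(n))

C = [[2.0, 1.0], [1.0, 3.0]]
x = [1.0, -2.0]
X = [[x[i] * x[j] for j in range(2)] for i in range(2)]  # rank-one lift

lhs = quad_form(C, x)   # x^T C x
rhs = trace_CX(C, X)    # trace(C X)
```

Because every quadratic objective and quadratic constraint linearizes the same way under this lift, the whole QCQP becomes a linear SDP over X ⪰ 0, which is the "almost mechanical" recipe the abstract describes.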