Stochastic Approximation Approach to Stochastic Programming
Cited by 99 (14 self)
In this paper we consider optimization problems where the objective function is given in the form of an expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the Stochastic Approximation (SA) and the Sample Average Approximation (SAA) methods. Both approaches have a long history. The current opinion is that the SAA method can efficiently exploit a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems, and present (in our opinion highly encouraging) results of numerical experiments.
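The SA/SAA contrast described in this abstract can be illustrated on a toy problem. The sketch below is not from the paper; the objective E[(x - ξ)²] with ξ ~ N(0, 1) (true optimum x* = 0), the step sizes, and the sample sizes are all illustrative assumptions. It solves the problem once with averaged stochastic-subgradient steps (a robust-SA-style scheme) and once by minimizing a sample average (SAA).

```python
import random

random.seed(0)

def sa_solve(n_iters=20000, step=0.5):
    """Robust-SA-style sketch: stochastic subgradient steps with iterate averaging."""
    x, x_bar = 5.0, 0.0
    for k in range(1, n_iters + 1):
        xi = random.gauss(0.0, 1.0)      # one fresh noisy sample per step
        g = 2.0 * (x - xi)               # stochastic gradient of (x - xi)^2
        x -= step / k ** 0.5 * g         # O(1/sqrt(k)) diminishing step size
        x_bar += (x - x_bar) / k         # running average of the iterates
    return x_bar

def saa_solve(n_samples=20000):
    """SAA sketch: draw all samples up front, minimize the sample average exactly."""
    xis = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    return sum(xis) / len(xis)           # exact minimizer of sum_i (x - xi_i)^2

print(sa_solve(), saa_solve())           # both land near the true optimum x* = 0
```

On this deliberately simple problem both methods recover the optimum; the paper's point is about how their accuracy and cost compare on structured convex problems.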
Stability of Multistage Stochastic Programs
SIAM J. Optim., 2006
Cited by 31 (8 self)
Quantitative stability of linear multistage stochastic programs is studied. It is shown that the infima of such programs behave (locally) Lipschitz continuously with respect to the sum of an L_r-distance and a distance measure for the filtrations of the original and approximate stochastic (input) processes. Various aspects of the result are discussed and an illustrative example is given. Consequences for the reduction of scenario trees are also discussed.
An Approximation Scheme for Stochastic Linear Programming and its Application to Stochastic Integer Programs
2004
Cited by 27 (5 self)
Stochastic optimization problems attempt to model uncertainty in the data by assuming that the input is specified by a probability distribution. We consider the well-studied paradigm of 2-stage models with recourse: first, given only distributional information about (some of) the data, one commits to initial actions; then, once the actual data is realized (according to the distribution), further (recourse) actions can be taken. We show that for a broad class of 2-stage linear models with recourse, one can, for any ɛ > 0, in time polynomial in 1/ɛ and the size of the input, compute a solution of value within a factor (1 + ɛ) of the optimum, in spite of the fact that exponentially many second-stage scenarios may occur. In conjunction with a suitable rounding scheme, this yields the first approximation algorithms for 2-stage stochastic integer optimization problems where the underlying random data is given by a “black box” and no restrictions are placed on the costs in the two stages. Our rounding approach for stochastic integer programs shows that an approximation algorithm for a deterministic analogue yields, with a small constant-factor loss, provably near-optimal solutions for the stochastic generalization. Among the range of applications we consider are stochastic versions of the multicommodity flow, set cover, vertex cover, and facility location problems.
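The 2-stage-with-recourse structure can be made concrete with a newsvendor-style sample average approximation. The sketch below is purely illustrative and is not the paper's algorithm: the cost parameters, the N(100, 20) demand distribution, and the sample size are all assumptions chosen so the one-dimensional SAA has a closed-form (critical-fractile) minimizer.

```python
import random

random.seed(1)
c, b, h = 1.0, 3.0, 0.5   # first-stage order cost, shortfall penalty, holding cost (assumed)

def recourse_cost(x, d):
    # second-stage cost once the demand scenario d is realized
    return b * max(d - x, 0.0) + h * max(x - d, 0.0)

def saa_order_quantity(n=10000):
    """Solve the SAA of min_x c*x + E[recourse_cost(x, D)] for D ~ N(100, 20)."""
    demands = sorted(random.gauss(100.0, 20.0) for _ in range(n))
    # for this one-dimensional problem the SAA minimizer is the
    # ((b - c) / (b + h))-quantile of the empirical demand distribution
    return demands[int((b - c) / (b + h) * n)]

x_hat = saa_order_quantity()   # close to the true critical-fractile solution (~104 here)
```

The paper's contribution concerns much harder multidimensional 2-stage linear and integer models, where no such closed form exists and the scenario set can be exponentially large.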
Analysis of Stochastic Dual Dynamic Programming Method
Cited by 16 (1 self)
In this paper we discuss statistical properties and rates of convergence of the Stochastic Dual Dynamic Programming (SDDP) method applied to multistage linear stochastic programming problems. We assume that the underlying data process is stagewise independent and consider the framework where first a random sample from the original (true) distribution is generated, and subsequently the SDDP algorithm is applied to the constructed Sample Average Approximation (SAA) problem.
Approximation algorithms for 2-stage stochastic optimization problems
SIGACT News, 2006
Cited by 14 (1 self)
Stochastic optimization is a leading approach to modeling optimization problems in which there is uncertainty in the input data, whether from measurement noise or an inability to know the future. In this survey, we outline some recent progress in the design of polynomial-time algorithms with performance guarantees on the quality of the solutions found for an important class of stochastic programming problems: 2-stage problems with recourse. In particular, we show that for a number of concrete problems, algorithmic approaches that have been applied to their deterministic analogues are also effective in this more challenging domain. More specifically, this work highlights the role of tools from linear programming, rounding techniques, primal-dual algorithms, and the role of randomization more generally.
Distributionally robust optimization and its tractable approximations
Operations Research
Cited by 12 (3 self)
In this paper, we focus on a linear optimization problem with uncertainties, having expectations in the objective and in the set of constraints. We present a modular framework to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear rules. Our framework begins by affinely extending the set of primitive uncertainties to generate new linear decision rules of larger dimension, which are therefore more flexible. Next, we develop new piecewise-linear decision rules that allow a more flexible reformulation of the original problem. The reformulated problem will generally contain terms with expectations on the positive parts of the recourse variables. Finally, we convert the uncertain linear program into a deterministic convex program by constructing distributionally robust bounds on these expectations. These bounds are constructed by first using different pieces of information on the distribution of the underlying uncertainties to develop separate bounds, and then integrating them into a combined bound that is better than each of the individual bounds.
Goal Driven Optimization
2006
Cited by 10 (6 self)
Achieving a targeted objective, goal, or aspiration level is a relevant aspect of decision making under uncertainty. We develop a goal-driven stochastic optimization model that takes into account an aspiration level. Our model maximizes the shortfall aspiration level criterion, which encompasses the probability of success in achieving the goal and an expected level of underperformance or shortfall. The key advantage of the proposed model is its tractability. We show that the proposed model reduces to solving a small collection of stochastic linear optimization problems with objectives evaluated under the popular conditional-value-at-risk (CVaR) measure. Using techniques from robust optimization, we propose a decision-rule-based deterministic approximation of the goal-driven optimization problem obtained by solving a polynomial number of second-order cone optimization problems (SOCPs) with respect to the desired accuracy. We compare the numerical performance of the deterministic approximation with a sampling approximation and report the computational insights.
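The CVaR measure this abstract evaluates objectives under has a simple empirical estimator: average the worst (1 - α) fraction of sampled losses. The sketch below is an assumed sample-based estimator for illustration, not the paper's reformulation.

```python
def empirical_cvar(losses, alpha=0.95):
    """CVaR_alpha estimate: average of the worst (1 - alpha) share of the losses."""
    worst = sorted(losses, reverse=True)           # largest losses first
    k = max(1, round((1 - alpha) * len(losses)))   # size of the tail to average
    return sum(worst[:k]) / k

# mean of the 5 worst of the losses 1..100, i.e. (96 + 97 + 98 + 99 + 100) / 5
print(empirical_cvar(list(range(1, 101)), 0.95))   # -> 98.0
```

CVaR is a coherent risk measure, which is what makes the reduction to stochastic linear optimization problems in the paper tractable.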
A linear decision-based approximation approach to stochastic programming
Oper. Res.
Cited by 5 (1 self)
Stochastic optimization, especially with multistage models, is well known to be computationally excruciating. Moreover, such models require exact specifications of the probability distributions of the underlying uncertainties, which are often unavailable. In this paper, we propose tractable methods of addressing a general class of multistage stochastic optimization problems, which assume only limited information about the distributions of the underlying uncertainties, such as known mean, support, and covariance. One basic idea of our methods is to approximate the recourse decisions via decision rules. We first examine linear decision rules in detail and show that even for problems with complete recourse, linear decision rules can be inadequate and even lead to infeasible instances. Hence, we propose several new decision rules that improve upon linear decision rules while keeping the approximate models computationally tractable. Specifically, our approximate models take the form of so-called second-order cone (SOC) programs, which can be solved efficiently both in theory and in practice. We also present computational evidence indicating that our approach is a viable, and possibly advantageous, alternative to existing stochastic optimization solution techniques in solving a two-stage stochastic optimization problem with complete recourse.
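The inadequacy of linear decision rules mentioned in this abstract can be seen on a one-dimensional toy constraint. The example below is entirely illustrative (not from the paper): the recourse y(ξ) must dominate |ξ| on [-1, 1], which forces any feasible linear rule to pay for the worst case everywhere, while a simple piecewise-linear rule matches the requirement exactly.

```python
# Recourse must satisfy y(xi) >= |xi| for every xi in [-1, 1].  A linear rule
# y(xi) = y0 + y1*xi can only achieve this with y0 >= 1, so it over-provisions
# away from the worst case; a piecewise-linear rule tracks |xi| with zero slack.
def linear_rule(xi, y0=1.0, y1=0.0):
    return y0 + y1 * xi

def piecewise_rule(xi):
    return max(xi, -xi)   # one simple rule built from two linear pieces

grid = [i / 10.0 for i in range(-10, 11)]
assert all(linear_rule(xi) >= abs(xi) for xi in grid)      # both rules are feasible
assert all(piecewise_rule(xi) >= abs(xi) for xi in grid)

avg_linear = sum(linear_rule(xi) for xi in grid) / len(grid)     # 1.0 everywhere
avg_pwl = sum(piecewise_rule(xi) for xi in grid) / len(grid)     # ~0.52, strictly cheaper
```

This is the flavor of gap the paper's improved decision rules close while keeping the approximate models second-order cone representable.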
Some Large Deviations Results For Latin Hypercube Sampling
Cited by 4 (2 self)
Large deviations theory is a well-studied area which has been shown to have numerous applications. Broadly speaking, the theory deals with analytical approximations of probabilities of certain types of rare events. Moreover, the theory has recently proven instrumental in the study of the complexity of approximations of stochastic optimization problems. The typical results, however, assume that the underlying random variables are either i.i.d. or exhibit some form of Markovian dependence. Our interest in this paper is to study the validity of large deviations results in the context of estimators built with Latin Hypercube sampling, a well-known sampling technique for variance reduction. We show that a large deviation principle holds for Latin Hypercube sampling for functions in one dimension and for separable multidimensional functions. Moreover, the upper bound of the probability of a large deviation in these cases is no higher under Latin Hypercube sampling than it is under Monte Carlo sampling. We extend the latter property to functions that are monotone in each argument. Numerical experiments illustrate the theoretical results presented in the paper.
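The variance-reduction property this abstract builds on is easy to see in one dimension, where Latin Hypercube sampling reduces to stratified sampling. The sketch below is a minimal illustration with assumed choices (integrand u², sample sizes, replication count), not the paper's experiments.

```python
import random

random.seed(2)

def lhs_sample(n):
    """One-dimensional Latin Hypercube sample: exactly one uniform point per stratum."""
    strata = list(range(n))
    random.shuffle(strata)                 # random ordering of strata (matters in >1 dim)
    return [(s + random.random()) / n for s in strata]

def mc_sample(n):
    """Plain Monte Carlo: n independent uniforms on [0, 1]."""
    return [random.random() for _ in range(n)]

def mean_and_var(sampler, n=1000, reps=200):
    """Mean and variance of the sample-mean estimator of integral of u^2 over [0, 1]."""
    means = [sum(u * u for u in sampler(n)) / n for _ in range(reps)]
    mu = sum(means) / reps
    return mu, sum((m - mu) ** 2 for m in means) / reps

mu_lhs, var_lhs = mean_and_var(lhs_sample)
mu_mc, var_mc = mean_and_var(mc_sample)
# both estimators are unbiased for 1/3; stratification makes var_lhs far below var_mc
```

The paper's question is finer than variance: whether the exponential decay rates of large-deviation probabilities under LHS match or beat those under Monte Carlo.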