Robust solutions to uncertain linear programs
 OR Letters
, 1999
Abstract

Cited by 358 (15 self)
We consider linear programs with uncertain parameters, lying in some prescribed uncertainty set, where part of the variables must be determined before the realization of the uncertain parameters (“non-adjustable variables”), while the other part are variables that can be chosen after the realization (“adjustable variables”). We extend the Robust Optimization methodology ([1, 4, 5, 6, 7, 9, 13, 14]) to this situation by introducing the Adjustable Robust Counterpart (ARC) associated with an LP of the above structure. Often the ARC is significantly less conservative than the usual Robust Counterpart (RC); however, in most cases the ARC is computationally intractable (NP-hard). This difficulty is addressed by restricting the adjustable variables to be affine functions of the uncertain data. The ensuing Affinely Adjustable Robust Counterpart (AARC) problem is then shown to be, in certain important cases, equivalent to a tractable optimization problem (typically an LP or a semidefinite problem), and in other cases to have a tight approximation which is tractable. The AARC approach is illustrated by applying it to a multistage inventory management problem.
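The usual (non-adjustable) Robust Counterpart the abstract contrasts against is tractable for simple uncertainty sets. As a minimal sketch, assuming box uncertainty on the coefficients of a single linear constraint, the worst case has a closed form; all numbers below are illustrative, not from the paper:

```python
import numpy as np

# Worst-case reduction for a single robust linear constraint, assuming box
# uncertainty:  a^T x <= b must hold for every a with |a - a_bar| <= d
# (componentwise), which is equivalent to  a_bar^T x + d^T |x| <= b.
# All numbers are illustrative, not from the paper.

def robust_lhs(a_bar, d, x):
    """Worst-case value of a^T x over the box |a - a_bar| <= d."""
    return a_bar @ x + d @ np.abs(x)

a_bar = np.array([1.0, 2.0])   # nominal constraint coefficients
d = np.array([0.5, 0.5])       # uncertainty half-widths
x = np.array([1.0, 1.0])       # a candidate decision

# Brute-force check: the worst case is attained at a vertex of the box.
signs = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
brute = max((a_bar + np.array(s) * d) @ x for s in signs)

print(robust_lhs(a_bar, d, x))  # 4.0
print(brute)                    # 4.0
```

Because the reduced constraint is again linear (after splitting |x| into positive and negative parts), the robust LP stays an LP, which is the tractability the adjustable variants try to preserve.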
Conditional value-at-risk for general loss distributions
 Journal of Banking and Finance
, 2002
Abstract

Cited by 356 (28 self)
Abstract. Fundamental properties of conditional value-at-risk, as a measure of risk with significant advantages over value-at-risk, are derived for loss distributions in finance that can involve discreteness. Such distributions are of particular importance in applications because of the prevalence of models based on scenarios and finite sampling. Conditional value-at-risk is able to quantify dangers beyond value-at-risk, and moreover it is coherent. It provides optimization shortcuts which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out of reach. The numerical efficiency and stability of such calculations, shown in several case studies, are illustrated further with an example of index tracking. Key Words: value-at-risk, conditional value-at-risk, mean shortfall, coherent risk measures, risk sampling, scenarios, hedging, index tracking, portfolio optimization, risk management
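The "optimization shortcut" referred to above is the Rockafellar–Uryasev representation of CVaR as the minimum of an auxiliary function. A minimal sketch on a discrete loss sample (the sample itself is illustrative):

```python
import numpy as np

# Rockafellar-Uryasev representation: for a loss sample L_1..L_N,
#   CVaR_alpha(L) = min over t of  t + mean(max(L - t, 0)) / (1 - alpha),
# and the minimum is attained at a sample point (the objective is piecewise
# linear and convex with breakpoints at the L_i), so searching t over the
# sample suffices. The loss sample below is illustrative.

def cvar(losses, alpha):
    losses = np.asarray(losses, dtype=float)
    candidates = [t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)
                  for t in losses]
    return float(min(candidates))

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(cvar(losses, alpha=0.8))  # 9.5, the average of the worst 20% of losses
```

In a portfolio setting the same objective, with losses linear in the portfolio weights, becomes a linear program once the max terms are replaced by auxiliary variables, which is what makes the large-scale calculations practical.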
The scenario approach to robust control design
 IEEE TRANS. AUTOM. CONTROL
, 2006
Abstract

Cited by 114 (9 self)
This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed in the form of minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
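The constraint-sampling idea can be seen on a deliberately transparent toy problem (the problem and seed are ours; the paper treats general convex problems such as parameter-dependent LMIs):

```python
import numpy as np

# Toy scenario problem:  minimize x  subject to  x >= delta  for all
# delta in [0, 1), delta uncertain. Sampling N scenarios and solving only
# those N constraints gives x* = max_i delta_i, and a fresh realization
# violates x* >= delta with probability exactly 1 - x*.
# The toy problem and the seed are ours, chosen for transparency.

rng = np.random.default_rng(0)

def scenario_solution(n_scenarios):
    deltas = rng.uniform(size=n_scenarios)
    return float(deltas.max())   # optimal for the sampled problem

x_10 = scenario_solution(10)
x_1000 = scenario_solution(1000)
print(1.0 - x_10, 1.0 - x_1000)  # violation probability shrinks with N
```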
Uncertain convex programs: Randomized solutions and confidence levels
 MATH. PROGRAM., SER. A (2004)
, 2004
Abstract

Cited by 110 (12 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem that results from taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
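As a rough numerical check of the qualitative claim (the violated measure decreases to zero as N grows), one can average the violation of the randomized solution of a one-dimensional toy problem over many trials; for this particular toy the expectation is exactly 1/(N+1). The experiment is ours and does not reproduce the paper's general bound:

```python
import numpy as np

# Monte Carlo check on a 1-D toy problem
# (minimize x s.t. x >= delta, delta ~ Uniform[0,1]): the randomized
# solution is x* = max of N samples, its violated measure is 1 - x*,
# and E[1 - x*] = 1 / (N + 1), decreasing to zero as N grows.
# The experiment is ours and does not reproduce the paper's general bound.

rng = np.random.default_rng(42)

def mean_violation(n_samples, n_trials=20000):
    deltas = rng.uniform(size=(n_trials, n_samples))
    return float(np.mean(1.0 - deltas.max(axis=1)))

est = mean_violation(9)
print(est)  # close to 1 / (9 + 1) = 0.1
```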
Theory and applications of Robust Optimization
, 2007
Abstract

Cited by 100 (14 self)
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we will highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
Portfolio Optimization with Conditional Value-at-Risk objective and Constraints
, 1999
Optimization under uncertainty: State-of-the-art and opportunities
 Computers and Chemical Engineering
, 2004
Abstract

Cited by 86 (0 self)
A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Uncertainty, for instance, governs the prices of fuels, the availability of electricity, and the demand for chemicals. A key difficulty in optimization under uncertainty is in dealing with an uncertainty space that is huge and frequently leads to very large-scale optimization models. Decision-making under uncertainty is often further complicated by the presence of integer decision variables to model logical and other discrete decisions in a multiperiod or multistage setting. This paper reviews theory and methodology that have been developed to cope with the complexity of optimization problems under uncertainty. We discuss and contrast the classical recourse-based stochastic programming, robust stochastic programming, probabilistic (chance-constraint) programming, fuzzy programming, and stochastic dynamic programming. The advantages and shortcomings of these models are reviewed and illustrated through examples. Applications and the state-of-the-art in computations are also reviewed. Finally, we discuss several main areas for future development in this field. These include development of polynomial-time approximation schemes for multistage stochastic programs and the application of global optimization algorithms to two-stage and chance-constraint formulations.
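The recourse-based (two-stage) model contrasted above can be sketched on the classical newsvendor problem: commit to a first-stage decision, observe the uncertainty, then pay recourse costs. All numbers below are illustrative:

```python
import numpy as np

# Two-stage recourse toy (newsvendor): choose the order q now (first stage),
# observe demand, then pay per-unit shortage or leftover costs (recourse).
# Minimize expected cost over a finite scenario set by enumeration.
# All numbers are illustrative.

demands = np.array([80, 100, 120])  # equally likely demand scenarios
probs = np.full(3, 1 / 3)
c_under, c_over = 5.0, 1.0          # per-unit shortage / leftover cost

def expected_cost(q):
    shortage = np.maximum(demands - q, 0)
    leftover = np.maximum(q - demands, 0)
    return float(probs @ (c_under * shortage + c_over * leftover))

# Critical-fractile logic: 5 / (5 + 1) ~ 0.83 exceeds 2/3, so the optimal
# order is the largest scenario demand.
best_q = min(range(60, 141), key=expected_cost)
print(best_q)                           # 120
print(round(expected_cost(best_q), 6))  # 20.0
```

With integer decisions and many stages the scenario tree grows combinatorially, which is exactly the scale difficulty the review highlights.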
Energy efficient building climate control using stochastic model predictive control and weather predictions
 in Proc. ACC’10
, 2010
Abstract

Cited by 62 (3 self)
Abstract—One of the most critical challenges facing society today is climate change and thus the need to realize massive energy savings. Since buildings account for about 40% of global final energy use, energy efficient building climate control can make an important contribution. In this paper we develop and analyze a Stochastic Model Predictive Control (SMPC) strategy for building climate control that takes into account weather predictions to increase energy efficiency while respecting constraints resulting from desired occupant comfort. We investigate a bilinear model under stochastic uncertainty with probabilistic, time-varying constraints. We report on the assessment of this control strategy in a large-scale simulation study where the control performance with different building variants and under different weather conditions is studied. For selected cases the SMPC approach is analyzed in detail and shown to significantly outperform current control practice.
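One standard way stochastic MPC handles probabilistic comfort constraints is constraint tightening (back-off) on the nominal prediction. A minimal sketch under an assumed additive Gaussian disturbance; the numbers are illustrative, and the paper's actual model is bilinear and time-varying:

```python
from statistics import NormalDist

# Chance-constraint tightening (back-off): a comfort constraint
#   Pr(T >= T_min) >= 1 - eps
# under an assumed additive Gaussian disturbance with std sigma is enforced
# on the nominal prediction by  T_nom >= T_min + z_{1-eps} * sigma.
# Numbers are illustrative; the paper's model is bilinear and time-varying.

T_min, sigma, eps = 21.0, 0.5, 0.05
z = NormalDist().inv_cdf(1 - eps)   # ~1.645 standard-normal quantile
tightened_bound = T_min + z * sigma
print(round(tightened_bound, 2))    # 21.82

# Any nominal prediction meeting the tightened bound satisfies the
# chance constraint:
T_nom = 22.0
prob_ok = 1 - NormalDist(mu=T_nom, sigma=sigma).cdf(T_min)
print(prob_ok >= 1 - eps)           # True
```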
Computational complexity of stochastic programming problems
, 2005
Abstract

Cited by 57 (1 self)
Stochastic programming is the subfield of mathematical programming that considers optimization in the presence of uncertainty. During the last four decades a vast quantity of literature on the subject has appeared. Developments in the theory of computational complexity allow us to establish the theoretical complexity of a variety of stochastic programming problems studied in this literature. Under the assumption that the stochastic parameters are independently distributed, we show that two-stage stochastic programming problems are #P-hard. Under the same assumption we show that certain multistage stochastic programming problems are PSPACE-hard. The problems we consider are nonstandard in that distributions of stochastic parameters in later stages depend on decisions made in earlier stages.
An Efficient Algorithm for Statistical Minimization of Total Power under Timing Yield Constraints
 In DAC
, 2005
Abstract

Cited by 56 (1 self)
Power minimization under variability is formulated as a rigorous statistical robust optimization program with a guarantee of power and timing yields. Both power and timing metrics are treated probabilistically. Power reduction is performed by simultaneous sizing and dual threshold voltage assignment. An extremely fast runtime is achieved by casting the problem as a second-order conic problem and solving it using efficient interior-point optimization methods. When compared to the deterministic optimization, the new algorithm, on average, reduces static power by 31% and total power by 17% without the loss of parametric yield. The run time on a variety of public and industrial benchmarks is 30X faster than other known statistical power minimization algorithms.
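The second-order conic casting mentioned above can be sketched as follows, assuming (as is common in statistical timing) a Gaussian path-delay model; the gate data are toy numbers, not the paper's benchmarks:

```python
import numpy as np
from statistics import NormalDist

# Why a timing-yield constraint is second-order conic: if the path delay is
# Gaussian with mean  m^T x  and std  ||A x||  in the sizing vector x, then
#   Pr(delay <= T_max) >= yield   <=>   m^T x + z_yield * ||A x|| <= T_max.
# The gate data below are toy numbers, not the paper's benchmarks.

z = NormalDist().inv_cdf(0.99)          # ~2.326 for a 99% timing yield
mean_delay = np.array([1.0, 1.5, 2.0])  # per-stage mean delays
A = np.diag([0.1, 0.15, 0.2])           # independent per-stage delay stds

def conic_lhs(x):
    return float(mean_delay @ x + z * np.linalg.norm(A @ x))

x = np.ones(3)   # candidate sizing vector
T_max = 5.2
print(conic_lhs(x) <= T_max)  # True for this candidate
```

Constraints of the form "affine plus a norm" are exactly what interior-point SOCP solvers handle efficiently, which is where the reported speed comes from.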