Results 1–10 of 92
Robust Portfolio Selection Problems
Mathematics of Operations Research, 2001
Cited by 141 (8 self)
Abstract:
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce "uncertainty structures" for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures used to estimate the market parameters. We demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
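The second-order cone reformulation rests on a closed-form worst case: over an ellipsoidal uncertainty set for the mean return vector, the worst-case portfolio return is the nominal return minus a norm penalty. A minimal numerical sketch (illustrative data, not from the paper) checks that closed form against brute-force sampling:

```python
import numpy as np

# Worst-case portfolio return under an ellipsoidal uncertainty set for the
# mean vector: {mu : ||S^{-1}(mu - mu_hat)|| <= kappa}, where Sigma = S S^T.
# The minimum of mu @ w over this set has the closed form
#     mu_hat @ w  -  kappa * ||S^T w||,
# which is exactly the second-order-cone-representable term.
# (All numbers below are illustrative, not from the paper.)
rng = np.random.default_rng(0)

mu_hat = np.array([0.06, 0.10, 0.04])        # estimated mean returns
Sigma  = np.array([[0.04, 0.01, 0.00],
                   [0.01, 0.09, 0.02],
                   [0.00, 0.02, 0.03]])      # estimation covariance
S = np.linalg.cholesky(Sigma)                # Sigma = S @ S.T
kappa = 1.5                                  # confidence radius
w = np.array([0.5, 0.3, 0.2])                # a fixed candidate portfolio

closed_form = mu_hat @ w - kappa * np.linalg.norm(S.T @ w)

# Sample directions on the ellipsoid's boundary and take the empirical worst case.
u = rng.normal(size=(200_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform unit directions
mus = mu_hat + kappa * u @ S.T                  # boundary points of the ellipsoid
sampled_worst = (mus @ w).min()
```

Because the norm term is second-order-cone representable, maximizing this worst case over portfolios w remains a tractable conic program.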
Robust optimization – methodology and applications
2002
Cited by 115 (4 self)
Abstract:
Robust Optimization (RO) is a modeling methodology, combined with computational tools, to process optimization problems in which the data are uncertain and only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic, and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design, and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real-world LPs can be severely affected by small perturbations of the data, and that the RO methodology can be successfully used to overcome this phenomenon.
Uncertain convex programs: Randomized solutions and confidence levels
Mathematical Programming, Ser. A, 2004
Cited by 84 (11 self)
Abstract:
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm in which one seeks a solution that simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem obtained by taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
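For intuition, a toy one-dimensional instance (uniform uncertainty, chosen here purely for illustration) shows the scenario mechanism and the roughly 1/(N+1) violation probability that sampling N constraints buys for a single decision variable:

```python
import random

# Scenario approach in its simplest form: the chance constraint
# P(x >= xi) >= 1 - eps, with xi ~ Uniform(0, 1), is replaced by N sampled
# constraints x >= xi_1, ..., x >= xi_N.  Minimizing x subject to the samples
# gives x* = max_i xi_i, whose true violation probability is 1 - x*; for one
# decision variable this averages about 1/(N+1), shrinking as N grows.
# (Toy distribution for illustration; the paper treats general convex programs.)
random.seed(42)

def scenario_solution(n_samples):
    """Minimize x subject to n_samples sampled constraints x >= xi."""
    return max(random.random() for _ in range(n_samples))

N = 1000
trials = 2000
avg_violation = sum(1.0 - scenario_solution(N) for _ in range(trials)) / trials
```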
Theory and applications of Robust Optimization
2007
Cited by 66 (14 self)
Abstract:
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
Robust game theory
2006
Cited by 48 (0 self)
Abstract:
We present a distribution-free model of incomplete-information games, both with and without private information, in which the players use a robust optimization approach to contend with payoff uncertainty. Our “robust game” model relaxes the assumptions of Harsanyi’s Bayesian game model, and provides an alternative distribution-free equilibrium concept, which we call “robust-optimization equilibrium,” to that of the ex post equilibrium. We prove that the robust-optimization equilibria of an incomplete-information game subsume the ex post equilibria of the game and are, unlike the latter, guaranteed to exist when the game is finite and has a bounded payoff uncertainty set. For arbitrary robust finite games with bounded polyhedral payoff uncertainty sets, we show that we can compute a robust-optimization equilibrium by methods analogous to those for identifying a Nash equilibrium of a finite game with complete information. In addition, we present computational results.
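The single-player core of the model is a worst-case (maximin) best response to payoff uncertainty. A tiny hypothetical example, with made-up action names and payoff intervals, illustrates it; the paper's equilibrium concept couples such worst-case best responses across players:

```python
# Robust (maximin) best response under interval payoff uncertainty: each
# action's payoff is only known to lie in an interval, and a robust player
# picks the action whose worst-case payoff is largest.
# (Action names and intervals are invented for illustration.)

payoff_intervals = {
    "aggressive":   (-2.0, 6.0),   # high upside, bad worst case
    "balanced":     ( 1.0, 3.0),
    "conservative": ( 0.5, 1.5),
}

def robust_best_response(intervals):
    # The worst-case payoff of an action is the lower endpoint of its interval.
    return max(intervals, key=lambda a: intervals[a][0])

choice = robust_best_response(payoff_intervals)
```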
On tractable approximations of uncertain linear matrix inequalities affected by interval uncertainty
SIAM Journal on Optimization, 2002
Cited by 45 (11 self)
Abstract:
We present efficiently verifiable sufficient conditions for the validity of specific NP-hard semi-infinite systems of Linear Matrix Inequalities (LMIs) arising from LMIs with uncertain data, and demonstrate that these conditions are “tight” up to an absolute constant factor. In particular, we prove that given an n × n interval matrix U_ρ = {A : |A_ij − A*_ij| ≤ ρ C_ij}, one can build a computable lower bound, accurate within the factor π/2, on the supremum of those ρ for which all instances of U_ρ share a common quadratic Lyapunov function. We then obtain a similar result for the problem of Quadratic Lyapunov Stability Synthesis. Finally, we apply our techniques to the problem of maximizing a homogeneous polynomial of degree 3 over the unit cube.
Key words: robust semidefinite optimization, data uncertainty, Lyapunov stability synthesis, relaxations of combinatorial problems
AMS subject classifications: 90C05, 90C25, 90C30
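The brute force that the paper's bound avoids can be made concrete: since the largest eigenvalue of AᵀP + PA is convex in A, checking negative definiteness at all vertices of the interval family certifies a common quadratic Lyapunov function on the whole family, at a cost exponential in the number of uncertain entries. A small sketch, with an illustrative 2×2 nominal matrix and the (arbitrary) choice P = I:

```python
import itertools
import numpy as np

# Vertex check for a common quadratic Lyapunov function over the interval
# family U_rho = {A : |A_ij - A*_ij| <= rho * C_ij}.  The map
# A -> A.T @ P + P @ A is affine in A and lambda_max is convex, so negative
# definiteness at every vertex implies it on all of U_rho.  The cost is
# 2^(n^2) vertices, which is why the paper's polynomial-time bound matters.
# (Nominal matrix, C, rho values, and P = I are illustrative choices.)
A_star = np.array([[-2.0,  0.5],
                   [-0.5, -3.0]])
C = np.ones((2, 2))
P = np.eye(2)

def common_lyapunov_at_vertices(rho):
    """True iff A.T P + P A is negative definite at every vertex of U_rho."""
    for signs in itertools.product((-1.0, 1.0), repeat=4):
        A = A_star + rho * C * np.array(signs).reshape(2, 2)
        M = A.T @ P + P @ A
        if np.max(np.linalg.eigvalsh(M)) >= 0:
            return False
    return True

ok_small = common_lyapunov_at_vertices(0.5)   # small radius: certified
ok_large = common_lyapunov_at_vertices(5.0)   # large radius: fails
```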
Two-Stage Robust Network Flow and Design under Demand Uncertainty
Forthcoming in Operations Research, 2004
Cited by 41 (3 self)
Abstract:
We describe a two-stage robust optimization approach for solving network flow and design problems with uncertain demand. In two-stage network optimization one defers a subset of the flow decisions until after the realization of the uncertain demand. Availability of such a recourse action allows one to come up with less conservative solutions compared to single-stage optimization. However, this advantage often comes at a price: two-stage optimization is, in general, significantly harder than single-stage optimization. For network flow and design under demand uncertainty we give a characterization of the first-stage robust decisions with an exponential number of constraints, and prove that the corresponding separation problem is NP-hard even for a network flow problem on a bipartite graph. We show, however, that if the second-stage network topology is totally ordered or an arborescence, then the separation problem is tractable. Unlike single-stage robust optimization under demand uncertainty, two-stage robust optimization allows one to control the conservatism of the solutions by means of an allowed “budget for demand uncertainty.” Using a budget of uncertainty we provide an upper ...
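The "budget of uncertainty" idea can be seen in miniature on a single shared resource (a toy stand-in for the network setting of the paper; all numbers are made up):

```python
# Budget of uncertainty on demands: each d_i lies in [d_bar_i, d_bar_i + delta_i],
# but at most gamma of them may sit at their upper limit simultaneously.  For a
# single shared resource, the worst case is the nominal total plus the gamma
# largest deviations, which interpolates between the nominal plan (gamma = 0)
# and the fully conservative plan (gamma = number of demands).
# (Illustrative numbers; the paper treats general networks with flow recourse.)

d_bar = [10, 20, 15, 5]       # nominal demands
delta = [4, 6, 3, 8]          # maximum upward deviations

def robust_capacity(gamma):
    """Smallest capacity covering every scenario with at most gamma deviations."""
    worst_deviations = sorted(delta, reverse=True)[:gamma]
    return sum(d_bar) + sum(worst_deviations)

cap_nominal = robust_capacity(0)   # no deviation allowed
cap_budget2 = robust_capacity(2)   # at most two demands deviate
cap_full    = robust_capacity(4)   # fully conservative
```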
An introduction to convex optimization for communications and signal processing
IEEE J. Sel. Areas Commun., 2006
Cited by 40 (2 self)
Abstract: Convex optimization methods are widely used in the ...
Ambiguous Chance Constrained Problems and Robust Optimization
Mathematical Programming, 2004
Cited by 39 (1 self)
Abstract:
In this paper we study ambiguous chance constrained problems, where the distributions of the random parameters in the problem are themselves uncertain. We primarily focus on the special case where the uncertainty set Q of distributions is of the form Q = {Q : ρ_p(Q, Q_0) ≤ β}, where ρ_p denotes the Prohorov metric. The ambiguous chance constrained problem is approximated by a robust sampled problem in which each constraint is a robust constraint centered at a sample drawn according to the central measure Q_0. The main contribution of this paper is to show that the robust sampled problem is a good approximation for the ambiguous chance constrained problem with high probability. This result is established using the Strassen–Dudley Representation Theorem, which states that when the distributions of two random variables are close in the Prohorov metric, one can construct a coupling of the random variables such that the samples are close with high probability. We also show that the robust sampled problem can be solved efficiently, both in theory and in practice.
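A one-dimensional reading of the robust sampled problem, under the assumption (ours, for illustration) that ambiguity enters as an additive shift of radius β and that the nominal measure is Gaussian:

```python
import random

# Robust sampling sketch in one dimension: the true distribution Q is only
# known to lie within ambiguity radius beta of a nominal Q_0, so each sampled
# constraint x >= xi (xi drawn from Q_0) is replaced by the robust constraint
# x >= xi + beta, i.e. a constraint centered at the sample and inflated by the
# ambiguity radius.  The robust sampled solution is the plain sampled solution
# shifted up by beta.  (Gaussian Q_0 and the scalar constraint are our
# illustrative choices, not the paper's general setting.)
random.seed(7)

beta = 0.1                      # ambiguity radius around the nominal measure
N = 500
samples = [random.gauss(0.0, 1.0) for _ in range(N)]

x_sampled = max(samples)                      # plain sampled problem
x_robust  = max(s + beta for s in samples)    # robust sampled problem
```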