Results 1–10 of 265
Robust solutions to uncertain linear programs
OR Letters, 1999
Cited by 231 (14 self)
Abstract:
We consider linear programs with uncertain parameters, lying in some prescribed uncertainty set, where part of the variables must be determined before the realization of the uncertain parameters (“non-adjustable variables”), while the other part are variables that can be chosen after the realization (“adjustable variables”). We extend the Robust Optimization methodology ([1, 4, 5, 6, 7, 9, 13, 14]) to this situation by introducing the Adjustable Robust Counterpart (ARC) associated with an LP of the above structure. Often the ARC is significantly less conservative than the usual Robust Counterpart (RC); in most cases, however, the ARC is computationally intractable (NP-hard). This difficulty is addressed by restricting the adjustable variables to be affine functions of the uncertain data. The ensuing Affinely Adjustable Robust Counterpart (AARC) problem is then shown, in certain important cases, to be equivalent to a tractable optimization problem (typically an LP or a semidefinite problem), and in other cases to admit a tight tractable approximation. The AARC approach is illustrated by applying it to a multistage inventory management problem.
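The gap between the RC and the ARC can be seen on a toy instance. The following sketch uses a hypothetical two-stage problem (not from the paper): minimize x subject to x + y ≥ ζ and 0 ≤ y ≤ ζ for every ζ in [0, 1], where y is the adjustable variable. The uncertainty set is discretized and feasibility checked by brute force.

```python
# Toy instance (illustrative, not from the paper) contrasting the RC, where y
# must be a single constant, with the AARC, where y may depend affinely on z.
import numpy as np

zs = np.linspace(0.0, 1.0, 101)  # discretized uncertainty set [0, 1]

def feasible(x, rule):
    """Check x + y >= z and 0 <= y <= z for every sampled z, with y = rule(z)."""
    y = rule(zs)
    return bool(np.all(x + y >= zs - 1e-9)
                and np.all(y >= -1e-9)
                and np.all(y <= zs + 1e-9))

# RC: y is non-adjustable. The only constant satisfying 0 <= y <= z for all
# z in [0, 1] is y = 0, which forces x >= max z = 1.
rc_x = 1.0
assert feasible(rc_x, lambda z: np.zeros_like(z))
assert not feasible(0.5, lambda z: np.zeros_like(z))

# AARC: y is an affine function of z. The rule y(z) = z is feasible for
# every z and lets x drop all the way to 0.
aarc_x = 0.0
assert feasible(aarc_x, lambda z: z)

print(rc_x, aarc_x)  # prints 1.0 0.0: the affine rule removes the conservatism
```

Here the RC optimum is 1 while the AARC optimum is 0, a simple instance of the reduced conservatism the abstract describes.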
Robust discrete optimization and network flows
Mathematical Programming Series B, 2003
Cited by 125 (23 self)
Abstract:
We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0-1 discrete optimization problem on n variables, we solve the robust counterpart by solving at most n+1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0-1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP-hard α-approximable 0-1 discrete optimization problem remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.
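The n+1-nominal-calls reduction above can be sketched on a small instance. Assumptions: the nominal problem is "pick exactly k of n items at minimum cost", each cost c_j can rise by at most d_j, and at most Γ costs deviate simultaneously; all numbers are illustrative, not from the paper.

```python
# Hedged sketch of the reduction described above: the robust counterpart of a
# 0-1 problem with interval cost uncertainty (at most Gamma deviations at
# once) is solved via a small number of calls to the *nominal* solver, one
# per candidate threshold t, with costs inflated by max(d_j - t, 0).
from itertools import combinations

c = [4, 3, 6, 2, 5]      # nominal costs (illustrative)
d = [3, 1, 4, 2, 1]      # maximum upward deviations
k, Gamma = 3, 2          # pick 3 items; at most 2 costs deviate at once
n = len(c)

def nominal(costs):
    """Nominal oracle: cheapest selection of exactly k items."""
    return min(sum(costs[j] for j in S) for S in combinations(range(n), k))

# Candidate thresholds: the distinct deviation values plus 0.
thresholds = sorted(set(d) | {0}, reverse=True)

# Robust value: min over thresholds t of Gamma*t + nominal problem with
# each item's cost inflated by max(d_j - t, 0).
Z = min(Gamma * t + nominal([c[j] + max(d[j] - t, 0) for j in range(n)])
        for t in thresholds)

# Brute-force check: the worst case adds the Gamma largest deviations among
# the chosen items.
def worst(S):
    return sum(c[j] for j in S) + sum(sorted((d[j] for j in S), reverse=True)[:Gamma])

Z_bf = min(worst(S) for S in combinations(range(n), k))
print(Z, Z_bf)  # prints 13 13: the reduction matches the brute-force optimum
```

The brute-force enumeration is only there to verify the reduction on this tiny instance; the point of the result is that the per-threshold subproblems are ordinary nominal problems.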
Semidefinite optimization
Acta Numerica, 2001
Cited by 105 (2 self)
Abstract:
Optimization problems in which the variable is not a vector but a symmetric matrix which is required to be positive semidefinite have been intensely studied in the last ten years. Part of the reason for the interest stems from the applicability of such problems to such diverse areas as designing the strongest column, checking the stability of a differential inclusion, and obtaining tight bounds for hard combinatorial optimization problems. Part also derives from great advances in our ability to solve such problems efficiently in theory and in practice (perhaps “or” would be more appropriate: the most effective computational methods are not always provably efficient in theory, and vice versa). Here we describe this class of optimization problems, give a number of examples demonstrating its significance, outline its duality theory, and discuss algorithms for solving such problems.
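For reference, the class of problems the abstract describes is usually stated in the following standard primal-dual form (a textbook formulation, not taken from this survey):

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X \in \mathbb{S}^n} \ \langle C, X\rangle
  && \text{s.t. } \langle A_i, X\rangle = b_i,\ i = 1,\dots,m, \quad X \succeq 0,\\
\text{(D)}\quad & \max_{y \in \mathbb{R}^m} \ b^{\mathsf T} y
  && \text{s.t. } \sum_{i=1}^{m} y_i A_i \preceq C,
\end{aligned}
```

where ⟨C, X⟩ denotes the trace inner product; weak duality, b᳐ᵀy ≤ ⟨C, X⟩ for any feasible pair, follows directly since ⟨C − Σᵢ yᵢAᵢ, X⟩ ≥ 0 when both matrices are positive semidefinite.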
Robust solutions of Linear Programming problems contaminated with uncertain data
Mathematical Programming, 2000
Cited by 101 (6 self)
Abstract:
Optimal solutions of Linear Programming problems may become severely infeasible if the nominal data is slightly perturbed. We demonstrate this phenomenon by studying 90 LPs from the well-known NETLIB collection. We then apply the Robust Optimization methodology (Ben-Tal and Nemirovski [13]; El Ghaoui et al. [5,6]) to produce “robust” solutions of the above LPs which are, in a sense, immune to uncertainty. Surprisingly, for the NETLIB problems these robust solutions lose nearly nothing in optimality.
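The mechanism behind this fragility is easy to reproduce: a solution that is tight on an ill-conditioned constraint row can violate it badly after a tiny relative data perturbation. The sketch below uses hypothetical numbers, not an actual NETLIB instance.

```python
# Illustrative sketch (hypothetical data, not from NETLIB) of severe
# infeasibility under tiny relative perturbations of the constraint matrix.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1000.0, -999.0])   # ill-conditioned constraint row: a @ x <= b
b = 1.0
x = np.array([1.0, 1.0])         # nominal solution, tight: a @ x == b

eps = 0.001                      # 0.1% relative entry-wise perturbation
worst = 0.0
for _ in range(1000):
    a_pert = a * (1 + eps * rng.uniform(-1, 1, size=a.shape))
    worst = max(worst, a_pert @ x - b)

# The violation can approach eps * sum(|a_j * x_j|) ~ 2.0, i.e. twice the
# entire right-hand side, from a 0.1% data change.
print(worst)
```

The large coefficients of opposite sign cancel at the nominal data, so the constraint's slack says nothing about its sensitivity; this is exactly the effect the robust methodology immunizes against.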
Robust Portfolio Selection Problems
Mathematics of Operations Research, 2001
Cited by 95 (8 self)
Abstract:
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce "uncertainty structures" for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures used to estimate the market parameters. We demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
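One standard instance of such an "uncertainty structure" is an ellipsoid around the estimated mean return; the worst case then has a closed form whose norm term is what produces the second-order cone reformulation. The sketch below (all numbers illustrative, not from the paper) checks the closed form against Monte-Carlo sampling for a fixed portfolio.

```python
# Hedged sketch: if the mean-return estimate lives in the ellipsoid
# { mu_hat + L u : ||u|| <= kappa } with L L^T = S, then for fixed weights w
# the worst-case return is mu_hat @ w - kappa * ||L^T w||. The norm term is
# exactly the second-order cone ingredient. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
mu_hat = np.array([0.08, 0.05, 0.03])      # estimated mean returns
S = np.array([[0.04, 0.01, 0.0],
              [0.01, 0.02, 0.0],
              [0.0,  0.0,  0.01]])          # shape matrix of the ellipsoid
L = np.linalg.cholesky(S)
kappa = 0.5                                 # confidence radius
w = np.array([0.5, 0.3, 0.2])               # fixed candidate portfolio

worst_closed = mu_hat @ w - kappa * np.linalg.norm(L.T @ w)

# Monte-Carlo check: sample mean vectors on the ellipsoid boundary and
# take the minimum achieved return.
us = rng.normal(size=(20000, 3))
us = kappa * us / np.linalg.norm(us, axis=1, keepdims=True)
worst_sampled = np.min((mu_hat + us @ L.T) @ w)

print(worst_closed, worst_sampled)  # sampled min approaches the closed form
```

The sampled worst case can only sit above the closed form (it searches a finite subset of the boundary) and converges to it as the sample grows.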
Robust optimization  methodology and applications
2002
Cited by 83 (3 self)
Abstract:
Robust Optimization (RO) is a modeling methodology, combined with computational tools, to process optimization problems in which the data are uncertain and only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic, and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are obtained explicitly, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design, and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real-world LPs can be severely affected by small perturbations of the data, and that the RO methodology can be successfully used to overcome this phenomenon.
Robust minimum variance beamforming
IEEE Transactions on Signal Processing, 2005
Cited by 62 (10 self)
Abstract—This paper introduces an extension of minimum variance beamforming that explicitly takes into account variation or uncertainty in the array response. Sources of this uncertainty include imprecise knowledge of the angle of arrival and uncertainty in the array manifold. In our method, uncertainty in the array manifold is explicitly modeled via an ellipsoid that gives the possible values of the array for a particular look direction. We choose weights that minimize the total weighted power output of the array, subject to the constraint that the gain should exceed unity for all array responses in this ellipsoid. The robust weight selection process can be cast as a second-order cone program that can be solved efficiently using Lagrange multiplier techniques. If the ellipsoid reduces to a single point, the method coincides with Capon’s method. We describe in detail several methods that can be used to derive an appropriate uncertainty ellipsoid for the array response. We form separate uncertainty ellipsoids for each component in the signal path (e.g., antenna, electronics) and then determine an aggregate uncertainty ellipsoid from these. We give new results for modeling the element-wise products of ellipsoids. We demonstrate the robust beamforming and the ellipsoidal modeling methods with several numerical examples. Index Terms—Ellipsoidal calculus, Hadamard product, robust beamforming, second-order cone programming.
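The single-point special case mentioned in the abstract, Capon's method, is simple enough to sketch directly: the weights are w = R⁻¹a / (aᴴR⁻¹a), giving unit gain toward the look direction while minimizing output power. The array geometry and numbers below are illustrative assumptions, not from the paper.

```python
# Minimal numpy sketch of Capon's method, the special case noted above
# (uncertainty ellipsoid shrunk to a single point). Uniform linear array
# with half-wavelength spacing; all scenario numbers are illustrative.
import numpy as np

n = 8                                    # number of array elements
theta = np.deg2rad(20.0)                 # look direction
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))   # steering vector

# Covariance model: unit-power noise plus a strong interferer at 50 degrees.
ai = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(50.0)))
R = np.eye(n) + 10.0 * np.outer(ai, ai.conj())

# Capon weights: w = R^{-1} a / (a^H R^{-1} a).
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)

print(abs(w.conj() @ a))   # distortionless: unit gain toward the look direction
print(abs(w.conj() @ ai))  # the interferer direction is strongly attenuated
```

The robust formulation in the paper replaces the single equality constraint wᴴa = 1 with a gain constraint over the whole uncertainty ellipsoid of steering vectors.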
Handbook of semidefinite programming
Cited by 60 (2 self)
Abstract:
Semidefinite programming (or SDP) has been one of the most exciting and active research areas in optimization during the 1990s. It has attracted researchers with very diverse backgrounds, including experts in convex programming, linear algebra, numerical optimization, combinatorial optimization, control theory, and statistics. This tremendous research activity was spurred by the discovery of important applications in combinatorial optimization and control theory, the development of efficient interior-point algorithms for solving SDP problems, and the depth and elegance of the underlying optimization theory. This book includes nineteen chapters on the theory, algorithms, and applications of semidefinite programming. Written by the leading experts on the subject, it offers an advanced and broad overview of the current state of the field. The coverage is somewhat less comprehensive, and the overall level more advanced, than we had planned at the start of the project. In order to finish the book in a timely fashion, we have had to abandon hopes for separate chapters on some important topics (such as a discussion of SDP algorithms in the
Uncertain convex programs: Randomized solutions and confidence levels
Math. Program., Ser. A, 2004
Cited by 59 (6 self)
Abstract:
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem obtained by taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is to provide an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
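The scenario idea can be sketched in one dimension. Assumed setup (illustrative, far simpler than the paper's general convex framework): the uncertain constraint is x ≥ a with a ~ N(0, 1); we enforce only N sampled instances, minimize x, and then measure on fresh samples how often the resulting solution violates the constraint.

```python
# Hedged 1-D sketch of the scenario approach described above: minimize x
# subject to N sampled instances of the uncertain constraint x >= a,
# a ~ N(0, 1), then estimate the out-of-sample violation probability.
import numpy as np

rng = np.random.default_rng(2)
N = 500                                  # number of sampled constraint instances

a_samples = rng.standard_normal(N)
x_star = a_samples.max()                 # scenario solution: smallest x meeting all N samples

# Out-of-sample check: fraction of fresh instances the solution violates.
# For this problem the expected violation probability is 1/(N+1), so it
# shrinks as N grows, in line with the bound the abstract refers to.
fresh = rng.standard_normal(200_000)
violation = np.mean(fresh > x_star)
print(x_star, violation)
```

The point of the paper is the explicit bound relating N to the violated measure; the toy experiment only illustrates the qualitative behavior that the violated portion shrinks rapidly in N.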