Results 1–10 of 32
Robust optimization – methodology and applications
, 2002
Abstract
Cited by 84 (3 self)
Robust Optimization (RO) is a modeling methodology, combined with computational tools, to process optimization problems in which the data are uncertain and only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real-world LPs can be severely affected by small perturbations of the data, and that the RO methodology can be successfully used to overcome this phenomenon.
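A minimal numpy sketch of the "tractable robust counterpart" idea the abstract refers to, for the simplest case of an LP constraint with interval (box) uncertainty in its coefficients and a nonnegative decision vector. The specific numbers and the nonnegativity assumption are our own illustration, not the paper's general construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal constraint row a0^T x <= b, with each coefficient known only to
# lie in the interval [a0_i - delta_i, a0_i + delta_i].
a0 = np.array([1.0, 2.0, 0.5])
delta = np.array([0.2, 0.1, 0.3])
b = 4.0

x = np.array([1.0, 0.5, 2.0])  # a candidate solution with x >= 0

# Robust counterpart: for x >= 0 the worst case of a^T x over the box is
# attained at a = a0 + delta, so the robust (tractable, still linear)
# constraint is simply (a0 + delta)^T x <= b.
worst_case_lhs = (a0 + delta) @ x

# Check against brute-force sampling of the uncertainty set.
samples = a0 + delta * rng.uniform(-1.0, 1.0, size=(10000, 3))
sampled_max = (samples @ x).max()

assert sampled_max <= worst_case_lhs + 1e-9
print(worst_case_lhs <= b)  # robustly feasible for every a in the box?
```

For a general sign pattern the same worst case becomes a0^T x + delta^T |x| <= b, which is still representable as linear constraints after splitting x into positive and negative parts.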
Uncertain convex programs: Randomized solutions and confidence levels
 Mathematical Programming
, 2005
Abstract
Cited by 60 (7 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. A recently emerged successful paradigm for attacking these problems is robust optimization, where one seeks a solution which simultaneously satisfies all possible constraint instances. In practice, however, the robust approach is effective only for problem families with rather simple dependence on the instance parameter (such as affine or polynomial), and leads in general to conservative answers, since the solution is usually computed by transforming the original semi-infinite problem into a standard one by means of relaxation techniques. In this paper, we take an alternative ‘randomized’ or ‘scenario’ approach: by randomly sampling the uncertainty parameter, we substitute the original infinite constraint set with a finite set of N constraints. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is to provide an efficient explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
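The sampling mechanism described above can be illustrated on a toy one-dimensional uncertain program of our own choosing (not one from the paper): minimize x subject to x >= delta for every delta in [0, 1]. The fully robust solution is x = 1; the scenario solution uses only N sampled constraints, and its violation probability shrinks as N grows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: minimize x s.t. x >= delta,
# delta ~ Uniform(0, 1). Robust optimum: x = 1.
def scenario_solution(n_samples):
    deltas = rng.uniform(0.0, 1.0, size=n_samples)
    # The scenario problem keeps only the sampled constraints;
    # its minimizer is simply the largest sampled delta.
    return deltas.max()

# Violation probability of the scenario solution: P(delta > x*) = 1 - x*.
for n in (10, 100, 1000):
    x_star = scenario_solution(n)
    print(n, round(1.0 - x_star, 4))
```

The printed violation probabilities decay toward zero with N, mirroring the paper's claim that the measure of violated constraints vanishes as the sample count increases.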
The scenario approach to robust control design
 IEEE Trans. Autom. Control
, 2006
Abstract
Cited by 48 (6 self)
This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed in the form of minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense. Index Terms—Probabilistic robustness, randomized algorithms, robust control, robust convex optimization, uncertainty.
Optimal Solutions for Sparse Principal Component Analysis
Abstract
Cited by 41 (8 self)
Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. We formulate a new semidefinite relaxation to this problem and derive a greedy algorithm that computes a full set of good solutions for all target numbers of nonzero coefficients, with total complexity O(n^3), where n is the number of variables. We then use the same relaxation to derive sufficient conditions for global optimality of a solution, which can be tested in O(n^3) per pattern. We discuss applications in subset selection and sparse recovery and show on artificial examples and biological data that our algorithm does provide globally optimal solutions in many cases.
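The greedy idea behind such algorithms can be sketched as simple forward selection: at each step, add the variable that most increases the largest eigenvalue of the covariance submatrix restricted to the selected support. This is a naive O(n^4) illustration of the general approach, not the paper's O(n^3) algorithm:

```python
import numpy as np

def greedy_sparse_pca(cov, k_max):
    """Naive greedy forward selection for sparse PCA: grow the support
    one variable at a time, each time picking the variable that maximizes
    the leading eigenvalue of the restricted covariance submatrix."""
    n = cov.shape[0]
    support, results = [], []
    for _ in range(k_max):
        best_var, best_val = None, -np.inf
        for j in range(n):
            if j in support:
                continue
            idx = support + [j]
            sub = cov[np.ix_(idx, idx)]
            val = np.linalg.eigvalsh(sub)[-1]  # largest eigenvalue
            if val > best_val:
                best_var, best_val = j, val
        support.append(best_var)
        results.append((list(support), best_val))
    return results

# Small example: variable 2 carries the most variance on its own.
C = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 4.0]])
path = greedy_sparse_pca(C, 3)
print(path[0])  # cardinality-1 pattern: the variable with largest variance
```

Each entry of `path` pairs a support pattern with the variance it explains, giving solutions for every target cardinality in one sweep, as the abstract describes.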
Theory and applications of Robust Optimization
, 2007
Abstract
Cited by 23 (5 self)
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we will highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
Approximation bounds for quadratic optimization with homogeneous quadratic constraints
 SIAM J. Optim
, 2007
Abstract
Cited by 23 (12 self)
We consider the NP-hard problem of finding a minimum-norm vector in n-dimensional real or complex Euclidean space, subject to m concave homogeneous quadratic constraints. We show that a semidefinite programming (SDP) relaxation for this nonconvex quadratically constrained quadratic program (QP) provides an O(m^2) approximation in the real case and an O(m) approximation in the complex case. Moreover, we show that these bounds are tight up to a constant factor. When the Hessian of each constraint function is of rank 1 (namely, outer products of some given so-called steering vectors) and the phase spread of the entries of these steering vectors is bounded away from π/2, we establish a certain “constant factor” approximation (depending on the phase spread but independent of m and n) for both the SDP relaxation and a convex QP restriction of the original NP-hard problem. Finally, we consider a related problem of finding a maximum-norm vector subject to m convex homogeneous quadratic constraints. We show that an SDP relaxation for this nonconvex QP provides an O(1/ln(m)) approximation, which is analogous to a result of Nemirovski et al. [Math. Program., 86 (1999), pp. 463–473] for the real case. Key words: semidefinite programming relaxation, nonconvex quadratic optimization, approximation bound
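In the simplest instance of this problem class, m = 1 with a single positive semidefinite constraint matrix, the optimum has a closed form and the SDP relaxation is exact: scale the leading eigenvector of A so the constraint is active. A numpy check of this special case (our own illustration, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)

# min ||x||^2  s.t.  x^T A x >= 1, with A symmetric PSD
# (one concave homogeneous quadratic constraint, i.e. m = 1).
B = rng.standard_normal((4, 4))
A = B @ B.T  # random symmetric positive semidefinite matrix

eigvals, eigvecs = np.linalg.eigh(A)
lam_max = eigvals[-1]
v = eigvecs[:, -1]  # unit leading eigenvector

# Optimal solution: scale the leading eigenvector until the constraint
# is tight; the optimal value is then 1 / lam_max.
x_opt = v / np.sqrt(lam_max)
opt_val = x_opt @ x_opt

assert abs(x_opt @ A @ x_opt - 1.0) < 1e-9  # constraint active
assert abs(opt_val - 1.0 / lam_max) < 1e-12

# The rank-one matrix X = x_opt x_opt^T is feasible for the SDP relaxation
# (trace(A X) >= 1, X PSD) with the same objective trace(X) = opt_val,
# so the relaxation is exact for m = 1.
X = np.outer(x_opt, x_opt)
assert abs(np.trace(A @ X) - 1.0) < 1e-9
print(round(opt_val, 6))
```

The O(m^2) and O(m) gaps the paper proves only appear once several constraints interact; with one constraint the rank-one SDP solution already matches the nonconvex optimum.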
Selected topics in robust convex optimization
 Math. Prog. B, this issue
, 2007
Abstract
Cited by 14 (2 self)
Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-but-bounded” data perturbations. In this paper, we overview several selected topics in this popular area, specifically: (1) recent extensions of the basic concept of robust counterpart of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links between RO and traditional chance constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology in Robust Linear Control. Keywords: optimization under uncertainty · robust optimization · convex programming · chance constraints · robust linear control
Ellipsoidal bounds for uncertain linear equations and dynamical systems
 Automatica
, 2004
Abstract
Cited by 11 (0 self)
In this paper, we discuss semidefinite relaxation techniques for computing minimal-size ellipsoids that bound the solution set of a system of uncertain linear equations. The proposed technique is based on the combination of a quadratic embedding of the uncertainty and the S-procedure. This formulation leads to convex optimization problems that can be essentially solved in O(n^3) operations (n being the size of the unknown vector) by means of suitable interior-point barrier methods, as well as to closed-form results in some particular cases. We further show that the uncertain linear equations paradigm can be directly applied to various state-bounding problems for dynamical systems subject to set-valued noise and model uncertainty.
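One special case where the bounding ellipsoid is available in closed form is uncertainty only in the right-hand side: the solutions of A x = b0 + d over a norm ball of perturbations d form exactly the ellipsoid (x - c)^T (A^T A) (x - c) <= rho^2 with center c = A^{-1} b0. A numpy check of this case (a hypothetical illustration; the paper treats far more general uncertainty structures):

```python
import numpy as np

rng = np.random.default_rng(3)

# Uncertain system A x = b0 + d with ||d||_2 <= rho and A known exactly.
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # well-conditioned A
b0 = rng.standard_normal(3)
rho = 0.5

c = np.linalg.solve(A, b0)  # ellipsoid center
P = A.T @ A                 # ellipsoid shape matrix

# Sample perturbed systems and confirm every solution lies in the
# ellipsoid: (x - c)^T P (x - c) = ||A(x - c)||^2 = ||d||^2 <= rho^2.
for _ in range(1000):
    d = rng.standard_normal(3)
    d *= rho * rng.uniform() / np.linalg.norm(d)  # random ||d|| <= rho
    x = np.linalg.solve(A, b0 + d)
    assert (x - c) @ P @ (x - c) <= rho**2 + 1e-9
print("all sampled solutions inside the ellipsoid")
```

When the coefficient matrix A is itself uncertain, the solution set is no longer an exact ellipsoid, which is where the paper's quadratic embedding and S-procedure machinery comes in.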
Matrix SumofSquares Relaxations for Robust SemiDefinite Programs
 Math. Program
, 2006
Abstract
Cited by 11 (0 self)
We consider robust semidefinite programs which depend polynomially or rationally on some uncertain parameter that is only known to be contained in a set with a polynomial matrix inequality description. On the basis of matrix sum-of-squares decompositions, we suggest a systematic procedure to construct a family of linear matrix inequality relaxations for computing upper bounds on the optimal value of the corresponding robust counterpart. With a novel matrix version of Putinar’s sum-of-squares representation for positive polynomials on compact semi-algebraic sets, we prove asymptotic exactness of the relaxation family under a suitable constraint qualification. If the uncertainty region is a compact polytope, we provide a new duality proof for the validity of Putinar’s constraint qualification with an a priori degree bound on the polynomial certificates. Finally, we point out the consequences of our results for constructing relaxations based on the so-called full-block S-procedure, which allows one to apply recently developed tests to computationally verify the exactness of possibly small-sized relaxations.
Extended matrix cube theorems with applications to µ-theory in control
 Mathematics of Operations Research
, 2003
Abstract
Cited by 8 (3 self)
We study semi-infinite systems of Linear Matrix Inequalities, which are generically NP-hard. For these systems, we introduce computationally tractable approximations and derive quantitative guarantees of their quality. As applications, we discuss the problem of maximizing a Hermitian quadratic form over the complex unit cube and the problem of bounding the complex structured singular value. With the help of our complex Matrix Cube Theorem, we demonstrate that the standard scaling upper bound on µ(M) is a tight upper bound on the largest level of structured perturbations of the matrix M for which all perturbed matrices share a common Lyapunov certificate for (discrete-time) stability.