Results 1–10 of 114
Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
1998
Abstract

Cited by 726 (3 self)
SeDuMi is an add-on for MATLAB that lets you solve optimization problems with linear, quadratic, and semidefiniteness constraints. Complex-valued data and variables are supported, and large-scale optimization problems are solved efficiently by exploiting sparsity. This paper describes how to work with the toolbox.
Robust Portfolio Selection Problems
Mathematics of Operations Research, 2001
Abstract

Cited by 93 (8 self)
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce "uncertainty structures" for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs; therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures used to estimate the market parameters. We demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
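The second-order cone reduction mentioned in this abstract rests on a simple fact: under an ellipsoidal uncertainty set for the mean returns, the worst-case expected return has a closed form. The sketch below (our own illustration with made-up numbers, not code from the paper) verifies that closed form numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.08, 0.05, 0.03])   # nominal expected returns (hypothetical)
delta = 0.02                         # radius of the ellipsoidal uncertainty set
w = np.array([0.5, 0.3, 0.2])        # a fixed portfolio (weights sum to 1)

# Worst-case return over {mu + u : ||u|| <= delta} has the closed form
# min_u (mu + u) @ w = mu @ w - delta * ||w||, attained at u = -delta * w/||w||.
worst_case = mu @ w - delta * np.linalg.norm(w)

# Check against brute-force sampling on the boundary of the uncertainty set.
u = rng.normal(size=(100000, 3))
u = delta * u / np.linalg.norm(u, axis=1, keepdims=True)
sampled_min = ((mu + u) @ w).min()
assert sampled_min >= worst_case - 1e-9
print(round(worst_case, 6))
```

The norm term −δ‖w‖ is exactly what turns the robust return constraint into a second-order cone constraint.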
Robust optimization – methodology and applications
2002
Abstract

Cited by 82 (3 self)
Robust Optimization (RO) is a modeling methodology, combined with computational tools, for processing optimization problems in which the data are uncertain and only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic, and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design, and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real-world LPs can be severely affected by small perturbations of the data, and that the RO methodology can be successfully used to overcome this phenomenon.
Robust Solutions To Uncertain Semidefinite Programs
SIAM J. Optimization, 1998
Abstract

Cited by 77 (8 self)
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.
Robust minimum variance beamforming
IEEE Transactions on Signal Processing, 2005
Abstract

Cited by 62 (10 self)
This paper introduces an extension of minimum variance beamforming that explicitly takes into account variation or uncertainty in the array response. Sources of this uncertainty include imprecise knowledge of the angle of arrival and uncertainty in the array manifold. In our method, uncertainty in the array manifold is explicitly modeled via an ellipsoid that gives the possible values of the array for a particular look direction. We choose weights that minimize the total weighted power output of the array, subject to the constraint that the gain should exceed unity for all array responses in this ellipsoid. The robust weight selection process can be cast as a second-order cone program that can be solved efficiently using Lagrange multiplier techniques. If the ellipsoid reduces to a single point, the method coincides with Capon's method. We describe in detail several methods that can be used to derive an appropriate uncertainty ellipsoid for the array response. We form separate uncertainty ellipsoids for each component in the signal path (e.g., antenna, electronics) and then determine an aggregate uncertainty ellipsoid from these. We give new results for modeling the element-wise products of ellipsoids. We demonstrate the robust beamforming and the ellipsoidal modeling methods with several numerical examples.
Index Terms—Ellipsoidal calculus, Hadamard product, robust beamforming, second-order cone programming.
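To make the "gain exceeds unity over an ellipsoid" constraint concrete, the following real-valued toy (our own sketch, not the paper's code) evaluates the worst-case gain in closed form and checks it by sampling the ellipsoid boundary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
a_bar = np.ones(n)                 # nominal array response (hypothetical)
P = 0.1 * np.eye(n)                # ellipsoid shape: {a_bar + P u : ||u|| <= 1}
w = np.full(n, 1.0 / n)            # candidate beamformer weights

# Worst-case gain over the ellipsoid has the closed form
# min_{||u||<=1} w @ (a_bar + P u) = w @ a_bar - ||P.T @ w||,
# so requiring "gain >= 1 for all responses in the ellipsoid" is the
# second-order cone constraint  w @ a_bar - ||P.T @ w|| >= 1.
worst_gain = w @ a_bar - np.linalg.norm(P.T @ w)

# Brute-force check over sampled boundary points of the ellipsoid.
u = rng.normal(size=(100000, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)
sampled = (a_bar + u @ P.T) @ w
assert sampled.min() >= worst_gain - 1e-9
print(round(worst_gain, 4))
```

When P shrinks to zero the ellipsoid collapses to the single point a_bar and the constraint reduces to the nominal (Capon-style) gain constraint.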
Uncertain convex programs: Randomized solutions and confidence levels
Math. Program., Ser. A, 2004
Abstract

Cited by 58 (6 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or 'instance' parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm in which one seeks a solution that simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative 'randomized' or 'scenario' approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem that results from taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
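The constraint-sampling idea can be seen on a one-dimensional toy problem (our illustration, not from the paper): minimize x subject to x ≥ δ for all δ in [0, 1]. The scenario solution is simply the largest of N sampled δ values, and its violation probability shrinks as N grows:

```python
import numpy as np

rng = np.random.default_rng(2)

# Uncertain problem: minimize x subject to x >= delta for ALL delta in [0, 1].
# The robust solution is x = 1. The scenario approach instead samples N
# constraint instances and solves only the sampled problem, whose optimal
# solution here is the largest sampled delta.
def scenario_solution(N):
    deltas = rng.uniform(0.0, 1.0, size=N)
    return deltas.max()

# The probability that the scenario solution x_N violates a freshly drawn
# constraint is 1 - x_N, which for uniform samples shrinks like 1/(N+1).
for N in (10, 100, 1000):
    x_N = scenario_solution(N)
    violation_prob = 1.0 - x_N
    assert 0.0 <= violation_prob <= 1.0
    print(N, round(violation_prob, 4))
```

The same pattern carries over to convex programs in more variables: sample N constraint instances, solve the sampled convex program, and bound the measure of violated unseen constraints.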
Robust Solutions To Uncertain Semidefinite Programs
1998
Abstract

Cited by 55 (2 self)
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown-but-bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full", our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.
Robust mean-squared error estimation in the presence of model uncertainties
IEEE Transactions on Signal Processing, 2005
Abstract

Cited by 52 (37 self)
We consider the problem of estimating an unknown parameter vector x in a linear model that may be subject to uncertainties, where the vector x is known to satisfy a weighted norm constraint. We first assume that the model is known exactly and seek the linear estimator that minimizes the worst-case mean-squared error (MSE) across all possible values of x. We show that for an arbitrary choice of weighting, the optimal minimax MSE estimator can be formulated as a solution to a semidefinite programming problem (SDP), which can be solved very efficiently. We then develop a closed-form expression for the minimax MSE estimator for a broad class of weighting matrices and show that it coincides with the shrunken estimator of Mayer and Willke, with a specific choice of shrinkage factor that explicitly takes the prior information into account. Next, we consider the case in which the model matrix is subject to uncertainties and seek the robust linear estimator that minimizes the worst-case MSE across all possible values of x and all possible values of the model matrix. As we show, the robust minimax MSE estimator can also be formulated as a solution to an SDP. Finally, we demonstrate through several examples that the minimax MSE estimator can significantly increase the performance over the conventional least-squares estimator, and when the model matrix is subject to uncertainties, the robust minimax MSE estimator can lead to a considerable improvement in performance over the minimax MSE estimator.
Index Terms—Data uncertainty, linear estimation, mean-squared error estimation, minimax estimation, robust estimation.
The scenario approach to robust control design
IEEE Trans. Autom. Control, 2006
Abstract

Cited by 47 (6 self)
This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed as the minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
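As a rough illustration of such a sample-size bound (using the exact binomial-tail condition from later refinements of this literature, not necessarily the specific bound of this paper), one can compute the smallest N meeting a given violation level ε and confidence 1 − β for d decision variables:

```python
from math import comb

# Illustrative scenario sample-size computation: find the smallest N with
#   sum_{i=0}^{d-1} C(N, i) * eps^i * (1 - eps)^(N - i) <= beta,
# where d is the number of decision variables, eps the allowed violation
# probability, and beta the confidence parameter. This binomial-tail form
# comes from later refinements of the scenario literature; it is shown here
# only to make the idea of "an explicit bound on N" concrete.
def scenario_sample_size(d, eps, beta):
    N = d
    while sum(comb(N, i) * eps**i * (1 - eps)**(N - i) for i in range(d)) > beta:
        N += 1
    return N

print(scenario_sample_size(d=5, eps=0.05, beta=1e-6))
```

The bound depends only logarithmically on 1/β, which is why very high confidence levels remain computationally cheap.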
Robust distributed node localization with error management
In Proceedings of the 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc'06), ACM, 2006
Abstract

Cited by 45 (4 self)
Location knowledge of nodes in a network is essential for many tasks such as routing, cooperative sensing, or service delivery in ad hoc, mobile, or sensor networks. This paper introduces ILS, a novel iterative method for node localization that starts with a relatively small number of anchor nodes in a large network. At each iteration, nodes are localized using a least-squares-based algorithm. The computation is lightweight, fast, and anytime. To prevent error from propagating and accumulating across iterations, the algorithm's error control mechanism uses an error registry to select the nodes that participate in the localization, based on their relative contribution to the localization accuracy. Simulation results show that the active selection strategy significantly mitigates the effect of error propagation. The algorithm has been tested on a network of Berkeley Mica2 motes with ultrasound TOA ranging devices. We have compared the algorithm with more global methods such as MDS-MAP and an SDP-based algorithm, both in simulation and on real hardware. In both cases the iterative localization achieves location accuracy comparable to the more global methods, and it has the advantage of being fully decentralized.
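The least-squares localization step can be sketched as classic linearized trilateration (a minimal reconstruction of ours, not the paper's ILS implementation): subtracting one squared-distance equation from the others yields a linear system in the node position.

```python
import numpy as np

# Toy setup: four anchors at known positions and noiseless range
# measurements to one unknown node (real deployments have ranging noise).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)

# Subtract the first equation ||p - a_0||^2 = r_0^2 from the others to
# cancel the quadratic term ||p||^2:
#   2 (a_i - a_0) @ p = r_0^2 - r_i^2 + ||a_i||^2 - ||a_0||^2
A = 2.0 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(est, 6))   # recovers the true position in the noiseless case
```

With noisy ranges the same least-squares solve returns the best linear fit, which is the kind of lightweight per-iteration computation the abstract describes.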