Results 1–10 of 77
Robust Solutions to Least-Squares Problems with Uncertain Data
1997
"... . We consider leastsquares problems where the coefficient matrices A; b are unknownbutbounded. We minimize the worstcase residual error using (convex) secondorder cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpret ..."
Abstract

Cited by 149 (13 self)
 Add to MetaCart
We consider least-squares problems where the coefficient matrices A, b are unknown-but-bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key Words. Least-squares, uncertainty, robustness, second-order cone...
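The closed-form worst-case residual behind this approach lends itself to a small numerical illustration. The sketch below is illustrative only: the matrices, the perturbation bound rho, and the grid search over the Tikhonov path are assumptions for demonstration, whereas the paper solves the problem exactly via second-order cone programming at roughly the cost of one SVD.

```python
import numpy as np

def worst_case_residual(A, b, x, rho):
    # For perturbations [dA, db] with norm at most rho, the worst-case
    # residual max ||(A + dA) x - (b + db)|| has the closed form
    # ||A x - b|| + rho * sqrt(||x||^2 + 1).
    return np.linalg.norm(A @ x - b) + rho * np.hypot(np.linalg.norm(x), 1.0)

def robust_ls(A, b, rho, mus=np.logspace(-6, 2, 200)):
    # The robust minimizer lies on the Tikhonov path
    # x(mu) = (A^T A + mu I)^{-1} A^T b; here we simply scan a grid of
    # mu values and keep the one with the smallest worst-case residual.
    n = A.shape[1]
    best_x, best_val = None, np.inf
    for mu in mus:
        x = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
        val = worst_case_residual(A, b, x, rho)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # illustrative data (assumption)
b = rng.standard_normal(20)
rho = 0.5
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
x_rob, val_rob = robust_ls(A, b, rho)
```

By construction the robust solution never has a larger worst-case residual than the plain least-squares solution, which is the trade-off the paper quantifies exactly.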
Robust Solutions To Uncertain Semidefinite Programs
SIAM J. Optimization, 1998
"... In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown but bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worstcase) objective while satisfying the constraints for every possible value of paramet ..."
Abstract

Cited by 82 (8 self)
 Add to MetaCart
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown-but-bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full," our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation, and integer programming.
Robust Solutions To Uncertain Semidefinite Programs
1998
"... In this paper we consider semidenite programs (SDPs) whose data depends on some unknownbutbounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worstcase) objective while satisfying the constraints for every possible values of paramet ..."
Abstract

Cited by 57 (2 self)
 Add to MetaCart
In this paper we consider semidefinite programs (SDPs) whose data depend on some unknown-but-bounded perturbation parameters. We seek "robust" solutions to such programs, that is, solutions which minimize the (worst-case) objective while satisfying the constraints for every possible value of the parameters within the given bounds. Assuming the data matrices are rational functions of the perturbation parameters, we show how to formulate sufficient conditions for a robust solution to exist as SDPs. When the perturbation is "full", our conditions are necessary and sufficient. In this case, we provide sufficient conditions which guarantee that the robust solution is unique and continuous (Hölder-stable) with respect to the unperturbed problem's data. The approach can thus be used to regularize ill-conditioned SDPs. We illustrate our results with examples taken from linear programming, maximum norm minimization, polynomial interpolation and integer programming.
Robust Filtering for Discrete-Time Systems with Bounded Noise and Parametric Uncertainty
IEEE Trans. Aut. Control, 2001
"... This paper presents a new approach to finitehorizon guaranteed state prediction for discretetime systems affected by bounded noise and unknownbutbounded parameter uncertainty. Our framework handles possibly nonlinear dependence of the statespace matrices on the uncertain parameters. The main re ..."
Abstract

Cited by 24 (2 self)
 Add to MetaCart
This paper presents a new approach to finite-horizon guaranteed state prediction for discrete-time systems affected by bounded noise and unknown-but-bounded parameter uncertainty. Our framework handles possibly nonlinear dependence of the state-space matrices on the uncertain parameters. The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization. With n states and l uncertain parameters appearing linearly in the state-space matrices, with rank-one matrix coefficients, the worst-case complexity grows as O(l(n + l)^3.5). With unstructured uncertainty in all system matrices, the worst-case complexity reduces to O(n^3.5).
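The paper's minimal confidence ellipsoids require interior-point SDP machinery, but the underlying idea of recursively bounding the state set by an ellipsoid can be illustrated with the classical trace-minimal outer approximation of a Minkowski sum. The dynamics matrix, noise bound, and recursion below are illustrative assumptions and do not reproduce the paper's algorithm:

```python
import numpy as np

def propagate(P, A, rho):
    # One step of classical ellipsoidal state bounding: for
    # x_next = A x + w with ||w|| <= rho, the reachable set is the
    # Minkowski sum of an ellipsoid (shape A P A^T) and a ball
    # (shape rho^2 I).  It is covered by the ellipsoid with shape
    # (1 + p) A P A^T + (1 + 1/p) rho^2 I for any p > 0; the choice
    # p = sqrt(tr(P2) / tr(P1)) minimizes the trace of the cover.
    P1 = A @ P @ A.T
    P2 = rho ** 2 * np.eye(P.shape[0])
    p = np.sqrt(np.trace(P2) / np.trace(P1))
    return (1 + p) * P1 + (1 + 1 / p) * P2

A = 0.5 * np.array([[0.0, 1.0],
                    [-1.0, 0.0]])  # illustrative contractive dynamics (assumption)
P = np.eye(2)                      # initial ellipsoid {x : x^T P^{-1} x <= 1}
for _ in range(50):
    P = propagate(P, A, rho=0.1)
# For contractive dynamics the bounding ellipsoid settles to a finite size.
```

The paper improves on this kind of recursion by also enforcing consistency with measured outputs and handling parameter uncertainty in A itself.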
Theory and Applications of Robust Optimization
2007
"... In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most pr ..."
Abstract

Cited by 23 (5 self)
 Add to MetaCart
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we will highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
Parameterized LMIs in Control Theory
SIAM J. Control Optim., 1998
"... A wide variety of problems in control system theory fall within the class of parameterized Linear Matrix Inequalities (LMIs), that is, LMIs whose coefficients are functions of a parameter conned to a compact set. Such problems, though convex, involve an innite set of LMI constraints, hence are inher ..."
Abstract

Cited by 23 (9 self)
 Add to MetaCart
A wide variety of problems in control system theory fall within the class of parameterized Linear Matrix Inequalities (LMIs), that is, LMIs whose coefficients are functions of a parameter confined to a compact set. Such problems, though convex, involve an infinite set of LMI constraints and hence are inherently difficult to solve numerically. This paper investigates relaxations of parameterized LMI problems into standard LMI problems using techniques relying on directional convexity concepts. An in-depth discussion of the impact of the proposed techniques on quadratic programming, Lyapunov-based stability and performance analysis, µ-analysis, and Linear Parameter-Varying control is provided. Illustrative examples are given to demonstrate the usefulness and practicality of the approach.
Parameter-Dependent Lyapunov Functions for Robust Control of Systems with Real Parametric Uncertainty
IEEE Trans. Aut. Control, 1995
"... This paper is concerned with the robust control problem of plants subject to real parametric uncertainties. The proposed technique builds upon the use of parameterdependent quadratic Lyapunov functions. Such Lyapunov functions are used to derive sufficient conditions for the existence of controller ..."
Abstract

Cited by 21 (7 self)
 Add to MetaCart
This paper is concerned with the robust control problem of plants subject to real parametric uncertainties. The proposed technique builds upon the use of parameter-dependent quadratic Lyapunov functions. Such Lyapunov functions are used to derive sufficient conditions for the existence of controllers ensuring robust performance of the closed-loop system. These conditions lead to a complete synthesis technique, based on a relaxation algorithm reminiscent of µ-synthesis schemes. It alternates analysis phases and synthesis phases, both characterized by tractable conditions in the form of Linear Matrix Inequalities (LMIs). A major advantage of the proposed technique is that it produces robust controllers whose order is the same as that of the original plant. It allows one to bypass the frequency sampling and curve-fitting steps that are often critical in µ-synthesis algorithms. A simple illustrative application demonstrates that the approach in this paper compares favorably to traditional µ-synthesis.
Branch and Bound Algorithm for Computing the Minimum Stability Degree of Parameter-Dependent Linear Systems
1991
"... We consider linear systems with unspecified parameters that lie between given upper and lower bounds. Except for a few special cases, the computation of many quantities of interest for such systems can be performed only through an exhaustive search in parameter space. We present a general branch and ..."
Abstract

Cited by 21 (5 self)
 Add to MetaCart
We consider linear systems with unspecified parameters that lie between given upper and lower bounds. Except for a few special cases, the computation of many quantities of interest for such systems can be performed only through an exhaustive search in parameter space. We present a general branch and bound algorithm that implements this search in a systematic manner and apply it to computing the minimum stability degree.

1 Introduction

1.1 Notation

R (C) denotes the set of real (complex) numbers. For c ∈ C, Re c is the real part of c. The set of n × n matrices with real (complex) entries is denoted R^(n×n) (C^(n×n)). P^T stands for the transpose of P, and P^* the complex conjugate transpose. I denotes the identity matrix, with size determined from context. For a matrix P ∈ R^(n×n) (or C^(n×n)), λ_i(P), 1 ≤ i ≤ n, denotes the i-th eigenvalue of P (with no particular ordering). σ_max(P) denotes the maximum singular value (or spectral norm) of P, define...
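A generic interval branch and bound of the kind described can be sketched as follows. The parameter-dependent matrix A(p) and the Lipschitz bound are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def stability_degree(p):
    # Stability degree of a parameter-dependent matrix A(p), i.e.
    # -max Re lambda_i(A(p)).  A(p) here is a made-up illustrative
    # example, not a system from the paper.
    A = np.array([[-1.0 + p, 1.0],
                  [0.0, -2.0 + 0.5 * p]])
    return -np.max(np.linalg.eigvals(A).real)

def bb_min(f, lo, hi, lip, tol=1e-4):
    # Branch and bound for min f(p) over [lo, hi], given a Lipschitz
    # bound `lip` on f.  On a box [a, b] the value of f can be no
    # smaller than (f(a) + f(b)) / 2 - lip * (b - a) / 2, so boxes
    # whose lower bound cannot beat the incumbent are discarded.
    best = min(f(lo), f(hi))
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        lower = 0.5 * (f(a) + f(b)) - 0.5 * lip * (b - a)
        if lower >= best - tol:
            continue  # prune: this box cannot improve the incumbent
        m = 0.5 * (a + b)
        best = min(best, f(m))
        boxes += [(a, m), (m, b)]
    return best

# Minimum stability degree over p in [0, 1]; for this A(p) it is 0, at p = 1.
val = bb_min(stability_degree, 0.0, 1.0, lip=2.0)
```

The pruning step is what makes the exhaustive search systematic: subintervals whose cheap lower bound cannot improve the incumbent are never subdivided.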
Partially augmented Lagrangian method for matrix inequalities
 SIAM J. on Optimization
"... Pierre Apkarian k Abstract We discuss a partially augmented Lagrangian method for optimization programs with matrix inequality constraints. A global convergence result is obtained. Applications to hard problems in feedback control are presented to validate the method numerically. ..."
Abstract

Cited by 15 (8 self)
 Add to MetaCart
Pierre Apkarian

We discuss a partially augmented Lagrangian method for optimization programs with matrix inequality constraints. A global convergence result is obtained. Applications to hard problems in feedback control are presented to validate the method numerically.
Control System Analysis and Synthesis via Linear Matrix Inequalities
1993
"... A wide variety of problems in systems and control theory can be cast or recast as convex problems that involve linear matrix inequalities (LMIs). For a few very special cases there are "analytical solutions" to these problems, but in general they can be solved numerically very efficiently. In many c ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
A wide variety of problems in systems and control theory can be cast or recast as convex problems that involve linear matrix inequalities (LMIs). For a few very special cases there are "analytical solutions" to these problems, but in general they can be solved numerically very efficiently. In many cases the inequalities have the form of simultaneous Lyapunov or algebraic Riccati inequalities; such problems can be solved in a time that is comparable to the time required to solve the same number of Lyapunov or algebraic Riccati equations. Therefore the computational cost of extending current control theory that is based on the solution of algebraic Riccati equations to a theory based on the solution of (multiple, simultaneous) Lyapunov or Riccati inequalities is modest. Examples include: multicriterion LQG, synthesis of linear state feedback for multiple or nonlinear plants ("multi-model control"), optimal transfer matrix realization, norm scaling, synthesis of multipliers for Popov-like...
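The remark that many of these inequalities reduce to Lyapunov problems can be made concrete with a small sketch: solving A^T P + P A = -I and checking P > 0 certifies stability. The matrix A and the dense Kronecker-product solve below are illustrative assumptions; a real LMI solver would exploit structure rather than forming the n^2-by-n^2 system:

```python
import numpy as np

def lyapunov_certificate(A):
    # Solve the Lyapunov equation A^T P + P A = -I by linearizing it
    # with Kronecker products:
    #   (kron(I, A^T) + kron(A^T, I)) vec(P) = -vec(I).
    # If P is symmetric positive definite, V(x) = x^T P x proves that
    # x' = A x is asymptotically stable.
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -np.eye(n).ravel()).reshape(n, n)
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    return P, bool(np.all(np.linalg.eigvalsh(P) > 0))

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])  # illustrative stable matrix (assumption)
P, stable = lyapunov_certificate(A)
```

This is the equality-constrained special case; the abstract's point is that the corresponding inequality versions, and simultaneous systems of them, cost only modestly more to solve.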