Results 1–10 of 33
Robust solutions to uncertain linear programs
 OR Letters
, 1999
Cited by 232 (14 self)
We consider linear programs with uncertain parameters, lying in some prescribed uncertainty set, where part of the variables must be determined before the realization of the uncertain parameters ("non-adjustable variables"), while the other part are variables that can be chosen after the realization ("adjustable variables"). We extend the Robust Optimization methodology ([1, 4, 5, 6, 7, 9, 13, 14]) to this situation by introducing the Adjustable Robust Counterpart (ARC) associated with an LP of the above structure. Often the ARC is significantly less conservative than the usual Robust Counterpart (RC); in most cases, however, the ARC is computationally intractable (NP-hard). This difficulty is addressed by restricting the adjustable variables to be affine functions of the uncertain data. The ensuing Affinely Adjustable Robust Counterpart (AARC) problem is then shown to be, in certain important cases, equivalent to a tractable optimization problem (typically an LP or a semidefinite problem), and in other cases to admit a tight, tractable approximation. The AARC approach is illustrated by applying it to a multistage inventory management problem.
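As a toy illustration of the (non-adjustable) robust counterpart that the ARC refines, consider a two-variable LP with interval uncertainty in one constraint coefficient. All numbers below are invented for illustration; with a nonnegative variable, the worst case over the interval reduces to a single linear constraint.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: max x1 + x2  s.t.  (1+u)*x1 + x2 <= 10 for all |u| <= 0.2,
#                            0 <= x1 <= 6, 0 <= x2 <= 6.
# Since x1 >= 0, the worst realization is u = +0.2, so the robust
# counterpart (RC) is the single linear constraint 1.2*x1 + x2 <= 10.
nominal = linprog(c=[-1, -1], A_ub=[[1.0, 1.0]], b_ub=[10.0],
                  bounds=[(0, 6), (0, 6)])
robust = linprog(c=[-1, -1], A_ub=[[1.2, 1.0]], b_ub=[10.0],
                 bounds=[(0, 6), (0, 6)])
print(-nominal.fun)  # nominal optimum: 10.0
print(-robust.fun)   # robust optimum: ~9.33 (the price of robustness)
```

The gap between the two optima is the conservatism the abstract refers to; an adjustable formulation, where some variables may depend on u, can recover part of it.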
Robust mean-squared error estimation in the presence of model uncertainties
 IEEE Trans. on Signal Processing
, 2005
Cited by 52 (37 self)
Abstract—We consider the problem of estimating an unknown parameter vector x in a linear model that may be subject to uncertainties, where the vector x is known to satisfy a weighted norm constraint. We first assume that the model is known exactly and seek the linear estimator that minimizes the worst-case mean-squared error (MSE) across all possible values of x. We show that for an arbitrary choice of weighting, the optimal minimax MSE estimator can be formulated as the solution to a semidefinite programming problem (SDP), which can be solved very efficiently. We then develop a closed-form expression for the minimax MSE estimator for a broad class of weighting matrices and show that it coincides with the shrunken estimator of Mayer and Willke, with a specific choice of shrinkage factor that explicitly takes the prior information into account. Next, we consider the case in which the model matrix is subject to uncertainties and seek the robust linear estimator that minimizes the worst-case MSE across all possible values of x and all possible values of the model matrix. As we show, the robust minimax MSE estimator can also be formulated as the solution to an SDP. Finally, we demonstrate through several examples that the minimax MSE estimator can significantly improve performance over the conventional least-squares estimator, and when the model matrix is subject to uncertainties, the robust minimax MSE estimator can lead to a considerable improvement over the minimax MSE estimator. Index Terms—Data uncertainty, linear estimation, mean-squared error estimation, minimax estimation, robust estimation.
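A small Monte Carlo sketch of the shrinkage idea behind the closed-form estimator: when ||x|| is bounded by L, scaling the least-squares estimate toward zero by a factor that reflects the bound can reduce worst-case MSE. The particular shrinkage factor below (an instance of the general principle, with invented dimensions and noise level) uses L²/(L² + MSE_LS); it is an illustration, not the paper's exact weighted-norm formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, L = 20, 5, 1.0, 1.0
H = rng.standard_normal((n, p))
x = np.ones(p) * (L / np.sqrt(p))            # true parameter with ||x|| = L

HtH_inv = np.linalg.inv(H.T @ H)
mse_ls_theory = sigma**2 * np.trace(HtH_inv)  # MSE of plain least squares
alpha = L**2 / (L**2 + mse_ls_theory)         # shrinkage factor in (0, 1)

err_ls, err_shrunk, trials = 0.0, 0.0, 2000
for _ in range(trials):
    y = H @ x + sigma * rng.standard_normal(n)
    x_ls = HtH_inv @ H.T @ y
    err_ls += np.sum((x_ls - x) ** 2)
    err_shrunk += np.sum((alpha * x_ls - x) ** 2)

# Shrinking trades a little bias for a larger variance reduction.
print(err_ls / trials, err_shrunk / trials)
```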
Strong Duality in Nonconvex Quadratic Optimization with Two Quadratic Constraints
 SIAM Journal on Optimization
Cited by 18 (10 self)
Abstract. We consider the problem of minimizing an indefinite quadratic function subject to two quadratic inequality constraints. When the problem is defined over the complex plane, we show that strong duality holds and obtain necessary and sufficient optimality conditions. We then develop a connection between the images of the real and complex spaces under a quadratic mapping, which, together with the results in the complex case, leads to a condition that ensures strong duality in the real setting. Preliminary numerical simulations suggest that for random instances of the extended trust region subproblem, the sufficient condition is satisfied with high probability. Furthermore, we show that the sufficient condition is always satisfied in two classes of nonconvex quadratic problems. Finally, we discuss an application of our results to robust least-squares problems.
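For context, strong duality is classical for the *single*-constraint trust-region subproblem, which the paper extends to two constraints. The sketch below (with an invented indefinite matrix) solves min x'Ax - 2b'x subject to ||x||² ≤ 1 through its dual, bisecting on the multiplier λ, and checks that the primal and dual values coincide.

```python
import numpy as np

# Trust-region subproblem: min x'Ax - 2 b'x  s.t.  ||x||^2 <= 1, A indefinite.
A = np.diag([-1.0, 2.0, 3.0])
b = np.array([0.5, 1.0, -0.5])

lam_min = np.linalg.eigvalsh(A)[0]
lo, hi = max(0.0, -lam_min) + 1e-12, 100.0
for _ in range(200):                  # ||x(lam)|| is decreasing in lam
    lam = 0.5 * (lo + hi)
    x = np.linalg.solve(A + lam * np.eye(3), b)
    if x @ x > 1.0:
        lo = lam
    else:
        hi = lam

primal = x @ A @ x - 2 * b @ x
dual = -b @ np.linalg.solve(A + lam * np.eye(3), b) - lam
print(primal, dual)   # equal: zero duality gap despite nonconvexity
```

Since λ ≥ -λmin(A) and ||x(λ)|| = 1 at the solution, x is a global minimizer even though the objective is nonconvex.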
A Framework for State-Space Estimation with Uncertain Models
 IEEE Trans. Auto. Contr
, 2001
Cited by 16 (1 self)
This paper develops a framework for state-space estimation when the parameters of the underlying linear model are subject to uncertainties. Compared with existing robust filters, the proposed filters perform regularization rather than de-regularization. It is shown that, under certain stabilizability and detectability conditions, the steady-state filters are stable and that, for quadratically stable models, the filters guarantee a bounded error variance. Moreover, the resulting filter structures are similar to various (time- and measurement-update, prediction, and information) forms of the Kalman filter, albeit ones that operate on corrected parameters rather than on the given nominal parameters. Simulation results and comparisons with H∞, guaranteed-cost, and set-valued state estimation filters are provided.
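For reference, these are the standard time- and measurement-update recursions the abstract says the robust filters structurally resemble; the robust variants run the same loop on uncertainty-corrected parameters in place of the nominal (F, H). The constant-velocity model and noise levels below are invented for the sketch.

```python
import numpy as np

# Standard Kalman recursions for  x_{k+1} = F x_k + w_k,  y_k = H x_k + v_k.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(1)
truth = np.array([0.0, 1.0])
errs = []
for _ in range(100):
    truth = F @ truth + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ truth + rng.normal(0, 0.5, 1)
    # time update (prediction)
    x, P = F @ x, F @ P @ F.T + Q
    # measurement update (correction)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (y - H @ x)
    P = (np.eye(2) - K @ H) @ P
    errs.append(np.sum((x - truth) ** 2))

print(np.mean(errs[-50:]))   # steady-state tracking error remains bounded
```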
Minimax mean-squared error estimation of multichannel signals
 IEEE TRANS. INFORM. THEORY
, 2005
Cited by 10 (7 self)
We consider the problem of multichannel estimation, in which we seek to estimate N deterministic input vectors xk that are observed through a set of linear transformations and corrupted by additive noise, where the linear transformations are subject to uncertainty. To estimate the inputs xk we propose a minimax mean-squared error (MSE) approach in which we seek the linear estimator that minimizes the worst-case MSE over the uncertainty region, where we assume that the weighted norm of each of the inputs xk is bounded and that each of the linear transformations is perturbed by a bounded-norm disturbance. For an arbitrary choice of weighting, we show that, assuming a block-circulant structure on the resulting model matrix, the minimax MSE estimator can be formulated as the solution to a semidefinite programming problem (SDP), which can be solved efficiently. For a Euclidean norm bound on xk, the SDP reduces to a simple convex program with N + 1 unknowns. Finally, we demonstrate through examples that the minimax MSE estimator can significantly improve performance over conventional methods.
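The computational leverage of (block-)circulant structure comes from the fact that circulant matrices are diagonalized by the DFT. The scalar-circulant check below (arbitrary first column, chosen for illustration) verifies that the eigenvalues are exactly the DFT of the first column; block-circulant matrices admit the analogous block-diagonalization.

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([4.0, 1.0, 0.0, 1.0])    # first column of a circulant matrix
C = circulant(c)
# Circulant matrices satisfy C = F^{-1} diag(fft(c)) F, so their
# eigenvalues are the DFT of the first column -- no dense eigensolve needed.
eig_fft = np.fft.fft(c)
eig_direct = np.linalg.eigvals(C)
print(np.sort(eig_fft.real), np.sort(eig_direct.real))
```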
A Stable and Efficient Algorithm for the Indefinite Linear Least-Squares Problem
 SIAM J. Matrix Anal. Appl
, 1998
Cited by 9 (0 self)
We develop an algorithm for the solution of indefinite least-squares problems. Such problems arise in robust estimation, filtering, and control, and numerically stable solutions have been lacking. The algorithm developed herein involves the QR factorization of the coefficient matrix and is provably numerically stable. Keywords: indefinite least-squares problems, error analysis, backward stability. 1 Introduction. Many optimization criteria have been used for parameter estimation, starting with the standard least-squares formulation of Gauss (ca. 1795) and moving to more recent works on total least-squares (TLS) and robust (or H∞) estimation (see, e.g., [3, 4, 6, 7, 8, 9]). The latter formulations have been motivated by an increasing interest in estimators that are less sensitive to data uncertainties and measurement errors. They can both be shown to require the minimization of indefinite quadratic forms, where the standard inner product of two vectors, say a^T b, is replaced by an...
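A minimal sketch of the problem being solved: minimize (Ax - b)' J (Ax - b) with the signature matrix J = diag(I_p, -I_q). Below it is solved naively through the normal equations A'JA x = A'Jb (well posed when A'JA is positive definite); the paper's contribution is precisely that a QR-based route is provably stable where this naive route need not be. Dimensions and data are invented.

```python
import numpy as np

# Indefinite least squares: min_x (Ax - b)' J (Ax - b),  J = diag(I_p, -I_q).
rng = np.random.default_rng(2)
p, q, n = 20, 3, 3
# Keep the "positive" block dominant so that A'JA is positive definite.
A = np.vstack([rng.standard_normal((p, n)),
               0.1 * rng.standard_normal((q, n))])
b = rng.standard_normal(p + q)
J = np.diag(np.concatenate([np.ones(p), -np.ones(q)]))

M = A.T @ J @ A
assert np.all(np.linalg.eigvalsh(M) > 0)     # well-posedness condition
x = np.linalg.solve(M, A.T @ J @ b)          # naive normal-equations solve
# Stationarity: the gradient A'J(Ax - b) vanishes at the minimizer.
print(np.linalg.norm(A.T @ J @ (A @ x - b)))
```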
Maximum likelihood estimation in linear models with Gaussian model matrix
 IEEE Signal Process. Lett
Cited by 8 (3 self)
Abstract—We consider the problem of estimating an unknown deterministic parameter vector in a linear model with a Gaussian model matrix. We derive the maximum likelihood (ML) estimator for this problem and show that it can be found using a simple line search over a unimodal function that can be efficiently evaluated. We then discuss the similarity between the ML, the total least squares (TLS), the regularized TLS, and the expected least squares estimators. Index Terms—Errors in variables (EIV), linear models, maximum likelihood (ML) estimation, random model matrix, total least squares (TLS).
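For contrast with the ML estimator discussed here, the classical TLS estimator for an errors-in-variables model has a well-known SVD solution: take the right singular vector of the augmented matrix [A b] associated with its smallest singular value. The data below are synthetic (invented dimensions and noise level).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 3
x_true = np.array([1.0, -2.0, 0.5])
A0 = rng.standard_normal((m, n))
A = A0 + 0.01 * rng.standard_normal((m, n))   # noisy model matrix (EIV)
b = A0 @ x_true + 0.01 * rng.standard_normal(m)

# TLS via SVD of the augmented matrix [A  b].
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]                 # right singular vector of the smallest singular value
x_tls = -v[:n] / v[n]
print(np.linalg.norm(x_tls - x_true))   # small recovery error
```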
Data Fitting Problems With Bounded Uncertainties In The Data
 SIAM J. MATRIX ANAL. APPL
, 2001
Cited by 7 (2 self)
An analysis of a class of data fitting problems, where the data uncertainties are subject to known bounds, is given in a very general setting. It is shown how such problems can be posed in a computationally convenient form, and the connection with other more conventional data fitting problems is examined. The problems have attracted interest so far in the special case when the underlying norm is the least squares norm. Here the special structure can be exploited to computational advantage, and we include some observations which contribute to algorithmic development for this particular case. We also consider some variants of the main problems and show how these too can be posed in a form which facilitates their numerical solution.
Efficient Algorithms for Least Squares Type Problems with Bounded Uncertainties
 in Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling, ed
, 1997
Cited by 5 (0 self)
We formulate and solve new least-squares-type problems for parameter estimation in the presence of bounded data uncertainties. The new methods are suitable when a priori bounds on the uncertain data are available, and their solutions lead to more meaningful results, especially when compared with other methods such as total least squares and robust estimation. Their superior performance is due to the fact that the new methods guarantee that the effect of the uncertainties will never be unnecessarily overestimated beyond what is reasonably assumed by the a priori bounds. Geometric interpretations of the solutions are provided, along with closed-form expressions for them.
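A numerical sketch of the bounded-uncertainty idea: when the model matrix perturbation is bounded in spectral norm by η, the worst-case residual is ||Ax - b|| + η||x||, so the robust estimate minimizes that sum, which behaves like a shrunken (regularized) least-squares solution. The data, η, and use of a generic Nelder-Mead minimizer are all illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
eta = 0.5                                      # assumed perturbation bound

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
# Worst-case residual over ||dA|| <= eta collapses to a deterministic cost:
f = lambda x: np.linalg.norm(A @ x - b) + eta * np.linalg.norm(x)
x_bdu = minimize(f, x_ls, method="Nelder-Mead",
                 options={"xatol": 1e-10, "fatol": 1e-10,
                          "maxiter": 20000}).x

# The robust solution shrinks relative to plain least squares.
print(np.linalg.norm(x_bdu), np.linalg.norm(x_ls))
```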
Robust Maximum Likelihood Estimation in the Linear Model
, 2000
Cited by 5 (0 self)
This paper addresses the problem of maximum likelihood parameter estimation in linear models affected by Gaussian noise whose mean and covariance matrix are uncertain. The proposed estimate maximizes a lower bound on the worst-case (with respect to the uncertainty) likelihood of the measured sample and is computed by solving a semidefinite optimization problem (SDP). The problem of linear robust estimation is also studied in the paper, and the statistical and optimality properties of the resulting linear estimator are discussed.