Results 1–10 of 31
Robust mean-squared error estimation in the presence of model uncertainties
IEEE Trans. on Signal Processing, 2005
Cited by 57 (34 self)
Abstract—We consider the problem of estimating an unknown parameter vector x in a linear model that may be subject to uncertainties, where the vector x is known to satisfy a weighted norm constraint. We first assume that the model is known exactly and seek the linear estimator that minimizes the worst-case mean-squared error (MSE) across all possible values of x. We show that for an arbitrary choice of weighting, the optimal minimax MSE estimator can be formulated as a solution to a semidefinite programming problem (SDP), which can be solved very efficiently. We then develop a closed-form expression for the minimax MSE estimator for a broad class of weighting matrices and show that it coincides with the shrunken estimator of Mayer and Willke, with a specific choice of shrinkage factor that explicitly takes the prior information into account. Next, we consider the case in which the model matrix is subject to uncertainties and seek the robust linear estimator that minimizes the worst-case MSE across all possible values of x and all possible values of the model matrix. As we show, the robust minimax MSE estimator can also be formulated as a solution to an SDP. Finally, we demonstrate through several examples that the minimax MSE estimator can significantly improve performance over the conventional least-squares estimator, and when the model matrix is subject to uncertainties, the robust minimax MSE estimator can lead to a considerable improvement in performance over the minimax MSE estimator. Index Terms—Data uncertainty, linear estimation, mean-squared error estimation, minimax estimation, robust estimation.
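In the special case of an unweighted (spherical) constraint ||x|| <= L, a shrunken least-squares estimator of the kind described above has a simple closed form: scale the least-squares solution by L^2 / (L^2 + eps0), where eps0 is the MSE of the least-squares estimator. The sketch below illustrates only this spherical special case; the paper's general weighted result and its SDP formulation are not reproduced here.

```python
import numpy as np

def shrunken_ls(H, y, L, sigma2):
    """Shrunken least-squares estimate for y = H x + noise, with prior ||x|| <= L.

    The least-squares solution is scaled by L^2 / (L^2 + eps0), where eps0 is
    the MSE of the LS estimator, sigma^2 * trace((H^T H)^{-1}).  This is the
    closed-form shrinkage for a spherical parameter set (an illustrative
    special case, not the paper's general weighted construction).
    """
    HtH = H.T @ H
    x_ls = np.linalg.solve(HtH, H.T @ y)          # ordinary least squares
    eps0 = sigma2 * np.trace(np.linalg.inv(HtH))  # MSE of the LS estimator
    return (L**2 / (L**2 + eps0)) * x_ls
```

Because the shrinkage factor is strictly less than one, the estimate trades a small bias for a reduction in variance, which is where the worst-case MSE gain over plain least squares comes from.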
Linear minimax regret estimation of deterministic parameters with bounded data uncertainties
IEEE Trans. Signal Process., 2004
Cited by 41 (28 self)
Abstract—We develop a new linear estimator for estimating an unknown parameter vector x in a linear model in the presence of bounded data uncertainties. The estimator is designed to minimize the worst-case regret over all bounded data vectors, namely, the worst-case difference between the mean-squared error (MSE) attainable using a linear estimator that does not know the true parameters x and the optimal MSE attained using a linear estimator that knows x. We demonstrate through several examples that the minimax regret estimator can significantly improve performance over the conventional least-squares estimator, as well as over several other least-squares alternatives. Index Terms—Deterministic parameter estimation, linear estimation, mean-squared error estimation, bounded data uncertainties, minimax estimation, regret.
MMSE whitening and subspace whitening
IEEE Trans. Inform. Theory
Cited by 18 (6 self)
Abstract—This correspondence develops a linear whitening transformation that minimizes the mean-squared error (MSE) between the original and whitened data, i.e., one that results in a white output that is as close as possible to the input, in an MSE sense. When the covariance matrix of the data is not invertible, the whitening transformation is designed to optimally whiten the data on a subspace in which it is contained. The optimal whitening transformation is developed both for the case of finite-length data vectors and infinite-length signals. Index Terms—Mean-squared error (MSE) whitening, subspace whitening, whitening.
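For an invertible covariance C, the symmetric transform W = C^{-1/2} is a natural candidate of this kind: it whitens exactly (W C W^T = I) and, being symmetric, perturbs the data as little as possible among whitening transforms. A minimal numpy sketch of this symmetric whitening (illustrative only; the paper's treatment of the singular/subspace case is more involved):

```python
import numpy as np

def symmetric_whitening(C):
    """Symmetric whitening transform W = C^{-1/2} for an invertible covariance C.

    Built from the eigendecomposition C = U diag(lam) U^T, so that
    W C W^T = I and W = W^T.  Sketch for the invertible case only.
    """
    eigvals, U = np.linalg.eigh(C)            # C is symmetric positive definite
    return U @ np.diag(eigvals ** -0.5) @ U.T
```

Applying W to zero-mean data with covariance C yields output with identity covariance; the symmetry of W is what keeps the whitened data close to the input.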
Robust Mean-Squared Error Estimation of Multiple . . .
2005
Cited by 15 (5 self)
This paper is a continuation of the work in [11] and [2] on the problem of estimating, by a linear estimator, N unobservable input vectors undergoing the same linear transformation, from noise-corrupted observable output vectors. Whereas in the aforementioned papers only the matrix representing the linear transformation was assumed uncertain, here we are concerned with the case in which the second-order statistics of the noise vectors (i.e., their covariance matrices) are also subject to uncertainty. We seek a robust mean-squared error estimator immune to both sources of uncertainty. We show that the optimal robust mean-squared error estimator has a special form represented by an elementary block-circulant matrix; moreover, when the uncertainty sets are ellipsoid-like, the problem of finding the optimal estimator matrix can be reduced to solving an explicit semidefinite programming problem whose size is independent of N.
Comparing between estimation approaches: Admissible and dominating linear estimators
 August 2005, EE Dept., Technion–Israel Institute of Technology
Cited by 13 (9 self)
We treat the problem of evaluating the performance of linear estimators for estimating a deterministic parameter vector x in a linear regression model, with the mean-squared error (MSE) as the performance measure. Since the MSE depends on the unknown vector x, direct comparison between estimators is a difficult problem. Here we consider a framework for examining the MSE of different linear estimation approaches based on the concepts of admissible and dominating estimators. We develop a general procedure for determining whether or not a linear estimator is MSE admissible, and for constructing an estimator strictly dominating a given inadmissible method, so that its MSE is smaller for all x. In particular, we show that both problems can be addressed in a unified manner for arbitrary constraint sets on x by considering a certain convex optimization problem. We then demonstrate the details of our method for the case in which x is constrained to an ellipsoidal set, and for unrestricted choices of x. As a by-product of our results, we derive a closed-form solution for the minimax MSE estimator on an ellipsoid, which is valid for arbitrary model parameters, as long as the signal-to-noise ratio exceeds a certain threshold. Key Words—Linear estimation, regression, admissible estimators, dominating estimators, mean-squared error (MSE) estimation, minimax MSE estimation.
Least-squares inner product shaping
2002
Cited by 10 (4 self)
We develop methods that construct an optimal set of vectors with a specified inner product structure, from a given set of vectors in a complex Hilbert space. The optimal vectors are chosen to minimize the sum of the squared norms of the errors between the constructed vectors and the given vectors. Four special cases are considered. In the first, the constructed vectors are orthonormal. In the second, they are orthogonal. In the third, the Gram matrix of inner products of the constructed vectors is a circulant matrix. As we show, the vectors form a cyclic set. In the fourth, the Gram matrix has the property that the rows are all permutations of each other. The constructed vectors are shown to be geometrically uniform.
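For the first (orthonormal) case, the classical answer is symmetric (Löwdin) orthogonalization: the orthonormal set nearest in least squares to the columns of a matrix S is the unitary factor of the polar decomposition of S. A short numpy sketch under that reading:

```python
import numpy as np

def nearest_orthonormal(S):
    """Orthonormal vectors minimizing the sum of squared errors to the columns of S.

    If S = U Sigma V^H is the singular value decomposition, the minimizer is
    Q = U V^H, the unitary factor of the polar decomposition of S.
    """
    U, _, Vh = np.linalg.svd(S, full_matrices=False)
    return U @ Vh
```

The other three cases in the paper (orthogonal, circulant-Gram, and permuted-Gram structures) replace the orthonormality constraint with the corresponding inner-product structure and are not sketched here.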
Optimal sequential energy allocation for inverse problems
IN SIGNAL PROCESSING, 2007
Cited by 7 (4 self)
This paper investigates the advantages of adaptive waveform amplitude design for estimating parameters of an unknown channel/medium under average energy constraints. We present a statistical framework for sequential design of experiments (e.g., design of waveforms in adaptive sensing) that improves parameter estimation performance (e.g., for unknown channel parameters) in terms of reduction in mean-squared error (MSE). We treat a multiple-time-step design problem for a linear Gaussian model in which the shape of the input design vectors (one per time step) remains constant and their amplitudes are chosen as a function of past measurements to minimize MSE. For the two-step case, we derive the optimal energy allocation at the second step as a function of the first measurement. Our adaptive two-step strategy yields an MSE improvement of at least 1.65 dB relative to the optimal nonadaptive strategy, but is not implementable since it requires knowledge of the noise amplitude. We then present an implementable design for the two-step strategy which asymptotically achieves optimal performance. Motivated by the optimal two-step strategy, we propose a suboptimal adaptive multi-step energy allocation strategy that can achieve an MSE improvement of more than 5 dB. We demonstrate our general approach in the context of MIMO channel estimation and inverse scattering problems.
A covariance shaping framework for linear multiuser detection
IEEE Trans. Inf. Theory, 2005
Cited by 6 (3 self)
Abstract—A new class of linear multiuser receivers, referred to as the covariance shaping multiuser (CSMU) receiver, is proposed for suppression of interference in multiuser wireless communication systems. This class of receivers is based on the recently proposed covariance shaping least-squares estimator, and is designed to minimize the total variance of the weighted error between the receiver output and the observed signal, subject to the constraint that the covariance of the noise component in the receiver output is proportional to a given covariance matrix, so that we control the dynamic range and spectral shape of the output noise. Some of the well-known linear multiuser receivers are shown to be special cases of the CSMU receiver. This allows us to interpret these receivers as the receivers that minimize the total error variance in the observations, among all linear receivers with the same output noise covariance, and to analyze their performance in a unified way. We derive exact and approximate expressions for the probability of bit error, as well as the asymptotic signal-to-interference-plus-noise ratio in the large-system limit. We also characterize the spectral efficiency versus energy per information bit of the CSMU receiver in the wideband regime. Finally, we consider a special case of the CSMU receiver, equivalent to a mismatched minimum mean-squared error (MMSE) receiver, in which the channel signal-to-noise ratio (SNR) is not known precisely. Using our general performance analysis results, we characterize the performance of the mismatched MMSE receiver. We then treat the case in which the SNR is known to lie in a given uncertainty range, and develop a robust mismatched MMSE receiver whose performance is very close to that of the MMSE receiver over the entire uncertainty range. Index Terms—Code-division multiple access (CDMA), covariance shaping, mismatched minimum mean-squared error (MMSE), multiuser detection, noise shaping, robust MMSE.
A pretest-like estimator dominating the least-squares method
J. Statist. Plan. Inference, 2008
Cited by 4 (2 self)
A minimax Chebyshev estimator for bounded error estimation
Cited by 4 (1 self)
Abstract—We develop a nonlinear minimax estimator for the classical linear regression model, assuming that the true parameter vector lies in an intersection of ellipsoids. We seek an estimate that minimizes the worst-case estimation error over the given parameter set. Since this problem is intractable, we approximate it using semidefinite relaxation, and refer to the resulting estimate as the relaxed Chebyshev center (RCC). We show that the RCC is unique and feasible, meaning it is consistent with the prior information. We then prove that the constrained least-squares (CLS) estimate for this problem can also be obtained as a relaxation of the Chebyshev center that is looser than the RCC. Finally, we demonstrate through simulations that the RCC can significantly improve the estimation error over the CLS method. Index Terms—Bounded error estimation, Chebyshev center, constrained least-squares, semidefinite programming, semidefinite relaxation.
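As a point of comparison for the CLS baseline mentioned above, constrained least squares over a single spherical set ||x|| <= L can be solved directly by bisection on the Lagrange multiplier (the RCC itself requires a semidefinite-programming solver and is not sketched here). The function below is an illustrative sketch for that single-ball special case, not the paper's intersection-of-ellipsoids setting:

```python
import numpy as np

def cls_ball(A, b, L, tol=1e-10):
    """Constrained least squares: minimize ||A x - b||^2 subject to ||x|| <= L.

    If the unconstrained LS solution is feasible, it is returned.  Otherwise
    the Lagrange multiplier lam in (A^T A + lam I) x = A^T b is found by
    bisection so that the solution lands on the boundary ||x|| = L.
    """
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.solve(AtA, Atb)
    if np.linalg.norm(x) <= L:
        return x
    lo, hi = 0.0, 1.0
    # Grow hi until the regularized solution falls inside the ball
    while np.linalg.norm(np.linalg.solve(AtA + hi * np.eye(n), Atb)) > L:
        hi *= 2.0
    # Bisect on lam: ||x(lam)|| decreases monotonically in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x = np.linalg.solve(AtA + mid * np.eye(n), Atb)
        if np.linalg.norm(x) > L:
            lo = mid
        else:
            hi = mid
    return x
```

With A equal to the identity, this reduces to projecting b onto the ball of radius L, which gives a quick sanity check on the bisection.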