Results 1–10 of 17
Robust Solutions to Least-Squares Problems with Uncertain Data
, 1997
Abstract

Cited by 200 (14 self)
We consider least-squares problems where the coefficient matrices A, b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key Words. Least-squares, uncertainty, robustness, second-order cone...
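For unstructured norm-bounded perturbations the worst-case residual described above is known to reduce to a Tikhonov-like objective, max ||(A+dA)x − (b+db)|| = ||Ax − b|| + ρ||x|| + η. A minimal numerical sketch of minimizing that closed form (with synthetic data and assumed bounds ρ, η; not the paper's SOCP implementation) might look like:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
rho, eta = 0.1, 0.05  # assumed bounds on ||dA||_2 and ||db||_2 (illustrative values)

def worst_case_residual(x):
    # closed form of the worst-case residual for unstructured,
    # norm-bounded perturbations: ||Ax - b|| + rho*||x|| + eta
    return np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x) + eta

x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # plain least-squares as a starting point
res = minimize(worst_case_residual, x0)    # generic solver stands in for the SOCP
x_robust = res.x
print("robust worst-case residual:", res.fun)
print("LS worst-case residual:    ", worst_case_residual(x0))
```

The robust solution is typically shrunk relative to plain least-squares, mirroring the Tikhonov interpretation in the abstract.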
Estimation of Model Quality
 Automatica
, 1994
Abstract

Cited by 38 (7 self)
This paper gives an introduction to recent work on the problem of quantifying errors in the estimation of models for dynamic systems. This is a very large field. We therefore concentrate on approaches that have been motivated by the need for reliable models for control system design. This will involve a discussion of efforts which go under the titles of `Estimation in H∞', `Worst-Case Estimation', `Estimation in ℓ1', `Information-Based Complexity', and `Stochastic Embedding of Undermodelling'. A central theme of this survey is to examine these new methods with reference to the classic bias/variance tradeoff in model structure selection. Technical Report EE9437, Centre for Industrial Control Science and Department of Electrical and Computer Engineering, University of Newcastle, Callaghan 2308, AUSTRALIA. 1 Introduction. Our aim in this paper is to survey an area of research which has flourished in recent years. The common denominator of this work is that of finding system identificat...
Error Estimations for Indirect Measurements: Randomized vs. Deterministic Algorithms for "Black-Box" Programs
 Handbook on Randomized Computing, Kluwer, 2001
, 2000
Abstract

Cited by 33 (15 self)
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure the distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure them indirectly: by first measuring some related quantities x1, …, xn, and then using the known relation between xi and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when a program for transforming the estimates x̃1, …, x̃n for xi into an estimate for y is only available as a black box (with no source code at hand). We consider this problem in two settings: statistical, when measurement errors Δxi = x̃i − xi are inde...
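In the statistical setting the abstract describes, the simplest randomized approach is Monte Carlo error propagation: perturb the measured inputs according to their error distribution, push each perturbed sample through the black-box program, and read off the spread of the outputs. A minimal sketch (the black-box f, the estimates, and the Gaussian error model are all illustrative assumptions, not the paper's examples):

```python
import numpy as np

def f(x):
    # stands in for the black-box program: we may only call it, not inspect it
    return x[0] * np.sin(x[1]) + x[2] ** 2

x_est = np.array([1.0, 0.5, 2.0])     # measured estimates x~_i
sigma = np.array([0.01, 0.02, 0.01])  # assumed std devs of the measurement errors

rng = np.random.default_rng(1)
N = 10_000
samples = x_est + sigma * rng.standard_normal((N, 3))  # simulated noisy inputs
y_samples = np.array([f(s) for s in samples])          # propagate through black box
print("estimate y:", f(x_est))
print("estimated error std:", y_samples.std())
```

A deterministic alternative under the same black-box restriction is finite-difference sensitivity analysis, which costs one call per input instead of N calls total.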
Astrogeometry, Error Estimation, and Other Applications of Set-Valued Analysis
 ACM SIGNUM Newsletter
, 1996
Abstract

Cited by 29 (27 self)
In many real-life application problems, we are interested in numbers, namely, in the numerical values of physical quantities. There are, however, at least two classes of problems in which we are actually interested in sets: • In image processing (e.g., in astronomy), the desired black-and-white image is, from the mathematical viewpoint, a set.
Bayesian System Identification via Markov Chain Monte Carlo Techniques
, 2009
Abstract

Cited by 8 (1 self)
The work here explores new numerical methods for supporting a Bayesian approach to parameter estimation of dynamic systems. This is primarily motivated by the goal of providing accurate quantification of estimation error that is valid for data records of arbitrary, and hence even very short, length. The main innovation is the employment of the Metropolis–Hastings algorithm to construct an ergodic Markov chain with invariant density equal to the required posterior density. Monte Carlo analysis of samples from this chain then provides a means for efficiently and accurately computing posteriors for model parameters and arbitrary functions of them.
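The Metropolis–Hastings construction above can be sketched on a toy problem: a random-walk chain targeting the posterior of a single gain parameter from a short data record. The model, noise level, and proposal width below are illustrative choices, not the paper's case studies:

```python
import numpy as np

rng = np.random.default_rng(2)
# short data record from a simple gain system y = theta*u + e, e ~ N(0, 0.5^2)
theta_true = 1.5
u = rng.standard_normal(15)
y = theta_true * u + 0.5 * rng.standard_normal(15)

def log_post(theta):
    # flat prior, Gaussian likelihood with known noise std 0.5,
    # so log-posterior = log-likelihood up to a constant
    return -0.5 * np.sum((y - theta * u) ** 2) / 0.5 ** 2

chain = np.empty(20_000)
theta, lp = 0.0, log_post(0.0)
for k in range(chain.size):
    prop = theta + 0.3 * rng.standard_normal()  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[k] = theta

posterior_mean = chain[5_000:].mean()           # discard burn-in
print("posterior mean:", posterior_mean)
```

Because the chain's invariant density is the posterior itself, histograms of `chain` approximate the full posterior, not just a point estimate and covariance.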
REGULARIZATION IN REGRESSION WITH BOUNDED NOISE: A Chebyshev Center Approach
, 2007
Abstract

Cited by 7 (2 self)
We consider the problem of estimating a vector z in the regression model b = Az + w, where w is an unknown but bounded noise. As in many regularization schemes, we assume that an upper bound on the norm of z is available. To estimate z we propose a relaxation of the Chebyshev center, which is the vector that minimizes the worst-case estimation error over all feasible vectors z. Relying on recent results regarding strong duality of nonconvex quadratic optimization problems with two quadratic constraints, we prove that in the complex domain our approach leads to the exact Chebyshev center. In the real domain, this strategy results in a “pretty good” approximation of the true Chebyshev center. As we show, our estimate can be viewed as a Tikhonov regularization with a special choice of parameter that can be found efficiently by solving a convex optimization problem with two variables or a semidefinite program with three variables, regardless of the problem size. When the norm constraint on z is a Euclidean one, the problem reduces to a single-variable convex minimization problem. We then demonstrate via numerical examples that our estimator can outperform other conventional methods, such as least-squares and regularized least-squares, with respect to the estimation error. Finally, we extend our methodology to other feasible parameter sets, showing that the total least-squares (TLS) and regularized TLS can be obtained as special cases of our general approach.
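The single-variable structure mentioned for the Euclidean case can be illustrated with a simpler, related scheme: Tikhonov regularization whose parameter is tuned by a one-dimensional search so that the estimate respects the norm bound ||z|| ≤ L. This is only a sketch of the parameter-search idea on synthetic data, not the paper's exact Chebyshev-center estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 8))
z_true = rng.standard_normal(8)
b = A @ z_true + 0.1 * rng.standard_normal(30)
L = np.linalg.norm(z_true)  # assumed known Euclidean norm bound on z

def tikhonov(alpha):
    # regularized LS solution (A'A + alpha*I)^{-1} A'b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

z_ls = np.linalg.lstsq(A, b, rcond=None)[0]
if np.linalg.norm(z_ls) <= L:
    z_hat = z_ls  # unconstrained LS already feasible
else:
    # ||tikhonov(alpha)|| decreases in alpha: bisect the single variable
    lo, hi = 0.0, 1e6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(tikhonov(mid)) > L:
            lo = mid
        else:
            hi = mid
    z_hat = tikhonov(hi)
print("||z_hat|| =", np.linalg.norm(z_hat), "  L =", L)
```

The abstract's point is that the special parameter choice (found via a small convex program) can beat both plain and generically regularized least-squares in estimation error.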
A Minimax Chebyshev Estimator for Bounded Error Estimation
Abstract

Cited by 4 (1 self)
We develop a nonlinear minimax estimator for the classical linear regression model, assuming that the true parameter vector lies in an intersection of ellipsoids. We seek an estimate that minimizes the worst-case estimation error over the given parameter set. Since this problem is intractable, we approximate it using semidefinite relaxation, and refer to the resulting estimate as the relaxed Chebyshev center (RCC). We show that the RCC is unique and feasible, meaning it is consistent with the prior information. We then prove that the constrained least-squares (CLS) estimate for this problem can also be obtained as a relaxation of the Chebyshev center, one that is looser than the RCC. Finally, we demonstrate through simulations that the RCC can significantly improve the estimation error over the CLS method. Index Terms—Bounded error estimation, Chebyshev center, constrained least-squares, semidefinite programming, semidefinite relaxation.
Rapprochement between Bounded Error and Stochastic Estimation Theory
 International Journal of Adaptive Control and Signal Processing, To Appear
, 1994
Abstract

Cited by 1 (1 self)
There has been a recent surge of interest in estimation theory based on very simple noise descriptions; for example, the absolute value of a noise sample is simply bounded. To date this line of work has not been critically compared to pre-existing work on stochastic estimation theory, which uses more complicated noise descriptions. The present paper attempts to redress this gap by examining the rapprochement between the two schools of work. For example, we show that for many problems a bounded error estimation approach is precisely equivalent, in terms of final result, to the stochastic approach of Bayesian estimation. We also show that in spite of having the advantages of being simple and intuitive, bounded error estimation theory is demanding on the quantitative accuracy of prior information. In contrast, we discuss how the assumptions underlying stochastic estimation theory are more complex, but have the key feature that qualitative assumptions on the nature of a typical disturbance se...
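The equivalence claimed above is easy to see on a toy gain model: with a flat prior and noise uniform on [−δ, δ], the Bayesian posterior is uniform over exactly the feasible set that bounded-error (set-membership) estimation computes. A minimal sketch with an illustrative positive-input model (the numbers are assumptions, not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true = 2.0
u = rng.uniform(0.5, 2.0, 50)                  # positive inputs keep interval logic simple
delta = 0.3                                    # assumed hard bound |e_t| <= delta
y = theta_true * u + rng.uniform(-delta, delta, 50)

# bounded-error estimate: intersect the per-sample intervals for theta,
# since |y_t - theta*u_t| <= delta and u_t > 0 gives
# (y_t - delta)/u_t <= theta <= (y_t + delta)/u_t
lo = np.max((y - delta) / u)
hi = np.min((y + delta) / u)

# Bayesian view: flat prior + uniform noise on [-delta, delta]
# => posterior uniform over exactly [lo, hi]; its mean is the midpoint
posterior_mean = 0.5 * (lo + hi)
print("feasible interval:", (lo, hi), " posterior mean:", posterior_mean)
```

The paper's caution also shows here: if the assumed δ is even slightly smaller than the true noise bound, the intersection can become empty, which is the "demanding on quantitative accuracy of prior information" point.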
Some System Identification Challenges and Approaches
Abstract
Abstract: The field of control-oriented system identification is mature. Nevertheless, it is still very active. This is because there are many important unsolved challenges. Of these, this paper considers a selection. This involves considering the estimation of general nonlinear model structures, together with accurate error bounds, using methods that scale well to models of high dimension. A particular strength of the system identification field is that it has always actively sought to understand, embrace and develop ideas from other fields, such as statistics, mathematics and econometrics. This paper proposes a continuation of this successful strategy by proposing and profiling the adoption of new ideas originating in statistics, signal processing and statistical mechanics.
Towards a Combination of Interval and Ellipsoid Uncertainty
Abstract
In many real-life situations, we do not know the probability distribution of measurement errors (Δx1, …, Δxn); we only know the upper bounds Δi on these errors. In such situations, once we know the measurement results x̃1, …, x̃n, we can only conclude that the actual (unknown) value of each quantity xi belongs to the interval xi = [x̃i − Δi, x̃i + Δi]. Based on this interval uncertainty, we want to find the range of possible values of the desired quantity y = f(x1, …, xn). In general, computing this range is an NP-hard problem, but in the linear approximation f = ỹ + c1 Δx1 + … + cn Δxn, we have a linear-time algorithm for computing the range. In other situations, we know the ellipsoid that contains the actual values (Δx1, …, Δxn); in the reasonable case of “independent” variables, we have an ellipsoid E of the type Δx1² + … + Δxn² ≤ r². In this case, we also have a linear-time algorithm for computing the range of a linear function f. In some cases, however, we have a combination of interval and ellipsoid uncertainty: the actual values (Δx1, …, Δxn) belong to the intersection of the box x1 × … × xn and the ellipsoid. In general, estimating the range over the intersection enables us to get a narrower range for f. In this paper, we provide two algorithms for estimating the range of a linear function over such an intersection: a simpler O(n log(n)) algorithm and a (somewhat more complex) linear-time algorithm. Both algorithms can be extended to the lp-case, when instead of an ellipsoid we have a set |Δx1|^p/σ1^p + … + |Δxn|^p/σn^p ≤ r...
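The two separate ranges in the abstract have simple closed forms, and taking the smaller of the two half-widths already gives a valid, though not necessarily tight, enclosure over the intersection (the paper's algorithms compute the exact intersection range). A minimal sketch with illustrative numbers:

```python
import numpy as np

c = np.array([1.0, -2.0, 0.5])       # coefficients c_i of the linearized f
Delta = np.array([0.1, 0.05, 0.2])   # interval half-widths Delta_i
r = 0.15                             # ellipsoid radius: dx_1^2 + ... + dx_n^2 <= r^2
y_tilde = 3.0                        # value of f at the measured point

# max of sum c_i*dx_i over the box: attained at dx_i = sign(c_i)*Delta_i
half_box = np.sum(np.abs(c) * Delta)
# max over the ellipsoid, by Cauchy-Schwarz: r * ||c||_2
half_ell = r * np.linalg.norm(c)
# over the intersection the true half-width is at most the smaller of the two
half_int = min(half_box, half_ell)

print("box range:         ", [y_tilde - half_box, y_tilde + half_box])
print("ellipsoid range:   ", [y_tilde - half_ell, y_tilde + half_ell])
print("intersection bound:", [y_tilde - half_int, y_tilde + half_int])
```

Here the box is the binding constraint (half-width 0.3 versus roughly 0.344 for the ellipsoid), so the naive bound coincides with the box range; the paper's point is that an exact computation over the intersection can still be strictly narrower.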