Results 1–10 of 136
Stochastic Perturbation Theory
, 1988
Cited by 617 (31 self)
In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variation in the perturbed quantity. Up to the higher-order terms that are ignored in the expansion, these statistics tend to be more realistic than perturbation bounds obtained in terms of norms. The technique is applied to a number of problems in matrix perturbation theory, including least squares and the eigenvalue problem. Key words: perturbation theory, random matrix, linear system, least squares, eigenvalue, eigenvector, invariant subspace, singular value. AMS(MOS) subject classifications: 15A06, 15A12, 15A18, 15A52, 15A60. 1. Introduction. Let A be a matrix and let F be a matrix-valued function of A. Two principal problems of matrix perturbation theory are the following. Given a matrix E, pr...
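As a quick illustration of the approach this abstract describes, the linear-system case can be sketched numerically: compare the first-order variance estimate against a Monte Carlo simulation. All names, sizes, and the i.i.d. Gaussian perturbation model below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.normal(size=n)
x = np.linalg.solve(A, b)
Ainv = np.linalg.inv(A)

sigma = 1e-3   # std of each entry of the random perturbation E (assumed model)

# First-order expansion: x(A + E) ~ x - Ainv @ E @ x for small E.
# With E_ij i.i.d. N(0, sigma^2), E @ x has independent entries of variance
# sigma^2 * ||x||^2, so Var[x_i] ~ sigma^2 * ||x||^2 * sum_k Ainv[i, k]^2.
var_first_order = sigma**2 * (x @ x) * np.sum(Ainv**2, axis=1)

# Monte Carlo estimate of the same variances, for comparison.
samples = np.array([np.linalg.solve(A + sigma * rng.normal(size=(n, n)), b)
                    for _ in range(4000)])
var_mc = samples.var(axis=0)
```

The two estimates typically agree closely at this perturbation level and, as the abstract argues, are far less pessimistic than a norm-based bound.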
Robust Solutions To Least-Squares Problems With Uncertain Data
, 1997
Cited by 149 (13 self)
We consider least-squares problems where the coefficient matrices A, b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key words: least squares, uncertainty, robustness, second-order cone...
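A minimal sketch of the worst-case residual this abstract minimizes: for perturbations bounded as ||[dA, db]||_F <= rho, the worst-case residual has the closed form ||Ax - b|| + rho*sqrt(||x||^2 + 1), making the robust problem an unconstrained convex minimization. The paper solves it with second-order cone programming; the generic solver, rho, and data below are illustrative stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)
rho = 0.5   # assumed bound on ||[dA, db]||_F (illustrative value)

def worst_case_residual(x):
    # max over ||[dA, db]||_F <= rho of ||(A + dA) @ x - (b + db)||
    return np.linalg.norm(A @ x - b) + rho * np.sqrt(x @ x + 1.0)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS as starting point
x_robust = minimize(worst_case_residual, x_ls).x
```

The robust solution shrinks the least-squares solution, matching the Tikhonov interpretation the abstract mentions, with the regularization strength determined by rho rather than chosen by hand.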
Robust computation of optic flow in a multiscale differential framework
 International Journal of Computer Vision
, 1995
Cited by 96 (2 self)
We have developed a new algorithm for computing optical flow in a differential framework. The image sequence is first convolved with a set of linear, separable spatiotemporal filter kernels similar to those that have been used in other early vision problems such as texture and stereopsis. The brightness constancy constraint can then be applied to each of the resulting images, giving us, in general, an overdetermined system of equations for the optical flow at each pixel. There are three principal sources of error: (a) stochastic error due to sensor noise, (b) systematic errors in the presence of large displacements, and (c) errors due to failure of the brightness constancy model. Our analysis of these errors leads us to develop an algorithm based on a robust version of total least squares. Each optical flow vector computed has an associated reliability measure which can be used in subsequent processing. The performance of the algorithm on the data set used by Barron et al. (IJCV 1994) compares favorably with other techniques. In addition to being separable, the filters used are also causal, incorporating only past time frames. The algorithm is fully parallel and has been implemented on a multiple-processor machine.
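The total-least-squares core of such a per-pixel solve can be sketched via the SVD of the augmented matrix. The paper uses a robust variant and real filter outputs; the plain TLS routine and toy gradient data below are illustrative.

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares solution of A x ~ b via the SVD of [A | b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]              # right singular vector of the smallest singular
    return -v[:n] / v[n]    # value; assumes v[n] != 0 (the generic case)

# Toy stand-in for one pixel's overdetermined flow system: each row of G holds
# spatial gradients from one filter channel, d holds the (negated) temporal
# gradients. Names, sizes, and noise level are illustrative.
rng = np.random.default_rng(2)
true_flow = np.array([1.0, -0.5])
G = rng.normal(size=(12, 2))
d = G @ true_flow + 0.01 * rng.normal(size=12)
flow = tls_solve(G, d)
```

Unlike ordinary least squares, TLS accounts for noise in the gradient matrix G as well as in d, which is why the authors build on it.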
Spatial Resolution Enhancement of Low-Resolution . . .
, 1998
Cited by 61 (0 self)
Recent years have seen growing interest in the problem of super-resolution restoration of video sequences. Whereas in the traditional single-image restoration problem only a single input image is available for processing, the task of reconstructing super-resolution images from multiple undersampled and degraded images can take advantage of the additional spatiotemporal data available in the image sequence. In particular, camera and scene motion lead to frames in the source video sequence containing similar, but not identical, information. The additional information available in these frames makes possible the reconstruction of visually superior frames at higher resolution than that of the original data. In this paper we review the current state of the art and identify promising directions for future research.
Least squares 3D surface and curve matching
 ISPRS Journal of Photogrammetry and Remote Sensing
, 2005
Cited by 60 (13 self)
The automatic co-registration of point clouds, representing 3D surfaces, is a relevant problem in 3D modeling. This multiple registration problem can be defined as a surface matching task. We treat it as least squares matching of overlapping surfaces. The surface may have been digitized/sampled point by point using a laser scanner device, a photogrammetric method, or other surface measurement techniques. Our proposed method estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface, using the Generalized Gauss-Markoff model, minimizing the sum of squares of the Euclidean distances between the surfaces. This formulation makes it possible to match arbitrarily oriented 3D surface patches. It fully considers 3D geometry. Besides the mathematical model and execution aspects, we address further extensions of the basic model. We also show how this method can be used for curve matching in 3D space and matching of curves to surfaces. Some practical examples based on the registration of close-range laser scanner and photogrammetric point clouds are presented to demonstrate the method. This surface matching technique is a generalization of the least squares image matching concept and offers high flexibility for any kind of 3D surface correspondence problem, as well as statistical tools for analysis of the quality of the final matching results.
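The least-squares flavor of surface matching is easiest to see in the simplified case of known point-to-point correspondences, where the rigid transformation minimizing the sum of squared distances has a closed form (the Kabsch/Procrustes solution). This is only a sketch of the underlying idea; the paper's Gauss-Markoff formulation handles unknown correspondences and more general transformations.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares R, t with R @ P[i] + t ~ Q[i] (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

# Synthetic check: recover a known rotation and translation exactly.
rng = np.random.default_rng(3)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.2, -0.1, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```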
TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES
 SIAM J. MATRIX ANAL. APPL
, 1999
Cited by 56 (2 self)
Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. We show how Tikhonov’s regularization method, which in its original formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that, in certain cases with large perturbations, the new method is superior to standard regularization methods.
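For reference, the standard-form Tikhonov problem this abstract starts from, min ||Ax - b||^2 + lam^2 ||x||^2, has the normal-equations solution below. The discrete ill-posed toy problem is illustrative, not from the paper.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Standard-form Tikhonov solution of min ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Discrete ill-posed toy problem: gradually decaying singular values.
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.normal(size=(30, 30)))
V, _ = np.linalg.qr(rng.normal(size=(8, 8)))
s = np.logspace(0, -6, 8)
A = (U[:, :8] * s) @ V.T
x_true = rng.normal(size=8)
b = A @ x_true + 1e-5 * rng.normal(size=30)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # typically noise-dominated
x_reg = tikhonov(A, b, lam=1e-3)                # filtered, stable solution
```

The parameter lam damps the components of the solution along singular directions with singular values below roughly lam, which is what stabilizes the computation.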
Robust mean-squared error estimation in the presence of model uncertainties
 IEEE Trans. on Signal Processing
, 2005
Cited by 52 (37 self)
We consider the problem of estimating an unknown parameter vector x in a linear model that may be subject to uncertainties, where the vector x is known to satisfy a weighted norm constraint. We first assume that the model is known exactly and seek the linear estimator that minimizes the worst-case mean-squared error (MSE) across all possible values of x. We show that for an arbitrary choice of weighting, the optimal minimax MSE estimator can be formulated as a solution to a semidefinite programming problem (SDP), which can be solved very efficiently. We then develop a closed-form expression for the minimax MSE estimator for a broad class of weighting matrices and show that it coincides with the shrunken estimator of Mayer and Willke, with a specific choice of shrinkage factor that explicitly takes the prior information into account. Next, we consider the case in which the model matrix is subject to uncertainties and seek the robust linear estimator that minimizes the worst-case MSE across all possible values of x and all possible values of the model matrix. As we show, the robust minimax MSE estimator can also be formulated as a solution to an SDP. Finally, we demonstrate through several examples that the minimax MSE estimator can significantly improve performance over the conventional least-squares estimator, and when the model matrix is subject to uncertainties, the robust minimax MSE estimator can lead to a considerable improvement in performance over the minimax MSE estimator. Index Terms: data uncertainty, linear estimation, mean-squared error estimation, minimax estimation, robust estimation.
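A hedged sketch of the shrunken estimator mentioned in the abstract, under assumptions the listing does not fully pin down: identity weighting, white noise of known variance, and the bound ||x||^2 <= L2. The shrinkage factor used below is one common closed form; the paper's exact expression may differ.

```python
import numpy as np

def shrunken_minimax(A, y, noise_var, L2):
    """Shrunken LS estimate a * x_ls with a = L2 / (L2 + tr(Cov[x_ls])).

    Assumes identity weighting, white noise with variance noise_var, and
    the norm bound ||x||^2 <= L2 (illustrative reading of the abstract).
    """
    x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
    cov_trace = noise_var * np.trace(np.linalg.inv(A.T @ A))
    return L2 / (L2 + cov_trace) * x_ls

rng = np.random.default_rng(5)
A = rng.normal(size=(15, 4))
x = rng.normal(size=4)
x /= np.linalg.norm(x)                       # enforce the prior ||x|| = 1
y = A @ x + 0.5 * rng.normal(size=15)
x_mm = shrunken_minimax(A, y, noise_var=0.25, L2=1.0)
```

The factor is strictly between 0 and 1, so the estimate always shrinks the least-squares solution toward zero; this is how the prior norm bound enters.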
Parameter Estimation In The Presence Of Bounded Data Uncertainties
 SIAM J. Matrix Anal. Appl
, 1998
Cited by 43 (7 self)
We formulate and solve a new parameter estimation problem in the presence of data uncertainties. The new method is suitable when a priori bounds on the uncertain data are available, and its solution leads to more meaningful results, especially when compared with other methods such as total least-squares and robust estimation. Its superior performance is due to the fact that the new method guarantees that the effect of the uncertainties will never be unnecessarily overestimated, beyond what is reasonably assumed by the a priori bounds. A geometric interpretation of the solution is provided, along with a closed-form expression for it. We also consider the case in which only selected columns of the coefficient matrix are subject to perturbations. Key words: least-squares estimation, regularized least-squares, ridge regression, total least-squares, robust estimation, modeling errors, secular equation. AMS subject classifications: 15A06, 65F05, 65F10, 65F35, 65K10, 93C41, 93E10, 93E24.
Regularization by truncated total least squares
 SIAM J. Sci. Comp
, 1997
Cited by 39 (4 self)
The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use of TLS for solving problems with very ill-conditioned coefficient matrices whose singular values decay gradually (so-called discrete ill-posed problems), where some regularization is necessary to stabilize the computed solution. We filter the solution by truncating the small singular values of the TLS matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose and test an iterative algorithm based on Lanczos bidiagonalization for computing truncated TLS solutions.
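The truncation idea can be written directly in terms of the SVD of the augmented matrix [A | b], using the standard minimum-norm truncated-TLS formula x_k = -V12 V22^+ (block names follow the usual partitioning of the right singular vectors; the demo data is illustrative).

```python
import numpy as np

def truncated_tls(A, b, k):
    """Minimum-norm truncated TLS solution keeping k 'signal' singular values."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
    V = Vt.T
    V12 = V[:n, k:]            # first n rows of the discarded singular vectors
    V22 = V[n:, k:].ravel()    # last row of the discarded singular vectors
    return -V12 @ V22 / (V22 @ V22)   # x_k = -V12 V22^+ (pseudo-inverse)

rng = np.random.default_rng(6)
A = rng.normal(size=(10, 3))
x_true = np.array([1.0, 2.0, -1.0])
b = A @ x_true                 # consistent system: k = n recovers x exactly
x_k = truncated_tls(A, b, k=3)
```

For noisy, ill-conditioned A, one would choose k below the number of columns to discard the noise-dominated directions, which is the regularization step the abstract describes.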
Scalable Extrinsic Calibration of Omni-Directional Image Networks
 International Journal of Computer Vision
, 2002
Cited by 36 (6 self)
We describe a linear-time algorithm that recovers absolute camera orientations and positions, along with uncertainty estimates, for networks of terrestrial image nodes spanning hundreds of meters in outdoor urban scenes. The algorithm produces pose estimates globally consistent to roughly 0.1° (2 milliradians) and 5 centimeters on average, or about four pixels of epipolar alignment.