Results 1-10 of 75
Robust Solutions To Least-Squares Problems With Uncertain Data
, 1997
Abstract
Cited by 146 (12 self)
We consider least-squares problems where the coefficient matrices A, b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key Words. Least-squares, uncertainty, robustness, second-order cone...
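For unstructured norm-bounded perturbations, the worst-case residual described above has a simple closed form, ||Ax - b|| + rho*sqrt(||x||^2 + 1), which a short sketch can minimize directly. This is an illustrative stand-in for the paper's SOCP algorithm; the matrix, right-hand side, and bound rho below are invented.

```python
# Illustrative sketch, not the paper's SOCP method: minimize the
# closed-form worst-case residual ||Ax - b|| + rho*sqrt(||x||^2 + 1)
# for an unstructured perturbation of norm at most rho (synthetic data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
rho = 0.5   # assumed perturbation bound

def worst_case_residual(x):
    return np.linalg.norm(A @ x - b) + rho * np.sqrt(x @ x + 1.0)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary LS, for comparison
x_rob = minimize(worst_case_residual, x_ls, method="BFGS").x
```

Consistent with the Tikhonov interpretation in the abstract, the robust solution is shrunk relative to the plain least-squares solution.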
Solving ill-conditioned and singular linear systems: A tutorial on regularization
 SIAM Rev
, 1998
Abstract
Cited by 83 (2 self)
Abstract. It is shown that the basic regularization procedures for finding meaningful approximate solutions of ill-conditioned or singular linear systems can be phrased and analyzed in terms of classical linear algebra that can be taught in any numerical analysis course. Apart from rewriting many known results in a more elegant form, we also derive a new two-parameter family of merit functions for the determination of the regularization parameter. The traditional merit functions from generalized cross-validation (GCV) and generalized maximum likelihood (GML) are recovered as special cases.
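The classical-linear-algebra framing in this abstract can be made concrete in a few lines: Tikhonov regularization written through the SVD as filtered expansion coefficients. The problem data and parameter below are invented for illustration.

```python
# Sketch: Tikhonov regularization via the SVD, with filter factors
# f_i = s_i^2 / (s_i^2 + lam^2) applied to the expansion coefficients.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
lam = 0.1   # illustrative regularization parameter

U, s, Vt = np.linalg.svd(A, full_matrices=False)
f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
x_svd = Vt.T @ (f * (U.T @ b) / s)

# identical to solving the regularized normal equations
x_ne = np.linalg.solve(A.T @ A + lam**2 * np.eye(8), A.T @ b)
```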
TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES
 SIAM J. MATRIX ANAL. APPL
, 1999
Abstract
Cited by 56 (2 self)
Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. We show how Tikhonov’s regularization method, which in its original formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that, in certain cases with large perturbations, the new method is superior to standard regularization methods.
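For reference, the basic total least squares solution that this formulation builds on can be read directly off the SVD of the augmented matrix [A b]. The sketch below uses synthetic, noise-free data, so the TLS solution must reproduce the exact one.

```python
# Sketch: classical TLS via the SVD of the augmented matrix [A  b].
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((15, n))
x_true = rng.standard_normal(n)
b = A @ x_true                       # consistent, noise-free right-hand side

_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]                           # right singular vector of the smallest singular value
x_tls = -v[:n] / v[n]                # TLS solution
```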
A Computationally Efficient Superresolution Image Reconstruction Algorithm
, 2000
Abstract
Cited by 52 (4 self)
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large-scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. Effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward-looking infrared (FLIR) camera image sequence.
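The reason (block) circulant structure is attractive here: a circulant operator is diagonalized by the FFT, so a Tikhonov-regularized system with circulant C solves in O(n log n). A one-dimensional sketch with an invented periodic blur kernel (the paper's setting is two-dimensional and block circulant, used as a preconditioner rather than a direct solver):

```python
# Sketch: solve (C^T C + lam I) x = C^T b diagonally in the Fourier
# domain, where C is circulant with first column `kernel`.
import numpy as np

n = 64
lam = 1e-3                                # illustrative parameter
kernel = np.zeros(n)
kernel[[0, 1, -1]] = [0.6, 0.2, 0.2]      # invented periodic blur

rng = np.random.default_rng(3)
x_true = rng.standard_normal(n)
c_hat = np.fft.fft(kernel)                # eigenvalues of C
b = np.fft.ifft(c_hat * np.fft.fft(x_true)).real   # b = C @ x_true

x_tik = np.fft.ifft(np.conj(c_hat) * np.fft.fft(b)
                    / (np.abs(c_hat)**2 + lam)).real
```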
Regularization by truncated total least squares
 SIAM J. Sci. Comp
, 1997
Abstract
Cited by 39 (4 self)
Abstract. The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use of TLS for solving problems with very ill-conditioned coefficient matrices whose singular values decay gradually (so-called discrete ill-posed problems), where some regularization is necessary to stabilize the computed solution. We filter the solution by truncating the small singular values of the TLS matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose and test an iterative algorithm based on Lanczos bidiagonalization for computing truncated TLS solutions.
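A minimal sketch of the truncation idea, using the commonly quoted partitioned-V formula for the minimum-norm truncated TLS solution (our own illustration; the data and truncation level k are invented, and for k = n the formula reduces to classical TLS):

```python
# Sketch: truncated TLS. Keep the k largest singular values of [A  b]
# and form the minimum-norm solution from the discarded right singular
# vectors via x = -V12 V22^T / ||V22||^2.
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 25, 6, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
V = Vt.T                        # right singular vectors of [A  b]
V12 = V[:n, k:]                 # partition V = [[V11, V12], [V21, V22]]
V22 = V[n, k:]
x_ttls = -V12 @ V22 / (V22 @ V22)
```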
Choosing regularization parameters in iterative methods for ill-posed problems
 SIAM J. MATRIX ANAL. APPL
, 2001
Abstract
Cited by 34 (6 self)
Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite-dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller-dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples.
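A sketch of the project-then-regularize idea: k steps of Golub-Kahan (Lanczos) bidiagonalization produce a small (k+1) x k bidiagonal matrix B and a basis V, and Tikhonov regularization is applied only to the projected problem. The data, k, and the parameter are illustrative; this is not the authors' code.

```python
# Sketch: Golub-Kahan bidiagonalization (with full reorthogonalization
# for numerical safety), then Tikhonov on the small projected problem.
import numpy as np

rng = np.random.default_rng(5)
m, n, k, lam = 40, 10, 6, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

U = np.zeros((m, k + 1))
V = np.zeros((n, k))
alpha = np.zeros(k)
beta = np.zeros(k)
U[:, 0] = b / np.linalg.norm(b)
v = A.T @ U[:, 0]
for i in range(k):
    v -= V[:, :i] @ (V[:, :i].T @ v)          # reorthogonalize
    alpha[i] = np.linalg.norm(v)
    V[:, i] = v / alpha[i]
    u = A @ V[:, i] - alpha[i] * U[:, i]
    u -= U[:, :i + 1] @ (U[:, :i + 1].T @ u)  # reorthogonalize
    beta[i] = np.linalg.norm(u)
    U[:, i + 1] = u / beta[i]
    if i + 1 < k:
        v = A.T @ U[:, i + 1] - beta[i] * V[:, i]

B = np.zeros((k + 1, k))                      # lower bidiagonal
B[np.arange(k), np.arange(k)] = alpha
B[np.arange(1, k + 1), np.arange(k)] = beta

rhs = np.zeros(k + 1)
rhs[0] = np.linalg.norm(b)
y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
x_proj = V @ y            # regularized after the Krylov projection
```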
Non-Convergence of the L-Curve Regularization Parameter Selection Method
 Inverse Problems
, 1997
Abstract
Cited by 27 (0 self)
The L-curve method was developed for the selection of regularization parameters in the solution of discrete systems obtained from ill-posed problems. An analysis of this method is given for selecting a parameter for Tikhonov regularization. This analysis, which is carried out in a semi-discrete, semi-stochastic setting, shows that the L-curve approach yields regularized solutions which fail to converge for a certain class of problems. A numerical example is also presented which indicates that this lack of convergence can arise in practical applications.
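The L-curve itself is cheap to trace once the SVD is available; the sketch below computes its points (residual norm against solution norm) over a parameter grid on synthetic data. Picking the corner of this curve is the selection rule whose convergence the paper analyzes.

```python
# Sketch: trace (||A x_lam - b||, ||x_lam||) over a grid of Tikhonov
# parameters; log-log plotting these points gives the "L" shape.
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
coeffs = U.T @ b

lams = np.logspace(-4, 1, 50)
res_norm = np.empty(50)
sol_norm = np.empty(50)
for i, lam in enumerate(lams):
    x = Vt.T @ (s * coeffs / (s**2 + lam**2))   # Tikhonov solution
    res_norm[i] = np.linalg.norm(A @ x - b)
    sol_norm[i] = np.linalg.norm(x)
```

As the parameter grows, the residual norm rises and the solution norm falls, which is what gives the curve its two branches.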
Tikhonov Regularization for Large Scale Problems
, 1997
Abstract
Cited by 27 (1 self)
Tikhonov regularization is a powerful tool for the solution of ill-posed linear systems and linear least squares problems. The choice of the regularization parameter is a crucial step, and many methods have been proposed for this purpose. However, efficient and reliable methods for large-scale problems are still missing. In this paper approximation techniques based on the Lanczos algorithm and the theory of Gauss quadrature are proposed to reduce the computational complexity for large-scale problems. The new approach is applied to 5 different heuristics: Morozov's discrepancy principle, the Gfrerer/Raus method, the quasi-optimality criterion, generalized cross-validation, and the L-curve criterion. Numerical experiments are used to determine the efficiency and robustness of the various methods.
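As a concrete instance of the first heuristic listed, Morozov's discrepancy principle picks the parameter so that the residual matches the noise level; since the Tikhonov residual norm is monotone in the parameter, bisection suffices. The data below are synthetic, and the paper's contribution is precisely avoiding the exact small-scale solves used here via Lanczos and Gauss quadrature bounds.

```python
# Sketch: discrepancy principle by bisection in log(lam), choosing
# lam so that ||A x_lam - b|| matches the (assumed known) noise level.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b_exact = A @ x_true
e = rng.standard_normal(30)
e *= 0.05 * np.linalg.norm(b_exact) / np.linalg.norm(e)   # 5% noise
b = b_exact + e
delta = np.linalg.norm(e)

def residual(lam):
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(10), A.T @ b)
    return np.linalg.norm(A @ x - b)

lo, hi = 1e-8, 1e4            # bracket: residual(lo) < delta < residual(hi)
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
lam_star = np.sqrt(lo * hi)
```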
Fast CG-Based Methods for Tikhonov-Phillips Regularization
, 1997
Abstract
Cited by 22 (2 self)
Tikhonov-Phillips regularization is one of the best-known regularization methods for inverse problems. A posteriori criteria for determining the regularization parameter α require solving (A*A + αI)x = A*y^δ (*) for different values of α. We investigate two methods for accelerating the standard CG algorithm for solving the family of systems (*). The first one utilizes a stopping criterion for the CG iterations which depends on α and δ. The second method exploits the shifted structure of the linear systems (*), which allows (*) to be solved simultaneously for different values of α. We present numerical experiments for three test problems which illustrate the practical efficiency of the new methods. The experiments as well as theoretical considerations show that run times are accelerated by a factor of at least 3.
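The shifted structure exploited here can be sketched as follows: the matrices A^T A + alpha*I all share the same Krylov subspace, so a single Lanczos tridiagonalization with starting vector A^T y serves every shift alpha. Dense toy problem with full n steps; the shift values are invented.

```python
# Sketch: one Lanczos pass on M = A^T A (full reorthogonalization),
# then each shifted system (M + alpha I) x = A^T y is solved through
# the small tridiagonal T.
import numpy as np

rng = np.random.default_rng(8)
m, n, k = 30, 8, 8                 # k = n steps: the projection is exact
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)
M = A.T @ A
r = A.T @ y

Q = np.zeros((n, k))
alpha_d = np.zeros(k)
beta_d = np.zeros(k - 1)
beta0 = np.linalg.norm(r)
Q[:, 0] = r / beta0
for j in range(k):
    w = M @ Q[:, j]
    alpha_d[j] = Q[:, j] @ w
    w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
    if j + 1 < k:
        beta_d[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta_d[j]

T = np.diag(alpha_d) + np.diag(beta_d, 1) + np.diag(beta_d, -1)
e1 = np.zeros(k)
e1[0] = beta0
shifts = [1e-3, 1e-2, 1e-1, 1.0]               # invented alpha values
X = {a: Q @ np.linalg.solve(T + a * np.eye(k), e1) for a in shifts}
```

The Krylov work (all products with M) is done once, after which each additional shift costs only a small tridiagonal solve.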
A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems
 SISC
, 2000
Abstract
Cited by 21 (4 self)
We consider large-scale least squares problems where the coefficient matrix comes from the discretization of an operator in an ill-posed problem, and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a quadratically constrained least squares problem. This formulation is equivalent to Tikhonov regularization, and we note that it is also a special case of the trust-region subproblem from optimization. We analyze the trust-region subproblem in the regularization case, and we consider the nontrivial extensions of a recently developed method for general large-scale subproblems that will allow us to handle this case. The method relies on matrix-vector products only, has low and fixed storage requirements, and can handle the singularities arising in ill-posed problems. We present numerical results on test problems, on an
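A dense, small-scale sketch of the quadratically constrained formulation: minimize ||Ax - b|| subject to ||x|| <= Delta. When the constraint is active, the Lagrange multiplier lam solves the secular equation ||x(lam)|| = Delta with x(lam) = (A^T A + lam I)^{-1} A^T b, and monotonicity permits plain bisection. The paper's method is matrix-free; the data and radius here are invented.

```python
# Sketch: trust-region view of regularization, solving the secular
# equation ||x(lam)|| = Delta by bisection on the multiplier.
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((25, 8))
b = rng.standard_normal(25)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
Delta = 0.5 * np.linalg.norm(x_ls)        # radius chosen so the constraint binds

def x_of(lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)

lo, hi = 0.0, 1.0
while np.linalg.norm(x_of(hi)) > Delta:   # grow the bracket
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.linalg.norm(x_of(mid)) > Delta:
        lo = mid
    else:
        hi = mid
x_reg = x_of(0.5 * (lo + hi))
```

The multiplier found this way plays exactly the role of the Tikhonov parameter, which is the equivalence the abstract notes.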