Results 1–10 of 51
A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems
SISC, 2000
"... We consider largescale least squares problems where the coefficient matrix comes from the discretization of an operator in an illposed problem, and the righthand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effe ..."
Cited by 30 (8 self)
We consider large-scale least squares problems where the coefficient matrix comes from the discretization of an operator in an ill-posed problem, and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a quadratically constrained least squares problem. This formulation is equivalent to Tikhonov regularization, and we note that it is also a special case of the trust-region subproblem from optimization. We analyze the trust-region subproblem in the regularization case, and we consider the nontrivial extensions of a recently developed method for general large-scale subproblems that will allow us to handle this case. The method relies on matrix-vector products only, has low and fixed storage requirements, and can handle the singularities arising in ill-posed problems. We present numerical results on test problems, on an...
Regularized Total Least Squares Based on Quadratic Eigenvalue Problem Solvers
BIT, 2004
"... This paper presents a new computational approach for solving the Regularized Total Least Squares problem. The problem is formulated by adding a quadratic constraint to the Total Least Square minimization problem. Starting from the fact that a quadratically constrained Least Squares problem can be so ..."
Cited by 27 (2 self)
This paper presents a new computational approach for solving the Regularized Total Least Squares problem. The problem is formulated by adding a quadratic constraint to the Total Least Squares minimization problem. Starting from the fact that a quadratically constrained Least Squares problem can be solved via a quadratic eigenvalue problem, an iterative procedure for solving the Regularized Total Least Squares problem based on quadratic eigenvalue problems is presented. Discrete ill-posed problems are used as simulation examples in order to numerically validate the method.
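The computational kernel this abstract relies on, solving a quadratic eigenvalue problem (QEP), can be sketched as follows (an assumed toy linearization, not the authors' RTLS iteration): the QEP (lam^2 M + lam C + K) v = 0 is reduced to a 2n x 2n generalized eigenproblem.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(M, C, K):
    """Solve (lam^2 M + lam C + K) v = 0 by companion linearization."""
    n = M.shape[0]
    Z0, I = np.zeros((n, n)), np.eye(n)
    # With w = lam * v:  [0 I; -K -C][v; w] = lam [I 0; 0 M][v; w]
    lhs = np.block([[Z0, I], [-K, -C]])
    rhs = np.block([[I, Z0], [Z0, M]])
    lam, Z = eig(lhs, rhs)
    return lam, Z[:n, :]            # 2n eigenvalues and the v-parts

# Assumed toy coefficient matrices (diagonal, so the QEP decouples).
rng = np.random.default_rng(1)
n = 4
M = np.eye(n)
C = np.diag(rng.uniform(0.1, 1.0, n))
K = np.diag(rng.uniform(1.0, 2.0, n))
lam, Vv = solve_qep(M, C, K)

# Each pair (lam_i, v_i) should make the QEP residual (near) zero.
residual = max(
    np.linalg.norm((lam[i] ** 2 * M + lam[i] * C + K) @ Vv[:, i])
    / np.linalg.norm(Vv[:, i])
    for i in range(2 * n)
)
```

The RTLS method in the paper wraps an iteration around QEP solves of this kind; the linearization above is only the standard dense building block.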
Recycling Subspace Information for Diffuse Optical Tomography
SIAM J. Sci. Comput., 2004
"... We discuss the efficient solution of a large sequence of slowly varying linear systems arising in computations for diffuse optical tomographic imaging. In particular, we analyze a number of strategies for recycling Krylov subspace information for the most efficient solution. We reconstruct threedim ..."
Cited by 23 (4 self)
We discuss the efficient solution of a large sequence of slowly varying linear systems arising in computations for diffuse optical tomographic imaging. In particular, we analyze a number of strategies for recycling Krylov subspace information for the most efficient solution. We reconstruct three-dimensional...
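A much simpler cousin of subspace recycling, warm-starting each solve from the previous solution, already shows why reusing information across a sequence of slowly varying systems pays off (a toy sketch with assumed SPD systems, not the authors' recycling strategies):

```python
import numpy as np

def cg_solve(A, b, x0, tol=1e-8, maxiter=1000):
    """Plain conjugate gradients; returns (solution, iteration count)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    k = 0
    while np.sqrt(rs) > tol * nb and k < maxiter:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        k += 1
    return x, k

rng = np.random.default_rng(6)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.linspace(1.0, 100.0, n)          # moderate condition number (assumed)
b = rng.standard_normal(n)

A0 = Q @ np.diag(d) @ Q.T
x_prev, cold = cg_solve(A0, b, np.zeros(n))
A1 = Q @ np.diag(1.001 * d) @ Q.T       # slowly varying next system
x_warm, warm = cg_solve(A1, b, x_prev)  # reuse the previous solution
x_cold, cold2 = cg_solve(A1, b, np.zeros(n))
```

Recycling whole Krylov subspaces, as the paper does, carries over more than the single previous iterate, but the warm start already cuts the iteration count.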
Near-Optimal Parameters for Tikhonov and Other Regularization Methods
2001
"... Choosing the regularization parameter for an illposed problem is an art based on good heuristics and prior knowledge of the noise in the observations. In this work, we propose choosing the parameter, without a priori information, by approximately minimizing the distance between the true solution to ..."
Cited by 21 (1 self)
Choosing the regularization parameter for an ill-posed problem is an art based on good heuristics and prior knowledge of the noise in the observations. In this work, we propose choosing the parameter, without a priori information, by approximately minimizing the distance between the true solution to the discrete problem and the family of regularized solutions. We demonstrate the usefulness of this approach for Tikhonov regularization and for an alternate family of solutions. Further, we prove convergence of the regularization parameter to zero as the standard deviation of the noise goes to zero.
An interior-point trust-region-based method for large-scale nonnegative regularization
Inverse Problems, 2002
"... Abstract We present a new method for solving largescale quadratic problems with quadratic and nonnegativity constraints. Such problems arise for example in the regularization of illposed problems in image restoration where, in addition, some of the matrices involved are very illconditioned. The n ..."
Cited by 17 (3 self)
We present a new method for solving large-scale quadratic problems with quadratic and nonnegativity constraints. Such problems arise for example in the regularization of ill-posed problems in image restoration where, in addition, some of the matrices involved are very ill-conditioned. The new method uses recently developed techniques for the large-scale trust-region subproblem.
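As a stand-in for the interior-point trust-region method itself, here is a minimal projected-gradient sketch for the model problem min ||Ax - b||^2 + lam ||x||^2 subject to x >= 0 (all data synthetic and assumed; a different, far simpler algorithm than the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 30
A = rng.uniform(0.0, 1.0, (m, n))              # nonnegative, blur-like matrix
x_true = np.maximum(rng.standard_normal(n), 0.0)
b = A @ x_true + 1e-3 * rng.standard_normal(m)
lam = 1e-3

L = np.linalg.norm(A, 2) ** 2 + lam            # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b) + lam * x         # grad of 0.5||Ax-b||^2 + 0.5*lam*||x||^2
    x = np.maximum(x - grad / L, 0.0)          # gradient step, then project onto x >= 0

feasible = bool(np.all(x >= 0))
res_drop = np.linalg.norm(A @ x - b) < np.linalg.norm(b)
```

Projected gradient honors the nonnegativity constraint at every iterate; interior-point trust-region methods like the paper's converge much faster on ill-conditioned problems.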
On Lanczos-based methods for the regularization of discrete ill-posed problems
BIT, 2001
"... ..."
Arnoldi-Tikhonov regularization methods
2008
"... Tikhonov regularization for largescale linear illposed problems is commonly implemented by determining a partial Lanczos bidiagonalization of the matrix of the given system of equations. This paper explores the possibility of instead computing a partial Arnoldi decomposition of the given matrix. C ..."
Cited by 11 (8 self)
Tikhonov regularization for large-scale linear ill-posed problems is commonly implemented by determining a partial Lanczos bidiagonalization of the matrix of the given system of equations. This paper explores the possibility of instead computing a partial Arnoldi decomposition of the given matrix. Computed examples illustrate that this approach may require fewer matrix-vector product evaluations and, therefore, less arithmetic work. Moreover, the proposed range-restricted Arnoldi-Tikhonov regularization method does not require the adjoint matrix and, hence, is convenient to use for problems for which the adjoint is difficult to evaluate.
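A minimal sketch of the Arnoldi-based approach (assumed toy data; not the authors' range-restricted variant): build a k-step Arnoldi decomposition A V_k = V_{k+1} H_k using only products with A (never A^T), then apply Tikhonov regularization to the small projected problem min_y ||H_k y - beta e_1||^2 + lam ||y||^2 and set x_k = V_k y.

```python
import numpy as np

def arnoldi(A, b, k):
    """k Arnoldi steps: A @ V[:, :k] = V @ H, with H (k+1) x k Hessenberg."""
    n = b.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

# Assumed symmetric ill-conditioned test matrix with a smooth true solution.
rng = np.random.default_rng(4)
n = 80
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(0.7 ** np.arange(n)) @ Q.T
x_true = Q[:, :5] @ np.ones(5) / np.sqrt(5.0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

k, lam = 10, 1e-6
V, H, beta = arnoldi(A, b, k)
rhs = np.zeros(k + 1)
rhs[0] = beta                                  # b = beta * V[:, 0]
y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ rhs)
x_k = V[:, :k] @ y
rel_err = np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true)
```

Each step costs one product with A; a Lanczos bidiagonalization would need one product with A and one with A^T per step, which is the saving the abstract points to.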
Noise Propagation in Regularizing Iterations for Image Deblurring
"... Abstract. We use the twodimensional discrete cosine transform to study how the noise from the data enters the reconstructed images computed by regularizing iterations, that is, Krylov subspace methods applied to discrete illposed problems. The regularization in these methods is obtained via the pr ..."
Cited by 10 (0 self)
We use the two-dimensional discrete cosine transform to study how the noise from the data enters the reconstructed images computed by regularizing iterations, that is, Krylov subspace methods applied to discrete ill-posed problems. The regularization in these methods is obtained via the projection onto the associated Krylov subspace. We focus on CGLS/LSQR, GMRES, and RRGMRES, as well as MINRES and MR-II in the symmetric case. Our analysis shows that the noise enters primarily in the form of band-pass filtered white noise, which appears as "freckles" in the reconstructions, and these artifacts are present in both the signal and the noise components of the solutions. We also show why GMRES and MINRES are not suited for image deblurring.
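The band-pass character of the noise can be mimicked in a toy experiment (assumed setup, not the paper's deblurring analysis): filter white noise to a mid-frequency annulus in the 2-D DCT domain. Because the orthonormal DCT preserves energy (Parseval), the filtered image's energy equals that of the retained coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(7)
N = 64
noise = rng.standard_normal((N, N))              # white-noise "image"
coeffs = dctn(noise, norm='ortho')               # orthonormal 2-D DCT

# Keep only a mid-frequency annulus of DCT coefficients (the band-pass part).
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
radius = np.hypot(i, j)
band = (radius >= N / 4) & (radius < N / 2)
freckles = idctn(np.where(band, coeffs, 0.0), norm='ortho')

kept_energy = np.linalg.norm(coeffs[band])       # energy inside the band
total_energy = np.linalg.norm(noise)
```

In the paper's setting the Krylov projection itself acts as such a band-pass filter on the noise, which is what produces the "freckle" artifacts.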
A Weighted-GCV Method for Lanczos-Hybrid Regularization
2008
"... Lanczoshybrid regularization methods have been proposed as effective approaches for solving largescale illposed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semiconvergence behavior, in that the quality of the solution first incre ..."
Cited by 9 (1 self)
Lanczos-hybrid regularization methods have been proposed as effective approaches for solving large-scale ill-posed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semiconvergence behavior, in that the quality of the solution first increases and then decreases. Hybrid methods apply a standard regularization technique, such as Tikhonov regularization, to the projected problem at each iteration. Thus, regularization in hybrid methods is achieved both by Krylov filtering and by appropriate choice of a regularization parameter at each iteration. In this paper we describe a weighted generalized cross validation (WGCV) method for choosing the parameter. Using this method we demonstrate that the semiconvergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.
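A small SVD-based sketch of the (weighted) GCV function for Tikhonov regularization on a dense problem (assumed synthetic data; the paper applies WGCV to the Lanczos-projected problem at each iteration), with omega = 1 recovering standard GCV:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.75 ** np.arange(n)                 # assumed ill-conditioned spectrum
A = U @ np.diag(s) @ V.T
x_true = V[:, :3] @ np.ones(3)           # lives in the well-conditioned modes
b = A @ x_true + 1e-3 * rng.standard_normal(n)
beta = U.T @ b                           # data in the left singular basis

def wgcv(lam, omega=1.0):
    """Weighted-GCV function; omega = 1 is standard GCV."""
    f = s ** 2 / (s ** 2 + lam)                  # Tikhonov filter factors
    resid2 = np.sum(((1.0 - f) * beta) ** 2)
    return n * resid2 / (n - omega * np.sum(f)) ** 2

def x_lam(lam):
    return V @ (s / (s ** 2 + lam) * beta)       # Tikhonov solution via SVD

lams = np.logspace(-10, 0, 80)
lam_gcv = lams[int(np.argmin([wgcv(l) for l in lams]))]
err_gcv = np.linalg.norm(x_lam(lam_gcv) - x_true)
err_none = np.linalg.norm(x_lam(1e-12) - x_true)     # essentially unregularized
```

Choosing omega < 1 weights the trace term down, which is the paper's remedy when standard GCV undersmooths the projected problems.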