Results 1 - 9 of 9
A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems
SISC, 2000
Cited by 21 (4 self)

Abstract
We consider large-scale least squares problems where the coefficient matrix comes from the discretization of an operator in an ill-posed problem, and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a quadratically constrained least squares problem. This formulation is equivalent to Tikhonov regularization, and we note that it is also a special case of the trust-region subproblem from optimization. We analyze the trust-region subproblem in the regularization case, and we consider the nontrivial extensions of a recently developed method for general large-scale subproblems that will allow us to handle this case. The method relies on matrix-vector products only, has low and fixed storage requirements, and can handle the singularities arising in ill-posed problems. We present numerical results on test problems, on an ...
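The equivalence with Tikhonov regularization mentioned in this abstract can be seen on a toy problem. The sketch below (my own 3x2 example and parameter choices, not the paper's large-scale matrix-vector method) solves the damped normal equations (AᵀA + λ²I)x = Aᵀb directly; a growing λ plays the role of a shrinking trust-region radius and damps the noise-amplified component.

```python
def solve2x2(M, r):
    """Solve the 2x2 linear system M x = r by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(r[0]*M[1][1] - M[0][1]*r[1]) / det,
            (M[0][0]*r[1] - r[0]*M[1][0]) / det]

def tikhonov(A, b, lam):
    """Return argmin ||Ax - b||^2 + lam^2 ||x||^2 for a 2-column A."""
    AtA = [[sum(A[k][i]*A[k][j] for k in range(len(A)))
            + (lam*lam if i == j else 0.0) for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i]*b[k] for k in range(len(A))) for i in range(2)]
    return solve2x2(AtA, Atb)

# Nearly rank-deficient toy operator: second "singular value" is 1e-3.
A = [[1.0, 0.0], [0.0, 1e-3], [0.0, 0.0]]
b = [1.0, 1.0, 0.1]           # right-hand side contaminated by noise

naive = tikhonov(A, b, 0.0)   # unregularized: noise blown up to ~1000
reg   = tikhonov(A, b, 0.1)   # regularized: the noisy component is damped
print(naive, reg)
```

The unregularized solution divides the noisy data component by the tiny singular value; the regularized one keeps the well-determined component nearly intact while suppressing the other.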
Resolution properties of regularized image reconstruction methods
Dept. of EECS, Univ. of Michigan, Ann Arbor, MI, 1995
Cited by 18 (12 self)

Abstract
This paper examines the spatial resolution properties of penalized-likelihood image reconstruction methods by analyzing the local impulse response. The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems. Paradoxically, for emission image reconstruction the local resolution is generally poorest in high-count regions. We show that the linearized local impulse response induced by quadratic roughness penalties depends on the object only through its projections. This analysis leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution. The modified penalty also provides a very practical method for choosing the regularization parameter to obtain a specified resolution in images reconstructed by penalized-likelihood methods.
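The local impulse response under a quadratic roughness penalty can be illustrated in a few lines. This is a hedged 1D sketch under strong simplifications of my own (an idealized system with AᵀA = I and a first-difference penalty R = DᵀD, not the paper's tomographic model): for a linear penalized reconstruction, the response to a unit impulse at pixel j is l_j = (AᵀA + βR)⁻¹AᵀAe_j, and increasing β spreads it onto neighboring pixels.

```python
def solve(M, r):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    M = [row[:] + [r[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f*v for a, v in zip(M[i], M[c])]
    x = [0.0]*n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j]*x[j] for j in range(i + 1, n))) / M[i][i]
    return x

n = 5
AtA = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # idealized A^T A = I
R = [[0.0]*n for _ in range(n)]          # first-difference roughness penalty R = D^T D
for i in range(n - 1):
    R[i][i] += 1.0; R[i+1][i+1] += 1.0
    R[i][i+1] -= 1.0; R[i+1][i] -= 1.0

def impulse_response(beta, j=2):
    """Reconstruction (A^T A + beta R)^{-1} A^T A e_j of a unit impulse at pixel j."""
    H = [[AtA[i][k] + beta*R[i][k] for k in range(n)] for i in range(n)]
    return solve(H, [AtA[i][j] for i in range(n)])

sharp = impulse_response(0.0)    # no penalty: the impulse comes back exactly
blurred = impulse_response(1.0)  # penalty spreads the response onto neighbors
print(sharp, blurred)
```

With β = 0 the response is the impulse itself; with β = 1 the peak drops below one and mass leaks to the neighboring pixels, i.e. resolution is traded for smoothness.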
A Large-Scale Trust-Region Approach to the Regularization of Discrete Ill-Posed Problems
RICE UNIVERSITY, 1998
Cited by 12 (4 self)

Abstract
We consider the problem of computing the solution of large-scale discrete ill-posed problems when there is noise in the data. These problems arise in important areas such as seismic inversion, medical imaging, and signal processing. We pose the problem as a quadratically constrained least squares problem and develop a method for the solution of such problems. Our method does not require factorization of the coefficient matrix, has very low storage requirements, and handles the high degree of singularity arising in discrete ill-posed problems. We present numerical results on test problems and an application of the method to a practical problem with real data.
Optimization and Regularization of Nonlinear Least Squares Problems
1996
Cited by 10 (2 self)

Abstract
An important branch in scientific computing is parameter estimation. Given a mathematical model and observation data, parameters are sought to explain physical properties as well as possible. In order to find these parameters an optimization problem is often formed, frequently a nonlinear least squares problem. This thesis mainly contributes to the development of tools, techniques, and theories for nonlinear least squares problems that lack a well-defined solution. Specifically, the intention is to generalize regularization methods for linear inverse problems to also handle nonlinear inverse problems. The investigation started by considering an exactly rank-deficient problem, i.e., a problem with a dependency among the parameters. It turns out that such a problem can be formulated as a nonlinear minimum norm problem. To solve this optimization problem two regularization methods are proposed: a Gauss-Newton Tikhonov regularized method and a minimum norm Gauss-Newton method. It is shown t...
Regularization Methods for Nonlinear Least Squares Problems. Part I: Exact Rank-Deficiency
1998
Cited by 9 (5 self)

Abstract
An optimization problem that does not have a unique local minimum is often very difficult to solve. For a nonlinear least squares problem this is the case when the Jacobian is rank-deficient in a neighborhood of a local minimum. Moreover, a Gauss-Newton method such as Levenberg-Marquardt will have very slow convergence for such a problem. We analyze these problems where the Jacobian is rank-deficient and suggest other problem formulations more suitable for Gauss-Newton methods. The two methods we propose are a truncated Gauss-Newton method and a Gauss-Newton method based on the Tikhonov regularized nonlinear least squares problem. We test the methods on artificial problems where the rank of the Jacobian and the nonlinearity of the problem may be chosen, making it possible to show the different features of the problem and the methods. The conclusion from the analysis and the tests is that the two methods have similar local convergence properties. The method based on Tikhonov regulari...
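The effect of Tikhonov regularization on a Gauss-Newton iteration can be sketched on a deliberately rank-deficient toy residual (my own construction, not one of the paper's test problems): here r depends only on s = x0 + x1, so the Jacobian has rank 1 everywhere, yet the damped normal equations (JᵀJ + μI)d = −Jᵀr remain solvable at every iterate.

```python
def residual(x):
    s = x[0] + x[1]
    return [s - 1.0, 0.5*(s*s - 1.0)]   # both components depend only on s

def jacobian(x):
    s = x[0] + x[1]
    return [[1.0, 1.0], [s, s]]         # rank 1 for every x

def gn_tikhonov(x, mu=1e-2, iters=20):
    """Gauss-Newton with Tikhonov-damped steps (J^T J + mu I) d = -J^T r."""
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        a = J[0][0]*J[0][0] + J[1][0]*J[1][0] + mu   # damped J^T J entries
        b = J[0][0]*J[0][1] + J[1][0]*J[1][1]
        d = J[0][1]*J[0][1] + J[1][1]*J[1][1] + mu
        g0 = -(J[0][0]*r[0] + J[1][0]*r[1])          # -J^T r
        g1 = -(J[0][1]*r[0] + J[1][1]*r[1])
        det = a*d - b*b
        x = [x[0] + (g0*d - b*g1)/det, x[1] + (a*g1 - b*g0)/det]
    return x

x = gn_tikhonov([2.0, -0.5])
print(x)   # x0 + x1 -> 1 (a solution), while x0 - x1 keeps its initial value
```

An undamped Gauss-Newton step is not even defined here (JᵀJ is singular); the damped iteration converges to a point on the solution manifold s = 1, moving both components equally because the two Jacobian columns are identical.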
Regularization of Ill-Posed Problems by Envelope Guided Conjugate Gradients
1997
Cited by 6 (3 self)

Abstract
We propose a new way to iteratively solve large-scale ill-posed problems by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a low number of iterations. We apply the technique to an idealized image reconstruction problem in positron emission tomography.

Keywords: Tikhonov regularization, multiobjective optimization, ill-posed, L-curve, envelope, preconditioned conjugate gradients

1991 MSC Classification: primary 65F10, secondary 65R30, 90C29

1 Introduction

Many problems in applied mathematics lead to models of the form F(x) = y + ε, where x is an unknown vector of parameters, often restricted to a subset Ω ⊆ ℝⁿ (e.g., by nonnegativity con...
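A minimal illustration of the L-curve idea (a toy diagonal operator of my own choosing, solved in closed form rather than by the paper's envelope-guided CG iteration): sweep the regularization parameter λ, record the residual norm against the solution norm, and look for the corner of the resulting log-log curve.

```python
import math

sigma = [1.0, 1e-2]      # "singular values" of a toy diagonal operator
b = [1.0, 1e-3]          # second data component is dominated by noise

def tikhonov_diag(lam):
    """Closed-form Tikhonov solution x_i = sigma_i b_i / (sigma_i^2 + lam^2)."""
    x = [s*bi / (s*s + lam*lam) for s, bi in zip(sigma, b)]
    res = math.sqrt(sum((s*xi - bi)**2 for s, xi, bi in zip(sigma, x, b)))
    return x, res, math.sqrt(sum(xi*xi for xi in x))

# One point of the L-curve per lambda: small lambda gives small residual but
# large solution norm; large lambda the opposite. The corner balances the two.
for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    _, res, xnorm = tikhonov_diag(lam)
    print(f"lam={lam:8.0e}  ||Ax-b||={res:10.3e}  ||x||={xnorm:10.3e}")
```

Plotting log‖x‖ against log‖Ax − b‖ over the sweep traces the L shape; the paper's contribution is obtaining such approximations cheaply inside a single preconditioned CG run instead of one solve per λ.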
PET Regularization by Envelope Guided Conjugate Gradients
1996
Cited by 5 (3 self)

Abstract
We propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
A very fast Implementation of 2D Iterative Reconstruction Algorithms
1996
Cited by 3 (1 self)

Abstract
Introduction

In the last 15 years the iterative reconstruction methods have gained much attention in the literature [1]. Several methods have been very prominent, such as EM (Expectation Maximization) [2, 3], ART (Algebraic Reconstruction Technique) [4], and LSCG (Least Squares Conjugate Gradient) [5]. Each of the methods formulates the reconstruction problem as a linear set of equations b = Ax, i.e., b_i = Σ_{j=1}^{J} a_{i,j} x_j, where b is an I-dimensional vector containing the known sinogram values wrapped into a vector, and x is a J-dimensional vector containing the unknown image to be reconstructed. The matrix A is the system matrix, which contains the weight factors between each of the image pixels and each of the sinogram values, corresponding to line orientations. One problem i...
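ART, one of the methods named above, can be stated in a few lines. The sketch below runs Kaczmarz sweeps on a made-up 3x3 system (a real system matrix A would hold the ray/pixel weight factors; this one is arbitrary but nonsingular, so the sweeps converge to the exact image).

```python
A = [[2.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 2.0]]
x_true = [1.0, 2.0, 3.0]
b = [sum(a*t for a, t in zip(row, x_true)) for row in A]   # exact "sinogram"

x = [0.0, 0.0, 0.0]
for sweep in range(500):                  # full passes over the rows
    for row, bi in zip(A, b):
        # Project x onto the hyperplane of row i: x += (b_i - a_i.x)/||a_i||^2 * a_i
        dot = sum(a*xi for a, xi in zip(row, x))
        scale = (bi - dot) / sum(a*a for a in row)
        x = [xi + scale*a for xi, a in zip(x, row)]
print(x)   # converges to x_true = [1, 2, 3]
```

Each update touches only one row of A, which is what makes ART attractive when A is huge and sparse; with noisy, inconsistent data the iteration is typically relaxed or stopped early instead of run to convergence.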
Brain Mapping by Positron Emission Tomography
1996
Abstract
In this paper the basis of PET (Positron Emission Tomography) is reviewed, and it is shown that the measured signals can be modelled as the Radon transform of the desired spatial distribution of, e.g., the brain activity. Next, two of the direct reconstruction methods are presented. Both are derived from inversion of the Radon transform. It is shown that the reconstruction can be based on filtering and integration techniques. Another major class of reconstruction techniques is presented, namely the linear algebra based methods, which often are formed as iterative methods. A very fast way of implementing a set of iterative reconstruction techniques is shown along with a set of examples.

1.1 Introduction to PET

The PET scanner is based on radioactive tracers. A small dosage of a β⁺ emitter, such as O-15 or C-11, is injected into the patient (or the object to be scanned). The β⁺ emitter will be distributed in the tissue due to the blood circulation. If for instance the brain is t...