Results 11-20 of 403
Bayesian wavelet-based image deconvolution: A GEM algorithm exploiting a class of heavy-tailed priors
 IEEE Trans. Image Process
, 2006
Cited by 54 (10 self)
Abstract—Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the “nonnegative garrote” thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.
Index Terms—Bayesian, deconvolution, expectation maximization (EM), generalized expectation maximization (GEM), Gaussian scale mixtures (GSM), heavy-tailed priors, wavelet.
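The nonnegative garrote rule mentioned in the abstract is easy to state concretely. A minimal sketch (the rule itself, not the paper's GEM algorithm): each noisy coefficient y is mapped to y(1 - λ²/y²)₊, so coefficients below the threshold λ vanish while large coefficients are shrunk only slightly.

```python
import numpy as np

def nn_garrote(y, lam):
    """Nonnegative garrote denoising rule: y * (1 - lam^2 / y^2)_+ .
    Coefficients with |y| <= lam are zeroed; large coefficients are
    barely shrunk, the heavy-tailed behaviour behind the "garrote prior"."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    big = np.abs(y) > lam
    out[big] = y[big] * (1.0 - (lam / y[big]) ** 2)
    return out

# Small coefficients are killed; large ones survive almost unshrunk.
print(nn_garrote([-3.0, -0.5, 0.2, 1.0, 4.0], lam=1.0))
```

Note how 4.0 maps to 3.75 (mild shrinkage) while everything with magnitude at most 1.0 maps to exactly zero, unlike soft thresholding, which would also subtract λ from the large coefficients.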
Adaptively Preconditioned GMRES Algorithms
 SIAM J. Sci. Comput
Cited by 53 (2 self)
The restarted GMRES algorithm proposed by Saad and Schultz [22] is one of the most popular iterative methods for the solution of large linear systems of equations Ax = b with a nonsymmetric and sparse matrix. This algorithm is particularly attractive when a good preconditioner is available. The present paper describes two new methods for determining preconditioners from spectral information gathered by the Arnoldi process during iterations by the restarted GMRES algorithm. These methods seek to determine an invariant subspace of the matrix A associated with eigenvalues close to the origin, and move these eigenvalues so that a higher rate of convergence of the iterative methods is achieved.
Key words. iterative method, nonsymmetric linear system, Arnoldi process
AMS subject classifications. 65F10
1. Introduction. Many problems in Applied Mathematics and Engineering give rise to very large linear systems of equations Ax = b, A ∈ R^{n×n}, x, b ∈ R^n (1.1), with a sparse nons...
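The effect of moving eigenvalues away from the origin can be illustrated with a toy construction (a hypothetical sketch, not the paper's exact algorithm). For a diagonal matrix the invariant subspace W of the smallest eigenvalues is known exactly; in the paper it would be harvested from the Arnoldi process during restarted GMRES.

```python
import numpy as np

# A has three eigenvalues close to the origin, which stall restarted GMRES.
n = 50
d = np.concatenate([[1e-3, 2e-3, 5e-3], np.linspace(1.0, 2.0, n - 3)])
A = np.diag(d)
W = np.eye(n)[:, :3]          # invariant subspace of the small eigenvalues
lam = d[:3]

def apply_Minv(V):
    """M^{-1} V = V + W (Lambda^{-1} - I) W^T V maps the small eigenvalues to 1."""
    return V + W @ ((1.0 / lam - 1.0)[:, None] * (W.T @ V))

eigs = np.sort(np.linalg.eigvals(apply_Minv(A)).real)
print(eigs[:4])               # the three tiny eigenvalues are now at 1
```

After preconditioning, the spectrum of M⁻¹A lies entirely in [1, 2], so a Krylov method converges at the rate set by the well-separated cluster rather than by the near-null eigenvalues.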
A Computationally Efficient Superresolution Image Reconstruction Algorithm
, 2000
Cited by 52 (4 self)
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large-scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward-looking infrared (FLIR) camera image sequence.
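A 1-D sketch of the circulant preconditioning idea, under simplifying assumptions: with a periodic blur H, the Tikhonov normal-equations operator HᵀH + αI is diagonalized by the FFT, so the preconditioner costs two FFTs and a diagonal scaling. In actual super-resolution H also decimates, the block circulant M is only approximate, and it is applied inside conjugate gradients.

```python
import numpy as np

n = 128
psf = np.zeros(n); psf[:5] = 0.2                 # 5-tap moving-average blur
alpha = 1e-2                                     # Tikhonov parameter
h_hat = np.fft.fft(psf)

def normal_op(x):
    """(H^T H + alpha I) x applied via the FFT for a periodic blur H."""
    return np.real(np.fft.ifft((np.abs(h_hat) ** 2 + alpha) * np.fft.fft(x)))

def precond(r):
    """Circulant preconditioner M^{-1} r: two FFTs plus a diagonal scaling."""
    return np.real(np.fft.ifft(np.fft.fft(r) / (np.abs(h_hat) ** 2 + alpha)))

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
print(np.allclose(precond(normal_op(x)), x))     # exact in this periodic case
```

Because M here equals the normal operator exactly, preconditioned CG would converge in one step; the interesting regime of the paper is when decimation makes M only an approximation.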
Conjugate-Gradient Preconditioning Methods for Shift-Variant PET Image Reconstruction
 IEEE Trans. Image Process
, 2002
Cited by 51 (21 self)
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
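A crude numerical stand-in for the shift-variant situation (a toy, not the paper's PET model): a weighted least-squares Hessian H = AᵀWA with nonuniform Poisson-like weights and badly scaled columns. Even plain Jacobi (diagonal) preconditioning already shrinks the condition number CG sees; the paper's preconditioners improve further by also capturing off-diagonal Hessian structure.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 400, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
A *= np.logspace(-2, 2, n)                 # badly scaled columns
w = rng.uniform(1e-2, 1e2, size=m)         # nonuniform noise variance (Poisson-like)
H = A.T @ (w[:, None] * A)                 # weighted least-squares Hessian

d = np.einsum('ij,i,ij->j', A, w, A)       # diag(A^T W A) without forming H
Hp = H / np.sqrt(np.outer(d, d))           # D^{-1/2} H D^{-1/2}, Jacobi-scaled

print(f"cond(H)  = {np.linalg.cond(H):.2e}")
print(f"cond(Hp) = {np.linalg.cond(Hp):.2e}")
```

CG's iteration count grows roughly like the square root of the condition number, so the reduction shown here translates directly into faster convergence.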
Robust approximate inverse preconditioning for the conjugate gradient method
 SIAM J. Sci. Comput
, 2000
Cited by 48 (11 self)
We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based on the idea of diagonally compensated reduction of matrix entries. The results of numerical tests on challenging linear systems arising from finite element modeling of elasticity and diffusion problems are presented.
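The A-biconjugation process underlying AINV can be sketched densely (no dropping, so this toy reproduces the exact inverse): build a unit upper triangular Z with ZᵀAZ = diag(p), giving A⁻¹ = Z diag(1/p) Zᵀ. The sparse preconditioner drops small entries of Z as it goes; the robustness question the abstract addresses is keeping the pivots p positive under that dropping.

```python
import numpy as np

def ainv_factors(A):
    """Dense A-biconjugation: unit upper triangular Z and pivots p
    with Z^T A Z = diag(p), hence inv(A) = Z @ diag(1/p) @ Z.T.
    For SPD A all pivots are positive, so no breakdown occurs."""
    n = A.shape[0]
    Z = np.eye(n)
    p = np.zeros(n)
    for i in range(n):
        for j in range(i):
            # Make column i A-conjugate to the earlier columns.
            Z[:, i] -= (A[i] @ Z[:, j]) / p[j] * Z[:, j]
        p[i] = A[i] @ Z[:, i]
    return Z, p

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)                 # SPD test matrix
Z, p = ainv_factors(A)
M = Z @ np.diag(1.0 / p) @ Z.T              # factorized approximate inverse
print(np.allclose(M @ A, np.eye(6)))        # exact here, since nothing was dropped
```

Applying M inside PCG only needs the two triangular factors, so the sparse version never forms or inverts anything explicitly.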
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl
, 2007
Cited by 48 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
ARMS: An Algebraic Recursive Multilevel Solver for general sparse linear systems
 Numer. Linear Alg. Appl
, 1999
Cited by 46 (24 self)
This paper presents a general preconditioning method based on a multilevel partial solution approach. The basic step in constructing the preconditioner is to separate the initial points into two subsets. The first subset, which can be termed "coarse", is obtained by using "block" independent sets, or "aggregates". Two aggregates have no coupling between them, but nodes in the same aggregate may be coupled. The nodes not in the coarse set are part of what might be called the "fringe" set. The idea of the method is to form the Schur complement related to the fringe set. This leads to a natural block LU factorization which can be used as a preconditioner for the system. This system is then solved recursively, using as preconditioner the factorization that could be obtained from the next level. Unlike other multilevel preconditioners available, iterations between levels are allowed. One interesting aspect of the method is that it provides a common framework for many other technique...
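One level of the block LU construction can be sketched in a few lines (a dense toy with no dropping, so the "preconditioner" is exact): order the coarse unknowns first as block B, form the Schur complement S = C - E B⁻¹F of the fringe set, and apply the factorization A = [B F; E C] as a forward/backward block solve. ARMS would apply the same construction recursively to S.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 5                                   # k coarse + (n - k) fringe unknowns
A = rng.standard_normal((n, n)) + n * np.eye(n)
B, F = A[:k, :k], A[:k, k:]
E, C = A[k:, :k], A[k:, k:]
S = C - E @ np.linalg.solve(B, F)             # Schur complement of the fringe set

def block_lu_solve(b):
    """Solve A x = b using the block LU factorization A = [B F; E C]."""
    b1, b2 = b[:k], b[k:]
    y2 = b2 - E @ np.linalg.solve(B, b1)      # forward elimination of coarse block
    x2 = np.linalg.solve(S, y2)               # fringe solve (recursive level in ARMS)
    x1 = np.linalg.solve(B, b1 - F @ x2)      # back substitution
    return np.concatenate([x1, x2])

x = block_lu_solve(np.ones(n))
print(np.allclose(A @ x, np.ones(n)))
```

In the sparse setting B is cheap to solve because its independent-set/aggregate structure makes it (block) diagonal, which is what makes this recursion practical.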
Fast image recovery using variable splitting and constrained optimization
 IEEE Trans. Image Process
, 2010
Cited by 45 (9 self)
Abstract—We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction, which consists of an unconstrained optimization problem whose objective includes an ℓ2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.
Index Terms—Augmented Lagrangian, compressive sensing, convex optimization, image reconstruction, image restoration,
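The variable-splitting/augmented-Lagrangian recipe is easiest to see on a small analogue (an ℓ1 regularizer standing in for the paper's wavelet or total-variation terms): split x = z, so each ADMM iteration alternates a quadratic data-fidelity solve with a closed-form shrinkage step.

```python
import numpy as np

def soft(v, t):
    """Soft-threshold shrinkage, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, y, lam, rho=1.0, iters=300):
    """ADMM for min_x 0.5 ||A x - y||^2 + lam ||x||_1 via the split x = z."""
    n = A.shape[1]
    Aty = A.T @ y
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor once, reuse each iteration
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))   # quadratic data-fidelity step
        z = soft(x + u, lam / rho)      # shrinkage (denoising) step
        u = u + x - z                   # scaled dual / multiplier update
    return z

# With A = I the minimizer is plain soft-thresholding of y.
z = admm_l1(np.eye(2), np.array([3.0, -0.5]), lam=1.0)
print(z)                                # close to [2.0, 0.0]
```

In the imaging setting the paper exploits the same structure: the x-update involves a matrix that is diagonalizable by FFTs or frame transforms, so it stays cheap even for large images.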
Sparse Approximate Inverse Preconditioning For Dense Linear Systems Arising In Computational Electromagnetics
 Numerical Algorithms
, 1997
Cited by 44 (19 self)
We investigate the use of sparse approximate inverse preconditioners for the iterative solution of linear systems with dense complex coefficient matrices arising from industrial electromagnetic problems. An approximate inverse is computed via a Frobenius norm approach with a prescribed nonzero pattern. Some strategies for determining the nonzero pattern of an approximate inverse are described. The results of numerical experiments suggest that sparse approximate inverse preconditioning is a viable approach for the solution of large-scale dense linear systems on parallel computers.
Key words. Dense linear systems, preconditioning, sparse approximate inverses, complex symmetric matrices, scattering calculations, Krylov subspace methods, parallel computing.
AMS(MOS) subject classification. 65F10, 65F50, 65R20, 65N38, 78-08, 78A50, 78A55.
1. Introduction. In the last decade, a significant amount of effort has been spent on the simulation of electromagnetic wave propagation phenomena to ad...
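The Frobenius-norm construction is worth making concrete (a dense real-valued sketch; the paper works with complex matrices): minimizing ‖I - AM‖_F over matrices M with a prescribed pattern decouples into one small least-squares problem per column, which is what makes the approach attractive on parallel computers.

```python
import numpy as np

def spai(A, pattern):
    """Frobenius-norm approximate inverse on a fixed nonzero pattern:
    column j of M solves min || e_j - A[:, J] m ||_2 over the allowed
    index set J, independently of every other column."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = np.flatnonzero(pattern[:, j])    # prescribed nonzeros of column j
        e = np.zeros(n); e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = m
    return M

# Tridiagonal test: restricting M to the sparsity pattern of A already
# gives a useful approximation to inv(A).
n = 8
A = 4 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
M = spai(A, pattern=(A != 0))
print(np.linalg.norm(np.eye(n) - A @ M))    # small residual
```

Allowing the full pattern recovers the exact inverse; the pattern-selection strategies the abstract mentions are about choosing J per column so the residual stays small at low cost.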
Approximate Inverse Techniques for Block-Partitioned Matrices
 SIAM J. Sci. Comput
, 1995
Cited by 43 (12 self)
This paper proposes some preconditioning options when the system matrix is in block-partitioned form. This form may arise naturally, for example from the incompressible Navier-Stokes equations, or may be imposed after a domain decomposition reordering. Approximate inverse techniques are used to generate sparse approximate solutions whenever these are needed in forming the preconditioner. The storage requirements for these preconditioners may be much less than for ILU preconditioners for tough, large-scale CFD problems. The numerical experiments reported show that these preconditioners can help us solve difficult linear systems whose coefficient matrices are highly indefinite.
1. Introduction. Consider the block partitioning of a matrix A, in the form

    A = [ B  F ]
        [ E  C ]    (1)

where the blocking naturally occurs due to the ordering of the equations and the variables. Matrices of this form arise in many applications, such as in the incompressible Navier-Stokes equations, where the sc...