Results 1 – 10 of 91
Fast image recovery using variable splitting and constrained optimization
IEEE Trans. Image Process., 2010
Cited by 45 (9 self)
Abstract—We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction, which consists of an unconstrained optimization problem whose objective includes an ℓ2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting that yields an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods. Index Terms—Augmented Lagrangian, compressive sensing, convex optimization, image reconstruction, image restoration.
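The alternating direction method of multipliers that this abstract refers to can be illustrated on a toy version of the formulation: minimize 0.5‖Ax − y‖² + λ‖x‖₁ by introducing the split x = z. A minimal sketch, not the authors' implementation; the parameter values (ρ, λ, iteration count) are arbitrary illustrative choices:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l2_l1(A, y, lam, rho=1.0, iters=500):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||x||_1 via the split x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                    # scaled dual variable (multiplier / rho)
    AtA = A.T @ A + rho * np.eye(n)    # fixed across iterations
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(AtA, Aty + rho * (z - u))  # smooth quadratic subproblem
        z = soft(x + u, lam / rho)                     # nonsmooth shrinkage subproblem
        u = u + x - z                                  # dual ascent on the constraint x = z
    return z
```

For A = I the iterates converge to the closed-form solution soft(y, λ), which makes the splitting easy to sanity-check; in the imaging setting A would be a blur or sensing operator and the quadratic subproblem would typically be solved with FFTs rather than a dense linear solve.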
An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems
IEEE Trans. Image Process., 2011
Cited by 40 (4 self)
Abstract—We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising), tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state of the art. Index Terms—Convex optimization, frames, image reconstruction, image restoration, inpainting, total variation.
Fast Image Deconvolution using Hyper-Laplacian Priors
Cited by 38 (1 self)
The heavy-tailed distribution of gradients in natural scenes has proven an effective prior for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian, p(x) ∝ e^{−k|x|^α}, typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme in which one of the two phases is a non-convex problem that is separable over pixels. This per-pixel subproblem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by computing the roots of a cubic and a quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1-megapixel image in less than ∼3 seconds, achieving quality comparable to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems beyond the deconvolution application demonstrated.
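The LUT alternative mentioned in the abstract can be sketched directly: for a fixed α and penalty weight β, tabulate the minimizer of |w|^α + (β/2)(w − v)² over a grid of target values v by brute force, then answer per-pixel queries by nearest-grid-point lookup. The grids and parameter values below are arbitrary illustrative choices, not the paper's:

```python
import numpy as np

def build_lut(alpha, beta, v_grid, w_grid):
    """Tabulate argmin_w |w|^alpha + (beta/2)*(w - v)^2 for each v in v_grid."""
    cost = (np.abs(w_grid)[None, :] ** alpha
            + 0.5 * beta * (w_grid[None, :] - v_grid[:, None]) ** 2)
    return w_grid[np.argmin(cost, axis=1)]

# Illustrative setup: alpha = 1/2, beta = 10, values clamped to [-3, 3].
v_grid = np.linspace(-3.0, 3.0, 601)    # query grid, step 0.01
w_grid = np.linspace(-3.0, 3.0, 1201)   # candidate minimizers, step 0.005
lut = build_lut(0.5, 10.0, v_grid, w_grid)

def solve_w(v):
    """Solve the non-convex per-pixel subproblem by table lookup (vectorized)."""
    idx = np.round((v - v_grid[0]) / (v_grid[1] - v_grid[0])).astype(int)
    return lut[np.clip(idx, 0, len(v_grid) - 1)]
```

The lookup reproduces the thresholding character of the sparse prior: small inputs are mapped to exactly zero, large inputs pass through nearly unchanged, and one lookup per pixel replaces an iterative non-convex solve.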
Two-Phase Kernel Estimation for Robust Motion Deblurring
Cited by 36 (1 self)
Abstract. We discuss a few new motion deblurring problems that are significant to kernel estimation and non-blind deconvolution. We found that strong edges do not always benefit kernel estimation, but instead under certain circumstances degrade it. This finding leads to a new metric to measure the usefulness of image edges in motion deblurring, and a gradient selection process to mitigate their possible adverse effect. We also propose an efficient and high-quality kernel estimation method based on a spatial prior and iterative support detection (ISD) kernel refinement, which avoids hard thresholding of the kernel elements to enforce sparsity. We employ the TV-ℓ1 deconvolution model, solved with a new variable substitution scheme, to robustly suppress noise.
An efficient TV-L1 algorithm for deblurring multichannel images corrupted by impulsive noise
SIAM J. Sci. Comput., 2009
Cited by 27 (7 self)
We extend the alternating minimization algorithm recently proposed in [38, 39] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation (TV), either isotropic or anisotropic, and a data-fidelity term measured in the L1-norm. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties, including finite convergence for some variables and global q-linear convergence. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to an algorithm recently proposed in [20] that uses a linear program and an interior point method for recovering grayscale images.
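In quadratic-penalty splittings of this kind, the auxiliary-variable updates reduce to closed-form shrinkage operators: isotropic 2-D shrinkage for the TV auxiliary variable and scalar soft-thresholding for the L1 fidelity variable. A minimal sketch of those two proximal steps only; the full algorithm would alternate them with an FFT-based image update, which is omitted here:

```python
import numpy as np

def shrink_iso(gx, gy, t):
    """Isotropic shrinkage: per-pixel prox of t*||(gx, gy)||_2.
    Scales each gradient vector toward zero by t, clamping at zero."""
    mag = np.sqrt(gx ** 2 + gy ** 2)
    scale = np.maximum(mag - t, 0.0) / np.where(mag > 0.0, mag, 1.0)
    return scale * gx, scale * gy

def shrink_l1(v, t):
    """Scalar soft-thresholding: prox of t*|.|, used for the L1 data-fidelity split."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Both operators are separable over pixels, which is what keeps the per-iteration cost dominated by the FFT solves rather than by the nonsmooth terms.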
A fast algorithm for edge-preserving variational multichannel image restoration
Cited by 22 (6 self)
Abstract. We generalize the alternating minimization algorithm recently proposed in [32] to efficiently solve a general, edge-preserving, variational model for recovering multichannel images degraded by within- and cross-channel blurs, as well as additive Gaussian noise. This general model allows the use of localized weights and higher-order derivatives in regularization, and includes a multichannel extension of total variation (MTV) regularization as a special case. In the MTV case, we show that the model can be derived from an extended half-quadratic transform of Geman and Yang [14]. For color images with three channels, and when applied to the MTV model (either locally weighted or not), the per-iteration computational complexity of this algorithm is dominated by nine fast Fourier transforms. We establish strong convergence results for the algorithm, including finite convergence for some variables and fast q-linear convergence for the others. Numerical results on various types of blurs are presented to demonstrate the performance of our algorithm compared to that of the MATLAB deblurring functions. We also present experimental results on regularization models using weighted MTV and higher-order derivatives to demonstrate the improvements in image quality provided by these models over the plain MTV model.
Deconvolutional networks
In CVPR, 2010
Cited by 21 (2 self)
Building robust low- and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information, which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. By building a hierarchy of such decompositions, we can learn rich feature sets that form a robust image representation for both the analysis and synthesis of images.
Signal Restoration with Overcomplete Wavelet Transforms: Comparison of Analysis and Synthesis Priors
Cited by 21 (4 self)
The variational approach to signal restoration calls for the minimization of a cost function that is the sum of a data fidelity term and a regularization term, the latter term constituting a ‘prior’. A synthesis prior represents the sought signal as a weighted sum of ‘atoms’. On the other hand, an analysis prior models the coefficients obtained by applying the forward transform to the signal. For orthonormal transforms, the synthesis prior and analysis prior are equivalent; however, for overcomplete transforms the two formulations are different. We compare analysis and synthesis ℓ1norm regularization with overcomplete transforms for denoising and deconvolution.
Motion Detail Preserving Optical Flow Estimation
Cited by 21 (2 self)
We discuss the cause of a severe optical flow estimation problem: fine motion structures cannot always be correctly reconstructed in the commonly employed multiscale variational framework. Our major finding is that significant and abrupt displacement transitions wreck small-scale motion structures in the coarse-to-fine refinement. A novel optical flow estimation method is proposed in this paper to address this issue; it reduces the reliance of the flow estimates on the initial values propagated from the coarser level and enables recovery of many motion details at each scale. The contributions of this paper also include adaptation of the objective function and development of a new optimization procedure. The effectiveness of our method is borne out by experiments on both large- and small-displacement optical flow estimation.
Alternating direction augmented Lagrangian methods for semidefinite programming
2009
Cited by 21 (2 self)
Abstract. We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form. At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints, then the dual slack variables, and finally the primal variables, keeping the other variables fixed in each minimization. Convergence is proved by using a fixed-point argument. A multiple-splitting algorithm is then proposed to handle SDPs with inequality and positivity constraints directly, without transforming them into equality constraints in standard form. Finally, numerical results for frequency assignment, maximum stable set and binary integer quadratic programming problems are presented to demonstrate the robustness and efficiency of our algorithm.
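In augmented-Lagrangian SDP schemes of this kind, the dual slack-variable update reduces to a Euclidean projection onto the positive semidefinite cone, computed by clipping negative eigenvalues. A minimal sketch of that core projection step only, not the paper's full algorithm:

```python
import numpy as np

def proj_psd(M):
    """Nearest (Frobenius-norm) positive semidefinite matrix to a symmetric M:
    eigendecompose and clip negative eigenvalues to zero."""
    M = 0.5 * (M + M.T)               # symmetrize against round-off
    vals, vecs = np.linalg.eigh(M)    # ascending eigenvalues, orthonormal columns
    return (vecs * np.maximum(vals, 0.0)) @ vecs.T
```

Each outer iteration thus costs one full eigendecomposition of the slack matrix plus the linear-system solve for the multipliers, which is what makes the method attractive for large, dense SDPs.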