Results 11 - 20 of 258
Analysis versus synthesis in signal priors (2005). Cited by 147 (16 self).
Abstract:
The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. Much of the literature on this topic can be divided between analysis-based and synthesis-based priors. Analysis-based priors assign probability to a signal through various forward measurements of it, while synthesis-based priors seek a reconstruction of the signal as a combination of atom signals. In this paper we describe these two prior classes, focusing on the distinction between them. We show that although the two become equivalent in the complete and under-complete formulations, they depart in the more interesting overcomplete formulation. Focusing on the ℓ1 denoising case, we present several ways of comparing the two types of priors, establishing the existence of an unbridgeable gap between them.
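A minimal numerical sketch of the complete-case equivalence discussed above (NumPy and toy sizes assumed; this is an illustration, not the paper's code): for a square orthonormal dictionary D, the synthesis formulation solved by iterative soft thresholding agrees with the closed-form analysis solution obtained with the analysis operator Ω = Dᵀ.

```python
import numpy as np

def soft(v, lam):
    """Elementwise soft thresholding: sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
n, lam = 16, 0.3
D, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal (complete) dictionary
y = rng.standard_normal(n)

# Synthesis prior, solved by ISTA: min_a 0.5*||D a - y||^2 + lam*||a||_1
a = np.zeros(n)
for _ in range(200):                               # step size 1 is safe: ||D||_2 = 1
    a = soft(a - D.T @ (D @ a - y), lam)
x_synthesis = D @ a

# Analysis prior with Omega = D^T: min_x 0.5*||x - y||^2 + lam*||D^T x||_1.
# Substituting w = D^T x gives the closed form x = D * soft(D^T y, lam).
x_analysis = D @ soft(D.T @ y, lam)

print(np.allclose(x_synthesis, x_analysis))        # True in the complete case
```

In the overcomplete case (D with more columns than rows) no such change of variables exists, which is where the two prior classes depart.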
Random Cascades on Wavelet Trees and Their Use in Analyzing and Modeling Natural Images. Applied and Computational Harmonic Analysis, 2001. Cited by 98 (15 self).
Abstract:
... in signal and image processing, including image denoising, coding, and super-resolution. Stochastic models of natural images underlie a variety of applications in image processing and low-level computer vision, including image coding, denoising and restoration, interpolation and synthesis. Accordingly, the past decade has witnessed an increasing amount of research devoted to developing stochastic models of images (e.g., [19, 38, 45, 48, 55]). Simultaneously, wavelet ...
On the Equivalence of Soft Wavelet Shrinkage, Total Variation Diffusion, Total Variation Regularization, and SIDEs. SIAM J. Numer. Anal., 2004. Cited by 89 (18 self).
Abstract:
Soft wavelet shrinkage, total variation (TV) diffusion, TV regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity-preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the one-dimensional case. First, we prove that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs. In the translationally invariant case, we show that applying cycle spinning to Haar wavelet shrinkage on a single scale can be regarded as an absolutely stable explicit discretization of TV diffusion. We prove that space-discrete TV diffusion and TV regularization are identical, and that they are also equivalent to the SIDEs system when a specific force function is chosen. Afterwards, we show that wavelet shrinkage on multiple scales can be regarded as a single step of diffusion filtering or regularization of the Laplacian pyramid of the signal. We analyze possibilities for avoiding Gibbs-like artifacts in multiscale Haar wavelet shrinkage by scaling the thresholds. Finally, we present experiments in which hybrid methods are designed that combine the advantages of wavelet and PDE/variational approaches. These methods are based on iterated shift-invariant wavelet shrinkage at multiple scales with scaled thresholds.
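The single-scale Haar shrinkage step on two-pixel pairs can be sketched as follows (a toy NumPy illustration; the signal and threshold are arbitrary choices, not the paper's experiments). Soft-thresholding the detail coefficient of each disjoint pair moves the pair toward its mean while preserving the pair sum, mirroring one space-discrete step of a TV-type method.

```python
import numpy as np

def haar_shrink(f, theta):
    """One step of single-scale Haar soft shrinkage on disjoint pixel pairs.

    Split f into pairs, soft-threshold the normalized detail coefficients,
    and reconstruct.  Only the difference within each pair is affected;
    pair sums (hence means) are preserved exactly.
    """
    a = (f[0::2] + f[1::2]) / np.sqrt(2.0)                  # approximation
    d = (f[0::2] - f[1::2]) / np.sqrt(2.0)                  # detail
    d = np.sign(d) * np.maximum(np.abs(d) - theta, 0.0)     # soft shrinkage
    out = np.empty_like(f)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

f = np.array([4.0, 4.2, 1.0, 1.1, 1.05, 0.9])
g = haar_shrink(f, theta=0.1)
tv = lambda u: np.sum(np.abs(np.diff(u)))
print(tv(f), tv(g))   # discrete total variation before and after
```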
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci., 2008. Cited by 84 (15 self).
Abstract:
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ ℝⁿ}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈ℝⁿ} μ‖u‖₁ + (1/2)‖Au − fᵏ‖₂² for a given matrix A and vector fᵏ. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and Aᵀ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
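The "add back the residual" structure of the Bregman iteration can be sketched roughly as follows (NumPy assumed; the plain ISTA inner solver, μ, and iteration counts are illustrative stand-ins for the paper's fast fixed-point continuation solver):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, f, mu, steps=500):
    """Inner solver for min_u mu*||u||_1 + 0.5*||A u - f||_2^2 (plain ISTA)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        u = soft(u - (A.T @ (A @ u - f)) / L, mu / L)
    return u

def bregman_basis_pursuit(A, f, mu, outer=30):
    """Bregman iteration for min ||u||_1 s.t. A u = f (a sketch of the scheme).

    Each outer step solves the unconstrained subproblem, then adds the
    residual back into the data: f_{k+1} = f_k + (f - A u_k).
    """
    fk = f.copy()
    u = np.zeros(A.shape[1])
    for _ in range(outer):
        u = ista(A, fk, mu)
        fk = fk + (f - A @ u)
    return u

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
u_true = np.zeros(50); u_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
f = A @ u_true
u_hat = bregman_basis_pursuit(A, f, mu=1.0)
print(np.linalg.norm(A @ u_hat - f) / np.linalg.norm(f))   # residual driven down
```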
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints. The Journal of Fourier Analysis and Applications, 2004. Cited by 79 (11 self).
Abstract:
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative approach that enforces the ℓ1 constraint directly, using a gradient method with projection onto ℓ1-balls. The corresponding algorithm again uses iterative soft thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, both with and without acceleration.
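A rough sketch of the projected gradient idea (NumPy assumed; the sort-based ℓ1-ball projection and all problem sizes are illustrative choices). As the abstract notes, the projection step is itself a soft thresholding, with a threshold determined by the data:

```python
import numpy as np

def project_l1(v, radius):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= radius}.

    Standard sort-based scheme; the result is soft thresholding of v at a
    data-dependent level theta.
    """
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, y, radius, steps=1000):
    """min ||A x - y||^2 subject to ||x||_1 <= radius, by projected gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = project_l1(x - step * A.T @ (A @ x - y), radius)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30); x_true[[4, 9]] = [2.0, -1.0]
y = A @ x_true
x_hat = projected_gradient(A, y, radius=3.0)    # radius = ||x_true||_1
print(np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))
```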
Incorporating Information on Neighboring Coefficients into Wavelet Estimation (1999). Cited by 75 (11 self).
Abstract:
In standard wavelet methods, the empirical wavelet coefficients are thresholded term by term, on the basis of their individual magnitudes. Information on other coefficients has no influence on the treatment of particular coefficients. We propose a wavelet shrinkage method that incorporates information on neighboring coefficients into the decision making. The coefficients are considered in overlapping blocks; the treatment of coefficients in the middle of each block depends on the data in the whole block. The asymptotic and numerical performance of two particular versions of the estimator is investigated. We show that, asymptotically, one version of the estimator achieves the exact optimal rates of convergence over a range of Besov classes for global estimation, and attains the adaptive minimax rate for estimating functions at a point. In numerical comparisons with various methods, both versions of the estimator perform excellently.
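A toy sketch of neighborhood-based shrinkage in this spirit (NumPy assumed; the block size and shrinkage rule are illustrative choices, not the paper's tuned estimators): each coefficient is kept in proportion to how far the summed squared energy of its block exceeds a noise-calibrated threshold, so an isolated noise spike is killed while a clustered signal coefficient survives.

```python
import numpy as np

def neigh_shrink(d, sigma, lam=2.0 / 3.0, halfwidth=1):
    """Shrink each coefficient using the energy of its overlapping block.

    Coefficient d_j is scaled by max(0, 1 - lam*L*sigma^2 / S_j^2), where
    S_j^2 sums the squared coefficients in the block {j-h, ..., j+h} of
    size L.  The constants here are illustrative, not the paper's.
    """
    L = 2 * halfwidth + 1
    out = np.zeros_like(d)
    for j in range(len(d)):
        lo, hi = max(0, j - halfwidth), min(len(d), j + halfwidth + 1)
        s2 = np.sum(d[lo:hi] ** 2)
        out[j] = d[j] * max(0.0, 1.0 - lam * L * sigma**2 / s2) if s2 > 0 else 0.0
    return out

rng = np.random.default_rng(3)
sigma = 0.5
clean = np.zeros(40); clean[18:22] = 4.0          # a cluster of large coefficients
noisy = clean + sigma * rng.standard_normal(40)
est = neigh_shrink(noisy, sigma)
print(np.sum((est - clean) ** 2), np.sum((noisy - clean) ** 2))
```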
Image denoising via learned dictionaries and sparse representation. In CVPR, 2006. Cited by 71 (8 self).
Abstract:
We address the image denoising problem, where zero-mean, white, homogeneous additive Gaussian noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how this Bayesian treatment leads to a simple and effective denoising algorithm with state-of-the-art performance, equaling and sometimes surpassing recently published leading alternative denoising methods.
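A heavily simplified stand-in for the patch-based scheme above (NumPy assumed; a fixed DCT dictionary and hard thresholding replace the trained K-SVD dictionary and its sparse coding, so this only illustrates the "sparse patches plus averaging" global-prior structure, not the paper's algorithm):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)[:, None]; i = np.arange(n)[None, :]
    M = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

def denoise_patches(img, patch=8, thresh=30.0):
    """Sparsify every overlapping patch in a DCT basis, then average.

    Each patch is transformed, its small coefficients are zeroed (a crude
    sparse code), and the reconstructions are averaged at every pixel.
    """
    D = dct_matrix(patch)
    acc = np.zeros_like(img, dtype=float)
    cnt = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for r in range(H - patch + 1):
        for c in range(W - patch + 1):
            p = img[r:r + patch, c:c + patch]
            coef = D @ p @ D.T                    # 2-D DCT coefficients
            coef[np.abs(coef) < thresh] = 0.0     # hard thresholding
            acc[r:r + patch, c:c + patch] += D.T @ coef @ D
            cnt[r:r + patch, c:c + patch] += 1.0
    return acc / cnt

rng = np.random.default_rng(4)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 100.0
noisy = clean + 10.0 * rng.standard_normal((32, 32))
out = denoise_patches(noisy, thresh=30.0)
mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(noisy, clean), mse(out, clean))
```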
Fixed-Point Continuation for ℓ1-Minimization: Methodology and Convergence. Cited by 68 (10 self).
Abstract:
We present a framework for solving the large-scale ℓ1-regularized convex minimization problem min ‖x‖₁ + μf(x). Our approach is based on two powerful algorithmic ideas: operator splitting and continuation. Operator splitting results in a fixed-point algorithm for any given scalar μ; continuation refers to approximately following the path traced by the optimal value of x as μ increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish q-linear convergence rates for the fixed-point algorithm applied to problems with f(x) convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.
Proximal thresholding algorithm for minimization over orthonormal bases. SIAM Journal on Optimization, 2007. Cited by 62 (14 self).
Abstract:
The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated. 1. Problem formulation. Throughout this paper, H is a separable infinite-dimensional real Hilbert space with scalar product ⟨· | ·⟩, norm ‖·‖, and distance d. Moreover, Γ0(H) denotes the class of proper lower semicontinuous convex functions from H to ]−∞, +∞], and (ek)k∈ℕ is an orthonormal basis of H. The standard denoising problem in signal theory consists of recovering the original form of a signal x ∈ H from an observation z = x + v, where v ∈ H is the realization of a noise process. In many instances, x is known to admit a sparse representation with respect to (ek)k∈ℕ, and an estimate x̄ of x can be constructed by removing the coefficients of smallest magnitude in the representation (⟨z | ek⟩)k∈ℕ of z with respect to (ek)k∈ℕ. A popular method consists of performing a so-called soft thresholding of each coefficient ⟨z | ek⟩ at some predetermined level ωk ∈ ]0, +∞[, namely (see Fig. 1) (1.1) x̄ = ∑k∈ℕ soft[−ωk,ωk](⟨z | ek⟩) ek, where soft[−ωk,ωk] : ξ ↦ sign(ξ) max{|ξ| − ωk, 0}. This approach has received considerable attention in various areas of applied mathematics ranging from nonlinear approximation theory to statistics, and from harmonic analysis to image processing;
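The soft thresholder of (1.1) and its proximal characterization can be checked numerically in a few lines (NumPy assumed; the grid search is only an illustration): soft thresholding at level ω is exactly the proximity operator of the penalty ω|·|.

```python
import numpy as np

def soft(xi, omega):
    """soft_[-omega, omega](xi) = sign(xi) * max(|xi| - omega, 0), as in (1.1)."""
    return np.sign(xi) * np.maximum(np.abs(xi) - omega, 0.0)

# Soft thresholding is the proximity operator of omega*|.|:
#   soft(xi, omega) = argmin_x 0.5*(x - xi)^2 + omega*|x|.
# Verify on a fine grid for one coefficient value.
omega, xi = 0.7, 1.9
grid = np.linspace(-5, 5, 200001)
obj = 0.5 * (grid - xi) ** 2 + omega * np.abs(grid)
print(abs(grid[np.argmin(obj)] - soft(xi, omega)) < 1e-3)   # True
```

Proximal thresholding generalizes this picture: other penalties yield other thresholders, obtained as the corresponding proximity operators.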
Iterative thresholding algorithms. Preprint, 2007. Available: http://www.dsp.ece.rice.edu/cs. Cited by 53 (10 self).
Abstract:
This article provides a variational formulation for hard and firm thresholding. A related functional can be used to regularize inverse problems by sparsity constraints. We show that a damped hard or firm thresholded Landweber iteration converges to its minimizer. This provides an alternative to an algorithm recently studied by the authors. We prove stability of minimizers with respect to the parameters of the functional and its regularization properties by means of Γ-convergence. All investigations are done in the general setting of vector-valued (multi-channel) data.
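A sketch of a thresholded Landweber iteration with firm thresholding (NumPy assumed; the firm thresholder interpolates between soft and hard thresholding, and the damping, threshold levels, and problem sizes here are illustrative choices, not the paper's):

```python
import numpy as np

def firm(x, lo, hi):
    """Firm thresholding: zero below lo, identity above hi, linear in between.

    Continuous at both breakpoints; soft and hard thresholding arise as
    limiting cases of the (lo, hi) pair.
    """
    return np.where(np.abs(x) <= lo, 0.0,
           np.where(np.abs(x) >= hi, x,
                    np.sign(x) * hi * (np.abs(x) - lo) / (hi - lo)))

def thresholded_landweber(A, y, lo=0.1, hi=0.3, damp=1.0, steps=500):
    """Damped Landweber iteration with firm thresholding after each step."""
    step = damp / np.linalg.norm(A, 2) ** 2     # step <= 1/L keeps descent
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = firm(x + step * A.T @ (y - A @ x), lo, hi)
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40); x_true[[2, 13, 33]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_hat = thresholded_landweber(A, y)
print(np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))
```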