Non-linear wavelet image processing: Variational problems, compression and noise removal through wavelet shrinkage (1998)

by A. Chambolle, R. DeVore, N.-Y. Lee, B. Lucier
Venue: IEEE Trans. Image Proc.
Results 11 - 20 of 258

Analysis versus synthesis in signal priors

by Michael Elad, Peyman Milanfar, Ron Rubinstein , 2005
Abstract - Cited by 147 (16 self)
The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. Much of the literature on this topic can be divided between analysis-based and synthesis-based priors. Analysis-based priors assign probability to a signal through various forward measurements of it, while synthesis-based priors seek a reconstruction of the signal as a combination of atom signals. In this paper we describe these two prior classes, focusing on the distinction between them. We show that although the two become equivalent in the complete and under-complete formulations, in their more interesting overcomplete formulation the two types depart. Focusing on the ℓ1 denoising case, we present several ways of comparing the two types of priors, establishing the existence of an unbridgeable gap between them.

Random Cascades on Wavelet Trees and Their Use in Analyzing and Modeling Natural Images

by Martin J. Wainwright, Eero P. Simoncelli, Alan S. Willsky - Applied and Computational Harmonic Analysis , 2001
Abstract - Cited by 98 (15 self)
in signal and image processing, including image denoising, coding, and super-resolution. 1. INTRODUCTION. Stochastic models of natural images underlie a variety of applications in image processing and low-level computer vision, including image coding, denoising, restoration, interpolation and synthesis. Accordingly, the past decade has witnessed an increasing amount of research devoted to developing stochastic models of images (e.g., [19, 38, 45, 48, 55]). Simultaneously, wavel...

Citation Context

...age [15], a widely studied form of pointwise estimate, is equivalent to a MAP estimate with a certain GSM prior, namely, a Laplacian or generalized Gaussian distribution with tail exponent α = 1 (see [4]). Specifically, suppose that the prior on x has the form px(x) ∝ exp(−(λ/2)|x|) and that y is an observation of x contaminated by Gaussian noise of variance σ². Under these assumptions, it is strai...
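The equivalence stated in this truncated context can be checked numerically: under a prior px(x) ∝ exp(−(λ/2)|x|) and Gaussian noise of variance σ², the MAP estimate is soft thresholding of the observation at level λσ²/2. A minimal sketch (the values of λ and σ are arbitrary choices for illustration, not from the cited paper):

```python
import numpy as np

def soft(y, t):
    """Soft thresholding: sign(y) * max(|y| - t, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

lam, sigma = 2.0, 0.5            # hypothetical prior / noise parameters
thresh = lam * sigma**2 / 2      # induced threshold: lambda * sigma^2 / 2

xs = np.linspace(-3.0, 3.0, 600001)
for y in [-1.0, -0.1, 0.3, 2.0]:
    # Brute-force MAP: minimize (y - x)^2 / (2 sigma^2) + (lam / 2) |x| on a grid
    obj = (y - xs) ** 2 / (2 * sigma**2) + (lam / 2) * np.abs(xs)
    x_map = xs[np.argmin(obj)]
    assert abs(x_map - soft(y, thresh)) < 1e-4
```

The grid search and the closed form agree to grid precision, which is the content of the MAP/shrinkage equivalence the passage refers to.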

On the Equivalence of Soft Wavelet Shrinkage, Total Variation Diffusion, Total Variation Regularization, and SIDEs

by Gabriele Steidl, Joachim Weickert, Thomas Brox, Pavel Mrázek, Martin Welk - SIAM J. NUMER. ANAL , 2004
Abstract - Cited by 89 (18 self)
Soft wavelet shrinkage, total variation (TV) diffusion, TV regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the one-dimensional case. First, we prove that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs. In the translationally invariant case we show that applying cycle spinning to Haar wavelet shrinkage on a single scale can be regarded as an absolutely stable explicit discretization of TV diffusion. We prove that space-discrete TV diffusion and TV regularization are identical and that they are also equivalent to the SIDEs system when a specific force function is chosen. Afterwards, we show that wavelet shrinkage on multiple scales can be regarded as a single step diffusion filtering or regularization of the Laplacian pyramid of the signal. We analyze possibilities to avoid Gibbs-like artifacts for multiscale Haar wavelet shrinkage by scaling the thresholds. Finally, we present experiments where hybrid methods are designed that combine the advantages of wavelets and PDE/variational approaches. These methods are based on iterated shift-invariant wavelet shrinkage at multiple scales with scaled thresholds.

Citation Context

...hed. A book by Meyer [33] presents a unified view on wavelets and nonlinear evolutions, and Shen and Strang [43] have included wavelets into the solution of the linear heat equation. Chambolle et al. [13] showed that one may interpret wavelet shrinkage of functions as regularization processes in suitable Besov spaces. In particular, Haar thresholding was considered in [18]. Furthermore, Cohen et al. [...
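The first equivalence claimed in the abstract above concerns Haar wavelet shrinkage on a single scale, which acts independently on two-pixel pairs. A minimal sketch of that operation (the signal and threshold are hypothetical; this is not the authors' code):

```python
import numpy as np

def haar_shrink_single_scale(f, t):
    """One level of Haar analysis, soft shrinkage of the detail
    coefficients, then synthesis (signal length assumed even)."""
    a = (f[0::2] + f[1::2]) / np.sqrt(2.0)             # approximation coefficients
    d = (f[0::2] - f[1::2]) / np.sqrt(2.0)             # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)    # soft shrinkage
    out = np.empty_like(f)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

f = np.array([1.0, 1.1, 5.0, 5.2, 0.9, 1.0])
g = haar_shrink_single_scale(f, t=0.5)
# Pair means are preserved; within-pair differences shrink toward zero,
# while differences *between* pairs are untouched -- the "two-pixel" structure.
assert np.allclose(g[0::2] + g[1::2], f[0::2] + f[1::2])
```

Here every detail coefficient falls below the threshold, so each pair collapses to its mean while the jump between pairs survives, the discontinuity-preserving behavior the paper relates to TV diffusion.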

Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing

by Wotao Yin, Stanley Osher, Donald Goldfarb, Jerome Darbon - SIAM J. IMAGING SCI , 2008
Abstract - Cited by 84 (15 self)
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ Rⁿ}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈Rⁿ} μ‖u‖₁ + ½‖Au − f^k‖²₂ for given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A⊤ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.

Citation Context

...xture model, Bioucas-Dias and Figueiredo for a recent “two-step” shrinkage-based algorithm [7], Blumensath and Davies [8] for solving a cardinality constrained least-squares problem, Chambolle et al. [19] for image denoising, Daubechies, Fornasier, and Loris [30] for a direct and accelerated projected gradient method, Elad, Matalon, and Zibulevsky in [41] for image denoising, Fadili and Starck [43] fo...
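The Bregman iteration described in the abstract above can be sketched as follows. This is an illustrative implementation, not the authors' code: it uses plain iterative soft-thresholding (ISTA) as the inner solver instead of the fast fixed-point continuation solver the paper relies on, and the test instance is made up.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, f, mu, iters=1000):
    """Inner subproblem min_u mu*||u||_1 + 0.5*||A u - f||^2,
    solved by plain iterative soft-thresholding."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1 / ||A||^2
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u = soft(u - tau * (A.T @ (A @ u - f)), tau * mu)
    return u

def bregman(A, f, mu, outer=10):
    """Bregman iteration for min ||u||_1 s.t. A u = f: solve the
    unconstrained subproblem, then add the residual back into the data."""
    fk = f.copy()
    u = np.zeros(A.shape[1])
    for _ in range(outer):
        u = ista(A, fk, mu)
        fk = fk + (f - A @ u)                  # "add back the residual"
    return u

# Made-up sparse-recovery instance: m < n measurements of a 3-sparse signal.
rng = np.random.default_rng(1)
m, n = 30, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
u0 = np.zeros(n)
u0[[3, 17, 42]] = [2.0, -1.5, 1.0]
f = A @ u0
u = bregman(A, f, mu=1.0)
assert np.linalg.norm(A @ u - f) <= 0.05 * np.linalg.norm(f)
```

Each outer step feeds the constraint residual f − Au back into the data, which is what drives the unconstrained shrinkage subproblem toward the equality-constrained basis pursuit solution in few iterations.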

Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints

by Ingrid Daubechies, Massimo Fornasier, Ignace Loris - THE JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS , 2004
Abstract - Cited by 79 (11 self)
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1 penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation to ℓ1-constraints, using a gradient method, with projection on ℓ1-balls. The corresponding algorithm uses again iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, without and with acceleration.

Citation Context

...τ. (8) Convergence of this algorithm was proved in [20]. Soft-thresholding plays a role in this problem because it leads to the unique minimizer of a functional combining ℓ2- and ℓ1-norms, i.e. (see [10, 20]),

S_τ(a) = arg min_{x∈ℓ2(Λ)} ‖x − a‖² + 2τ‖x‖₁. (9)

We will call the iteration (7) the iterative soft-thresholding algorithm or the thresholded Landweber iteration. 3 Discussion of the Thresholded ...
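The "projection on ℓ1-balls" mentioned in the abstract is itself a soft thresholding with a data-dependent threshold, which is where the variable thresholding parameter comes from. A sketch of that projection using the standard sort-based construction (the input vector and radius are hypothetical):

```python
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection of v onto {x : ||x||_1 <= R}, realized as
    soft thresholding with a data-dependent threshold theta."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                       # magnitudes, descending
    css = np.cumsum(u)
    j = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - R) / j > 0)[0][-1]     # last index kept active
    theta = (css[rho] - R) / (rho + 1)                 # the variable threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

v = np.array([0.5, -1.0, 2.0, 0.1])
p = project_l1_ball(v, R=1.5)
assert abs(np.abs(p).sum() - 1.5) < 1e-12              # lands on the boundary
```

A projected gradient step for the ℓ1-constrained problem is then an ordinary gradient step followed by `project_l1_ball`, i.e. soft thresholding whose level changes from iteration to iteration.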

Incorporating Information on Neighboring Coefficients into Wavelet Estimation

by T. Tony Cai, Bernard W. Silverman , 1999
Abstract - Cited by 75 (11 self)
In standard wavelet methods, the empirical wavelet coefficients are thresholded term by term, on the basis of their individual magnitudes. Information on other coefficients has no influence on the treatment of particular coefficients. We propose a wavelet shrinkage method that incorporates information on neighboring coefficients into the decision making. The coefficients are considered in overlapping blocks; the treatment of coefficients in the middle of each block depends on the data in the whole block. The asymptotic and numerical performances of two particular versions of the estimator are investigated. We show that, asymptotically, one version of the estimator achieves the exact optimal rates of convergence over a range of Besov classes for global estimation, and attains adaptive minimax rate for estimating functions at a point. In numerical comparisons with various methods, both versions of the estimator perform excellently.

Image denoising via learned dictionaries and sparse representation

by Michael Elad, Michal Aharon - In CVPR , 2006
Abstract - Cited by 71 (8 self)
We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise should be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image, while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm, with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.

Citation Context

... been used in the past decade successfully for the denoising problem. Indeed, at first, sparsity of the unitary wavelet coefficients has been considered, leading to the celebrated shrinkage algorithm [1, 2, 3, 4, 5, 6]. One reason to turn to redundant representations was the desire to have the shift invariance property [7]. Also, with the growing realization that regular separable 1D wavelets are inappropriate for ...

FIXED-POINT CONTINUATION FOR ℓ1-MINIMIZATION: METHODOLOGY AND CONVERGENCE

by Elaine T. Hale, Wotao Yin, Yin Zhang
Abstract - Cited by 68 (10 self)
We present a framework for solving the large-scale ℓ1-regularized convex minimization problem: min ‖x‖₁ + μf(x). Our approach is based on two powerful algorithmic ideas: operator-splitting and continuation. Operator-splitting results in a fixed-point algorithm for any given scalar μ; continuation refers to approximately following the path traced by the optimal value of x as μ increases. In this paper, we study the structure of optimal solution sets; prove finite convergence for important quantities; and establish q-linear convergence rates for the fixed-point algorithm applied to problems with f(x) convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.

Proximal thresholding algorithm for minimization over orthonormal bases

by Patrick L. Combettes, Jean-christophe Pesquet - SIAM Journal on Optimization , 2007
Abstract - Cited by 62 (14 self)
The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.

1 Problem formulation. Throughout this paper, H is a separable infinite-dimensional real Hilbert space with scalar product 〈· | ·〉, norm ‖·‖, and distance d. Moreover, Γ0(H) denotes the class of proper lower semicontinuous convex functions from H to ]−∞, +∞], and (ek)k∈N is an orthonormal basis of H. The standard denoising problem in signal theory consists of recovering the original form of a signal x ∈ H from an observation z = x + v, where v ∈ H is the realization of a noise process. In many instances, x is known to admit a sparse representation with respect to (ek)k∈N, and an estimate x̄ of x can be constructed by removing the coefficients of smallest magnitude in the representation (〈z | ek〉)k∈N of z with respect to (ek)k∈N. A popular method consists of performing a so-called soft thresholding of each coefficient 〈z | ek〉 at some predetermined level ωk ∈ ]0, +∞[, namely (see Fig. 1)

(1.1) x̄ = ∑_{k∈N} soft_[−ωk,ωk](〈z | ek〉) ek, where soft_[−ωk,ωk] : ξ ↦ sign(ξ) max{|ξ| − ωk, 0}.

This approach has received considerable attention in various areas of applied mathematics ranging from nonlinear approximation theory to statistics, and from harmonic analysis to image processing;

Citation Context

...has received considerable attention in various areas of applied mathematics ranging from nonlinear approximation theory to statistics, and from harmonic analysis to image processing; see for instance [2, 7, 8, 19, 21, 27, 31] and the references therein. From an optimization point of view, the vector x̄ exhibited in (1.1) is simply the solution to the variational problem

(1.2) minimize_{x∈H} ½‖x − z‖² + ∑_{k∈N} ωk |〈x | ek〉|. A...
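The claim that the thresholded estimate (1.1) solves the variational problem (1.2) can be sanity-checked numerically in finite dimensions. A minimal sketch (random orthonormal basis and thresholds chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
E, _ = np.linalg.qr(rng.standard_normal((n, n)))     # columns play the role of (e_k)
z = rng.standard_normal(n)
w = np.full(n, 0.4)                                   # thresholds omega_k

# (1.1): soft-threshold each coefficient <z | e_k>, then resynthesize.
c = E.T @ z
xbar = E @ (np.sign(c) * np.maximum(np.abs(c) - w, 0.0))

def objective(x):
    """(1.2): 0.5 ||x - z||^2 + sum_k omega_k |<x | e_k>|."""
    return 0.5 * np.linalg.norm(x - z) ** 2 + np.sum(w * np.abs(E.T @ x))

# xbar should not be beaten by random perturbations of itself.
for _ in range(500):
    assert objective(xbar) <= objective(xbar + 0.1 * rng.standard_normal(n)) + 1e-12
```

Because the basis is orthonormal, the objective decouples across coefficients, and each one-dimensional subproblem is minimized exactly by the soft thresholder.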

Iterative thresholding algorithms

by Massimo Fornasier, Holger Rauhut - Preprint, 2007. [Online]. Available: http://www.dsp.ece.rice.edu/cs
Abstract - Cited by 53 (10 self)
This article provides a variational formulation for hard and firm thresholding. A related functional can be used to regularize inverse problems by sparsity constraints. We show that a damped hard or firm thresholded Landweber iteration converges to its minimizer. This provides an alternative to an algorithm recently studied by the authors. We prove stability of minimizers with respect to the parameters of the functional and its regularization properties by means of Γ-convergence. All investigations are done in the general setting of vector-valued (multi-channel) data.

Citation Context

...hard thresholding operators have been extensively studied. While both have been used interchangeably in practice, from a theoretical point of view the first attracted most of the attention. In fact, [7] established a variational formulation for denoising by ℓ1 penalization, which results in simple soft-thresholding. This interpretation has caught much attention due to its similarity and near-equival...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University