Results 1 – 5 of 5
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 824 (16 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian …
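The measurement model in this abstract is easy to sketch numerically. The following minimal example (an illustration, not the paper's code) draws K random Gaussian measurement vectors X_k, observes y_k = ⟨f, X_k⟩ for a sparse f, and recovers f with a greedy orthogonal matching pursuit stand-in; the paper itself analyzes recovery by ℓ1 minimization, which OMP here merely approximates.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, S = 256, 80, 5                      # ambient dimension, measurements, sparsity
f = np.zeros(N)
f[rng.choice(N, S, replace=False)] = rng.normal(size=S)   # S-sparse signal

# K random Gaussian measurement vectors X_k; observe y_k = <f, X_k>
X = rng.normal(size=(K, N)) / np.sqrt(K)
y = X @ f

def omp(X, y, sparsity):
    """Greedy recovery: repeatedly pick the column most correlated with the residual."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(X.T @ residual))))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    f_hat = np.zeros(X.shape[1])
    f_hat[support] = coef
    return f_hat

f_hat = omp(X, y, S)
err = np.linalg.norm(f - f_hat)           # recovery error from K << N measurements
```

With K = 80 measurements of a 5-sparse vector in R^256, the recovery error is at machine-precision level, illustrating the "small number of random measurements" claim.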
Total Variation Wavelet Inpainting
 J. Math. Imaging Vision
, 2006
"... We consider the problem of filling in missing or damaged wavelet coe#cients due to lossy image transmission or communication. The task is closely related to classical inpainting problems, but also remarkably di#ers in that the inpainting regions are in the wavelet domain. New challenges include that ..."
Abstract

Cited by 27 (4 self)
We consider the problem of filling in missing or damaged wavelet coefficients due to lossy image transmission or communication. The task is closely related to classical inpainting problems, but also remarkably differs in that the inpainting regions are in the wavelet domain. New challenges include that the resulting inpainting regions in the pixel domain are usually not well defined, and that the degradation is often spatially inhomogeneous. Two novel variational models are proposed to meet these challenges, which combine the total variation (TV) minimization technique with wavelet representations. The associated Euler–Lagrange equations lead to nonlinear partial differential equations (PDEs) in the wavelet domain, and proper numerical algorithms and schemes are designed to handle their computation. The proposed models can have effective and automatic control over geometric features of the inpainted images, including the sharpness and curvature information of edges.
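A minimal numerical sketch of the idea (hypothetical code, assuming a one-level orthonormal Haar transform and a smoothed TV term; the paper's actual models and discretizations differ): a block of wavelet coefficients is "lost", and the image is restored by gradient descent on the pixel-domain TV energy while the known coefficients are held fixed, i.e. projected gradient descent with the constraint living in the wavelet domain.

```python
import numpy as np

# One-level orthonormal 2-D Haar transform and its inverse.
def haar2(u):
    u00, u01 = u[0::2, 0::2], u[0::2, 1::2]
    u10, u11 = u[1::2, 0::2], u[1::2, 1::2]
    return np.block([[(u00 + u01 + u10 + u11) / 2, (u00 - u01 + u10 - u11) / 2],
                     [(u00 + u01 - u10 - u11) / 2, (u00 - u01 - u10 + u11) / 2]])

def ihaar2(c):
    n = c.shape[0] // 2
    a, hb = c[:n, :n], c[:n, n:]
    vb, d = c[n:, :n], c[n:, n:]
    u = np.empty_like(c)
    u[0::2, 0::2] = (a + hb + vb + d) / 2
    u[0::2, 1::2] = (a - hb + vb - d) / 2
    u[1::2, 0::2] = (a + hb - vb - d) / 2
    u[1::2, 1::2] = (a - hb - vb + d) / 2
    return u

EPS = 1e-2                                  # smoothing of the TV integrand

def tv(u):
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    return np.sqrt(ux**2 + uy**2 + EPS).sum()

def tv_grad(u):
    """Gradient of the smoothed TV energy (adjoint of forward differences)."""
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    nrm = np.sqrt(ux**2 + uy**2 + EPS)
    px, py = ux / nrm, uy / nrm
    return (np.roll(px, 1, 1) - px) + (np.roll(py, 1, 0) - py)

# Piecewise-constant test image; a block of its Haar coefficients is "lost".
u0 = np.zeros((16, 16))
u0[4:11, 4:11] = 1.0
c0 = haar2(u0)
known = np.ones_like(c0, dtype=bool)
known[2:6, 10:14] = False                   # damaged coefficients (wavelet domain)
u = ihaar2(np.where(known, c0, 0.0))

tv_start = tv(u)
for _ in range(400):                        # projected gradient descent:
    u = u - 0.01 * tv_grad(u)               # descend on the smoothed TV energy ...
    c = haar2(u)
    c[known] = c0[known]                    # ... keeping known coefficients fixed
    u = ihaar2(c)
tv_end = tv(u)
```

The TV energy of the inpainted image drops below that of the damaged one, while every known wavelet coefficient is preserved exactly, mirroring the abstract's "TV minimization in the wavelet domain" setup.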
Theory and Computation of Variational Image Deblurring
, 2005
"... To recover a sharp image from its blurry observation is the problem known as image deblurring. It frequently arises in imaging sciences and technologies, including optical, medical, and astronomical applications, and is crucial for allowing to detect important features and patterns such as those of ..."
Abstract

Cited by 9 (1 self)
To recover a sharp image from its blurry observation is the problem known as image deblurring. It frequently arises in imaging sciences and technologies, including optical, medical, and astronomical applications, and is crucial for detecting important features and patterns such as those of a distant planet or some microscopic tissue. Mathematically, image deblurring is intimately connected to backward diffusion processes (e.g., inverting the heat equation), which are notoriously unstable. As inverse problem solvers, deblurring models therefore crucially depend upon proper regularizers or conditioners that help secure stability, often at the necessary cost of losing certain high-frequency details in the original images. Such regularization techniques can ensure the existence, uniqueness, or stability of deblurred images. The present work follows closely the general framework described in our recent monograph [18], but also contains more updated views and approaches to image deblurring, including, e.g., more discussion on stochastic signals, the Bayesian/Tikhonov approach to Wiener filtering, and the iterated-shrinkage algorithm of Daubechies et al. [30,31] for wavelet-based deblurring. The work thus contributes to the development of generic, systematic, and unified frameworks in contemporary image processing.
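The instability described here can be illustrated with a short Fourier-domain sketch (simplified assumptions: periodic boundaries and a known Gaussian blur; this is not the monograph's algorithm). Naive inversion of the blur divides by near-zero frequencies and amplifies noise enormously, while a Tikhonov-regularized inverse stabilizes the reconstruction at the cost of some high-frequency detail:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = np.zeros((n, n))
x[20:44, 20:44] = 1.0                     # sharp test image

# Periodic Gaussian blur kernel (sigma = 2), applied via FFT
ax = np.arange(n)
ax = np.minimum(ax, n - ax)               # periodic distances from the origin
d2 = ax[:, None] ** 2 + ax[None, :] ** 2
k = np.exp(-d2 / (2 * 2.0 ** 2))
k /= k.sum()
K = np.fft.fft2(k)

y = np.real(np.fft.ifft2(K * np.fft.fft2(x)))       # blurred ...
y += 0.005 * rng.normal(size=y.shape)               # ... and noisy observation

# Naive inversion divides by near-zero frequencies: unstable backward diffusion
x_naive = np.real(np.fft.ifft2(np.fft.fft2(y) / K))

# Tikhonov-regularized inversion trades high-frequency detail for stability
lam = 1e-2
x_tik = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(y) / (np.abs(K) ** 2 + lam)))

err_blur = np.linalg.norm(y - x)
err_naive = np.linalg.norm(x_naive - x)
err_tik = np.linalg.norm(x_tik - x)
```

The naive inverse is catastrophically worse than doing nothing, whereas the regularized inverse improves on the blurred observation, which is exactly the stability-versus-detail trade-off the abstract describes.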
Anisotropic smoothness classes: from finite element approximation to image models
, 2009
"... We propose and study quantitative measures of smoothness f ↦ → A(f) which are adapted to anisotropic features such as edges in images or shocks in PDE’s. These quantities govern the rate of approximation by adaptive finite elements, when no constraint is imposed on the aspect ratio of the triangles, ..."
Abstract

Cited by 3 (0 self)
We propose and study quantitative measures of smoothness f ↦ A(f) which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements when no constraint is imposed on the aspect ratio of the triangles, the simplest example being A_p(f) = ‖√(det(d²f))‖_{L^τ}, which appears when approximating in the L^p norm by piecewise linear elements, where 1/τ = 1/p + 1. The quantities A(f) are not semi-norms, and therefore cannot be used to define linear function spaces. We show that these quantities can be well defined by mollification when f has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the smoothness of the edges.
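As a rough numerical illustration (a hypothetical helper with a finite-difference Hessian, not code from the paper), the quantity A_p(f) = ‖√|det(d²f)|‖_{L^τ} with 1/τ = 1/p + 1 can be discretized directly. For f(x, y) = x² + y² on the unit square the integrand is constant (√det d²f = 2), so the computed value should come out close to 2:

```python
import numpy as np

def A_p(f_vals, h, p):
    """Discrete stand-in for A_p(f) = || sqrt(|det d^2 f|) ||_{L^tau}, 1/tau = 1/p + 1."""
    tau = 1.0 / (1.0 / p + 1.0)
    fx, fy = np.gradient(f_vals, h, h)          # axis 0 is the x-direction here
    fxx, fxy = np.gradient(fx, h, h)
    fyx, fyy = np.gradient(fy, h, h)
    det = fxx * fyy - fxy * fyx                 # determinant of the Hessian d^2 f
    g = np.sqrt(np.abs(det))
    return (np.sum(g ** tau) * h * h) ** (1.0 / tau)   # discrete L^tau norm

n = 101
h = 1.0 / (n - 1)
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
f = X ** 2 + Y ** 2                             # det d^2 f = 4, so sqrt = 2 everywhere
val = A_p(f, h, p=2)                            # exact value on [0,1]^2 is 2
```

Small deviations from 2 come from one-sided differences at the grid boundary and the quadrature weighting; the point is only that A_p is a computable, nonlinear functional of the Hessian rather than of the gradient, unlike the TV semi-norm.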