Results 1–7 of 7
Stable recovery of sparse overcomplete representations in the presence of noise
IEEE Trans. Inform. Theory, 2006
Abstract

Cited by 318 (20 self)
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in a sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
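A minimal sketch of the greedy recovery idea this abstract refers to, using Orthogonal Matching Pursuit over a random incoherent dictionary (the dictionary, sizes, sparsity level, and noise scale below are illustrative choices, not the paper's):

```python
import numpy as np

def omp(D, y, k):
    """Greedy Orthogonal Matching Pursuit: select k atoms of D that best
    explain y, re-fitting coefficients by least squares at each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit on the atoms chosen so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms: incoherent w.h.p.
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]
y = D @ x_true + 0.01 * rng.standard_normal(64)   # noisy observation
x_hat, supp = omp(D, y, k=3)
print(supp)
print(np.linalg.norm(x_hat - x_true))  # stays on the order of the noise level
```

At this high signal-to-noise ratio the recovered support matches the ideal noiseless one, and the coefficient error is a small multiple of the noise level, illustrating the stability the paper proves.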
Sparse representation for color image restoration
IEEE Trans. on Image Processing, 2007
Abstract

Cited by 118 (27 self)
Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task [1], and shown to perform very well for various grayscale image processing tasks. In this paper we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in [2]. This work puts forward ways for handling non-homogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. EDICS Category: COL-COLR (Color processing)
Learning multiscale sparse representations for image and video restoration
2007
Abstract

Cited by 60 (18 self)
Abstract. This paper presents a framework for learning multiscale sparse representations of color images and video with overcomplete dictionaries. A single-scale K-SVD algorithm was introduced in [1], formulating sparse dictionary learning for grayscale image representation as an optimization problem, efficiently solved via Orthogonal Matching Pursuit (OMP) and Singular Value Decomposition (SVD). Following this work, we propose a multiscale learned representation, obtained by using an efficient quadtree decomposition of the learned dictionary, and overlapping image patches. The proposed framework provides an alternative to predefined dictionaries such as wavelets, and is shown to lead to state-of-the-art results in a number of image and video enhancement and restoration applications. This paper describes the proposed framework, and accompanies it by numerous examples demonstrating its strength. Key words. Image and video processing, sparsity, dictionary, multiscale representation, denoising, inpainting, interpolation, learning. AMS subject classifications. 49M27, 62H35
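The quadtree decomposition the abstract mentions can be sketched as follows: each patch is split recursively into four quadrants, and each level's sub-patches can then be coded against a dictionary of its own scale (patch size and number of levels here are illustrative, not the paper's settings):

```python
import numpy as np

def quadtree_patches(patch, levels=2):
    """Collect sub-patches of `patch` at each quadtree level:
    level 0 is the whole patch, level 1 its four quadrants, and so on."""
    out = {0: [patch]}
    for lvl in range(1, levels):
        prev, half = out[lvl - 1], patch.shape[0] >> lvl
        out[lvl] = [q for p in prev
                      for q in (p[:half, :half], p[:half, half:],
                                p[half:, :half], p[half:, half:])]
    return out

patch = np.arange(16 * 16, dtype=float).reshape(16, 16)
scales = quadtree_patches(patch, levels=3)
print([len(v) for v in scales.values()])   # [1, 4, 16]
print(scales[1][0].shape)                  # (8, 8)
```

Every pixel appears once per level, so sparse codes computed at different scales describe the same content at different resolutions.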
Image denoising via learned dictionaries and sparse representation
In CVPR, 2006
Abstract

Cited by 48 (6 self)
We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise should be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm, with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
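The aggregation step implied by the patch-wise prior can be sketched as follows: after every overlapping patch has been denoised (by sparse coding over the dictionary), each pixel of the output is the average over all patches covering it. This is a minimal sketch of that averaging step only, with identity "denoising" as a stand-in:

```python
import numpy as np

def average_overlapping(patches_hat, idx, shape, p):
    """Rebuild an image from denoised overlapping p-by-p patches by
    averaging every pixel over all patches that cover it."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for (i, j), ph in zip(idx, patches_hat):
        acc[i:i + p, j:j + p] += ph
        cnt[i:i + p, j:j + p] += 1
    return acc / cnt

p, shape = 4, (8, 8)
img = np.ones(shape)
idx = [(i, j) for i in range(shape[0] - p + 1) for j in range(shape[1] - p + 1)]
patches = [img[i:i + p, j:j + p] for (i, j) in idx]  # stand-in for denoised patches
out = average_overlapping(patches, idx, shape, p)
print(np.allclose(out, img))   # True: averaging is exact when patches agree
```

In the actual algorithm this average is further blended with the noisy image itself, weighted by the noise level; the sketch shows only the overlap-averaging mechanism.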
Sparse Representations are Most Likely to be the Sparsest Possible
EURASIP Journal on Applied Signal Processing, Paper No. 96247, 2004
Abstract

Cited by 12 (2 self)
Given a signal S ∈ ℝ^N and a full-rank matrix D ∈ ℝ^{N×L} with N < L, we define the signal's overcomplete representations as all α satisfying S = Dα. Among all the possible solutions, we have special interest in the sparsest one, the one minimizing ‖α‖₀. Previous work has established that a representation is unique if it is sparse enough, requiring ‖α‖₀ < Spark(D)/2.
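The spark of a dictionary (the smallest number of linearly dependent columns) can be computed by brute force for tiny examples, which makes the uniqueness bound concrete (the 2×4 matrix below is an illustrative example, not from the paper):

```python
import itertools
import numpy as np

def spark(D):
    """Smallest number of linearly dependent columns of D
    (exhaustive search; only feasible for tiny matrices)."""
    n, L = D.shape
    for k in range(1, L + 1):
        for cols in itertools.combinations(range(L), k):
            if np.linalg.matrix_rank(D[:, list(cols)]) < k:
                return k
    return L + 1  # all columns in general position

# columns e1, e2, e1+e2, e1-e2: any pair independent, some triple dependent
D = np.array([[1.0, 0.0, 1.0,  1.0],
              [0.0, 1.0, 1.0, -1.0]])
s = spark(D)
print(s)        # 3
print(s / 2)    # 1.5 -> only 1-sparse representations are guaranteed unique
```

For this dictionary, Spark(D) = 3, so the bound ‖α‖₀ < 3/2 guarantees uniqueness only for representations using a single atom; richer dictionaries with larger spark admit stronger guarantees.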
Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
Abstract
Abstract—We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm. This leads to state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods. Index Terms—Bayesian reconstruction, dictionary learning, discrete cosine transform (DCT), image denoising, K-SVD, matching pursuit, maximum a posteriori (MAP) estimation, redundancy, sparse representations.
Learning Hierarchical and Topographic Dictionaries with Structured Sparsity
Abstract
Recent work in signal processing and statistics has focused on defining new regularization functions which not only induce sparsity of the solution, but also take into account the structure of the problem [1–7]. We present in this paper a class of convex penalties introduced in the machine learning community, which take the form of a sum of ℓ2- and ℓ∞-norms over groups of variables. They extend the classical group-sparsity regularization [8–10] in the sense that the groups may overlap, allowing more flexibility in the group design. We review efficient optimization methods to deal with the corresponding inverse problems [11–13], and their application to the problem of learning dictionaries of natural image patches [14–18]: on the one hand, dictionary learning has indeed proven effective for various signal processing tasks [17, 19]; on the other hand, structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We thus consider a structured sparse regularization to learn dictionaries embedded in a particular structure, for instance a tree [11] or a two-dimensional grid [20]. In the latter case, the results we obtain are similar to the dictionaries produced by topographic independent component analysis.
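The basic building block behind such group penalties is the proximal operator of a sum of group ℓ2-norms, which for non-overlapping groups reduces to block soft-thresholding (the vector, groups, and threshold below are illustrative; the overlapping-group case treated in the paper needs more elaborate solvers):

```python
import numpy as np

def prox_group_l2(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 for non-overlapping
    groups: shrink each group's norm by lam, zeroing small groups."""
    out = v.copy()
    for g in groups:
        nrm = np.linalg.norm(v[g])
        scale = max(0.0, 1.0 - lam / nrm) if nrm > 0 else 0.0
        out[g] = scale * v[g]
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
z = prox_group_l2(v, groups, lam=1.0)
print(z)   # first group shrunk (norm 5 -> 4), second group zeroed
```

Because whole groups are zeroed at once, the penalty selects or discards dictionary elements jointly, which is what lets the group design encode a tree or grid structure over atoms.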