## A weighted average of sparse representations is better than the sparsest one alone (2008)

Citations: 7 (1 self)

### BibTeX

```bibtex
@MISC{Elad08aweighted,
  author = {Michael Elad and Irad Yavneh},
  title  = {A weighted average of sparse representations is better than the sparsest one alone},
  year   = {2008}
}
```

### Abstract

Cleaning of noise from signals is a classical and long-studied problem in signal processing. Algorithms for this task necessarily rely on a priori knowledge about the signal characteristics, along with information about the noise properties. For signals that admit sparse representations over a known dictionary, a commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one. As this problem is too complex in general, approximation methods, such as greedy pursuit algorithms, are often employed. This line of reasoning suggests that detecting the sparsest representation is the key to successful denoising. Does this mean that other competitive, slightly inferior sparse representations are meaningless? Suppose we are given a group of competing sparse representations, each claiming to explain the signal differently. Can these be fused somehow to yield a better result? Surprisingly, the answer is positive: merging these representations can form a more accurate, yet dense, estimate of the original signal, even when the latter is known to be sparse. In this paper we demonstrate this behavior, propose a practical way to generate such a collection of representations by randomizing the Orthogonal Matching Pursuit (OMP) algorithm, and provide a clear analytical justification for the superiority of the associated Randomized OMP (RandOMP) algorithm. We show that while the maximum a posteriori probability (MAP) estimator aims to find and use the sparsest representation, the minimum mean-squared-error (MMSE) estimator leads to a fusion of representations to form its result. Thus, by working with an appropriate mixture of candidate representations, we surpass the MAP estimate and tend towards the MMSE estimate, obtaining a far more accurate estimation, especially at medium and low SNR.
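The fusion idea in the abstract can be sketched in a few lines of NumPy. This is an illustrative reduction, not the paper's exact algorithm: the selection probabilities below (a stabilized softmax over squared correlations, scaled by an assumed noise variance `sigma2`) stand in for the exact RandOMP weights, and unit-norm dictionary atoms and `k >= 1` are assumed.

```python
import numpy as np

def rand_omp(D, y, sigma2, k, rng):
    """One randomized OMP run (simplified sketch of RandOMP):
    instead of greedily taking the atom most correlated with the
    residual, draw it at random with probability growing with the
    squared correlation.  Assumes unit-norm atoms and k >= 1."""
    n, m = D.shape
    support = []
    r = y.astype(float).copy()
    for _ in range(k):
        c = D.T @ r                        # correlations with residual
        logits = c ** 2 / (2.0 * sigma2)
        w = np.exp(logits - logits.max())  # stabilized softmax weights
        w /= w.sum()
        j = int(rng.choice(m, p=w))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coeffs     # least-squares re-fit residual
    return D[:, support] @ coeffs          # reconstructed signal

def rand_omp_average(D, y, sigma2, k, runs=100, seed=0):
    """Fuse many randomized runs by plain averaging: the result is
    dense, yet (per the paper) closer to the MMSE estimate than the
    single sparsest representation."""
    rng = np.random.default_rng(seed)
    return np.mean([rand_omp(D, y, sigma2, k, rng) for _ in range(runs)], axis=0)
```

Each run yields a valid sparse explanation of `y`; the average over runs is what the paper argues approximates the MMSE estimator.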

### Citations

1651 | Atomic decomposition by basis pursuit - Chen, Donoho, et al. - 1999 |

524 | Greed is good: Algorithmic results for sparse approximation - Tropp - 2004 |
Citation Context: ...imation technique is the Orthogonal Matching Pursuit (OMP), a greedy algorithm that accumulates one atom at a time in forming α̂, aiming at each step to minimize the representation error ‖y − Dα‖₂² [2, 3, 5, 6, 19, 21, 23]. When this error falls below some predetermined threshold, or when the number of atoms reaches a destination value, this process stops. While crude, this technique works very fast and can guarantee... |
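The greedy OMP loop quoted in the citation context above can be sketched as follows; a minimal version, assuming unit-norm atoms and a residual-energy stopping threshold `eps` (names and structure are illustrative, not the paper's code):

```python
import numpy as np

def omp(D, y, eps):
    """Greedy OMP sketch: add the atom most correlated with the
    residual, re-fit by least squares on the chosen support, and stop
    once the residual energy drops below the threshold eps."""
    n, m = D.shape
    support = []
    coeffs = np.zeros(0)
    alpha = np.zeros(m)
    r = y.astype(float).copy()
    while r @ r > eps and len(support) < m:
        j = int(np.argmax(np.abs(D.T @ r)))   # best-matching atom
        if j in support:
            break                             # no progress; stop
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coeffs        # orthogonalized residual
    alpha[support] = coeffs
    return alpha                              # sparse representation
```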

467 | Learning low-level vision - Freeman, Pasztor |
Citation Context: ...sparsity should tend towards k = 1, since almost every original signal is available as an atom (possibly up to a scale). This extreme case is exactly the one practiced in direct example-based methods [1, 14, 15, 22, 7]. Suppose we are given many instances of noisy signals {y_i}, i = 1, …, N. We refer to those as our training data, and form a dictionary D by simply concatenating them as our atoms. When aiming to denoise a n... |

350 | Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition - Pati, Rezaiifar, et al. - 1993 |
Citation Context: ...imation technique is the Orthogonal Matching Pursuit (OMP), a greedy algorithm that accumulates one atom at a time in forming α̂, aiming at each step to minimize the representation error ‖y − Dα‖₂² [2, 3, 5, 6, 19, 21, 23]. When this error falls below some predetermined threshold, or when the number of atoms reaches a destination value, this process stops. While crude, this technique works very fast and can guarantee... |

342 | For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution - Donoho |
Citation Context: ...ear-optimal results in some cases. How good is the denoising obtained by the above approach? Past work provides some preliminary answers, both theoretical and empirical, to this and related questions [2, 8, 9, 10, 12, 13, 16, 17, 24, 25]. Most of this work concentrates on the accuracy with which one can approximate the true representation (rather than the signal itself), adopting a worst-case point of view. Indeed, the only work that ... |

316 | Sparse approximate solutions to linear systems - Natarajan - 1995 |
Citation Context: ...nce α̂ is found, the denoising result is obtained by x̂ = Dα̂. The problem posed in Equation (1) is too complex in general, requiring a combinatorial search that explores all possible sparse supports [20]. Approximation methods are therefore often employed, with the understanding that their result may deviate from the true solution. One such approximation technique is the Orthogonal Matching Pursuit (... |

305 | Limits on super-resolution and how to break them - Baker, Kanade - 2002 |
Citation Context: ...sparsity should tend towards k = 1, since almost every original signal is available as an atom (possibly up to a scale). This extreme case is exactly the one practiced in direct example-based methods [1, 14, 15, 22, 7]. Suppose we are given many instances of noisy signals {y_i}, i = 1, …, N. We refer to those as our training data, and form a dictionary D by simply concatenating them as our atoms. When aiming to denoise a n... |

297 | Stable recovery of sparse overcomplete representations in the presence of noise - Donoho, Elad, et al. |
Citation Context: ...presentation α, how can we denoise a corrupted version of it, y? A commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one [2, 9, 10, 11, 12, 13, 16, 17, 19, 24, 25]. Put formally, one way to define our task is given by α̂ = arg min_α ‖α‖₀ + λ‖y − Dα‖₂² (1). The first penalty directs the minimization task towards the sparsest possible representation, exploiting... |

267 | Image denoising via sparse and redundant representations over learned dictionaries - Elad, Aharon - 2006 |
Citation Context: ...presentation α, how can we denoise a corrupted version of it, y? A commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one [2, 9, 10, 11, 12, 13, 16, 17, 19, 24, 25]. Put formally, one way to define our task is given by α̂ = arg min_α ‖α‖₀ + λ‖y − Dα‖₂² (1). The first penalty directs the minimization task towards the sparsest possible representation, exploiting... |

229 | Example-Based Super-Resolution - Freeman, Jones, et al. - 2002 |
Citation Context: ...sparsity should tend towards k = 1, since almost every original signal is available as an atom (possibly up to a scale). This extreme case is exactly the one practiced in direct example-based methods [1, 14, 15, 22, 7]. Suppose we are given many instances of noisy signals {y_i}, i = 1, …, N. We refer to those as our training data, and form a dictionary D by simply concatenating them as our atoms. When aiming to denoise a n... |

202 | From sparse solutions of systems of equations to sparse modeling of signals and images - Bruckstein, Donoho, et al. - 2009 |
Citation Context: ...ibe the a priori knowledge about the signal characteristics. Among these, a recently emerging model for signals that attracts much attention is one that relies on sparse and redundant representations [18, 2]. This model will be the focus of the work presented here. 1.2 Sparse and Redundant Representations: A signal x is said to have a sparse representation over a known dictionary D ∈ ℝⁿˣᵐ (we typically as... |

153 | Orthogonal Least Squares Methods and their Application to Non-Linear System Identification - Chen, Billings, et al. - 1989 |
Citation Context: ...more relevant to denoising in cases when the noise power is fixed and known, as in the case studied here. Figure 1 presents the OMP algorithm with a stopping rule that depends on the residual energy [2, 3, 5, 6, 19, 21]. At each iteration, the set {ε(j)}, j = 1, …, m, is computed, whose jth term indicates the error that would remain if atom j were added to the current solution. The atom chosen is the one yielding the smallest... |
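For unit-norm atoms, the error-driven selection described in this context reduces to ε(j) = ‖r‖² − (dⱼᵀr)² (the residual energy left after a greedy one-atom update, before the least-squares re-fit over the full support), so minimizing ε(j) is equivalent to maximizing the absolute correlation. A minimal sketch under that assumption:

```python
import numpy as np

def select_atom(D, r):
    """Error-driven atom selection for unit-norm atoms: compute
    eps_j = ||r||^2 - (d_j^T r)^2 for every atom and return the index
    minimizing the remaining residual energy, plus the full eps set."""
    c = D.T @ r                 # correlations with the residual
    eps = r @ r - c ** 2        # energy left if atom j is added
    return int(np.argmin(eps)), eps
```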

114 | Adaptive greedy approximations - Davis, Mallat, et al. - 1997 |
Citation Context: ...imation technique is the Orthogonal Matching Pursuit (OMP), a greedy algorithm that accumulates one atom at a time in forming α̂, aiming at each step to minimize the representation error ‖y − Dα‖₂² [2, 3, 5, 6, 19, 21, 23]. When this error falls below some predetermined threshold, or when the number of atoms reaches a destination value, this process stops. While crude, this technique works very fast and can guarantee... |

90 | Just relax: Convex programming methods for subset selection and sparse approximation - Tropp - 2004 |
Citation Context: ...presentation α, how can we denoise a corrupted version of it, y? A commonly used denoising technique is to seek the sparsest representation that synthesizes a signal close enough to the corrupted one [2, 9, 10, 11, 12, 13, 16, 17, 19, 24, 25]. Put formally, one way to define our task is given by α̂ = arg min_α ‖α‖₀ + λ‖y − Dα‖₂² (1). The first penalty directs the minimization task towards the sparsest possible representation, exploiting... |

79 | Recovery of exact sparse representations in the presence of noise - Fuchs - 2005 |

75 | Adaptive time-frequency decompositions - Davis, Mallat, et al. - 1994 |

30 | Denoising by sparse approximation: Error bounds based on rate-distortion theory - Fletcher, Rangan, et al. |

30 | A simple test to check the optimality of a sparse signal approximation - Gribonval, Ventura, et al. - 2006 |

19 | A sampled texture prior for image super-resolution - Pickup, Roberts, et al. - 2003 |

13 | Noise sensitivity of sparse signal representations: Reconstruction error bounds for the inverse problem - Wohlberg - 2003 |

9 | A wavelet tour of signal processing, Academic Press - Mallat - 1998 |
Citation Context: ...ibe the a priori knowledge about the signal characteristics. Among these, a recently emerging model for signals that attracts much attention is one that relies on sparse and redundant representations [18, 2]. This model will be the focus of the work presented here. 1.2 Sparse and Redundant Representations: A signal x is said to have a sparse representation over a known dictionary D ∈ ℝⁿˣᵐ (we typically as... |

6 | Example-based single image super-resolution: A global MAP approach with outlier rejection - Datsenko, Elad - 2007 |

3 | Analysis of denoising by sparse approximation with random frame asymptotics - Fletcher, Rangan, et al. - 2005 |