## Greedy Basis Pursuit (2006)

Citations: 7 (0 self)

### BibTeX

```bibtex
@MISC{Huggins06greedybasis,
  author = {Patrick S. Huggins and Steven W. Zucker},
  title  = {Greedy Basis Pursuit},
  year   = {2006}
}
```

### Abstract

We introduce Greedy Basis Pursuit (GBP), a new algorithm for computing signal representations using overcomplete dictionaries. GBP is rooted in computational geometry and exploits an equivalence between minimizing the ℓ1-norm of the representation coefficients and determining the intersection of the signal with the convex hull of the dictionary. GBP unifies the different advantages of previous algorithms: like standard approaches to Basis Pursuit, GBP computes representations that have minimum ℓ1-norm; like greedy algorithms such as Matching Pursuit, GBP builds up representations, sequentially selecting atoms. We describe the algorithm, demonstrate its performance, and provide code. Experiments show that GBP can provide a fast alternative to standard linear programming approaches to Basis Pursuit.
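The Basis Pursuit objective the abstract refers to, minimizing the ℓ1-norm of the coefficients subject to exact reconstruction, can be sketched as a standard linear program. The sketch below is illustrative only (the dictionary and signal are toy values, not from the paper) and uses the usual splitting c = u − v with u, v ≥ 0 so that the ℓ1-norm becomes a linear objective:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, x):
    """Solve min ||c||_1 subject to D c = x as an LP.

    Split c = u - v with u, v >= 0; then ||c||_1 = sum(u) + sum(v)
    at the optimum, and D c = x becomes D u - D v = x.
    """
    d, n = D.shape
    cost = np.ones(2 * n)            # minimize sum(u) + sum(v)
    A_eq = np.hstack([D, -D])        # D u - D v = x
    res = linprog(cost, A_eq=A_eq, b_eq=x, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy overcomplete dictionary: 2-D signal space, 4 unit-norm atoms.
D = np.array([[1.0, 0.0, 0.6, -0.8],
              [0.0, 1.0, 0.8,  0.6]])
x = np.array([0.6, 0.8])  # happens to equal the third atom
c = basis_pursuit(D, x)
print(np.round(c, 6))  # a 1-sparse representation using atom 3
```

Because all atoms are unit-norm and x lies on the unit circle, the unique minimum-ℓ1 representation here uses the single matching atom, which is the behavior BP is designed to exhibit.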

### Citations

8940 | Introduction to Algorithms
- Cormen
- 2001
Citation Context ...e here. We also highlight some unexplored connections between sparse signal representation and linear programming. 2.1 Matching Pursuit Matching Pursuit (MP) [67] is the prototypical greedy algorithm [20] applied to sparse signal representation. MP is currently the most popular algorithm for computing sparse signal representations using an overcomplete dictionary, and is used in a variety of applicati...
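The greedy Matching Pursuit scheme this context describes can be sketched in a few lines. This toy implementation (the function name and parameters are mine, not from the cited papers) repeatedly selects the atom most correlated with the current residual, assuming unit-norm atoms:

```python
import numpy as np

def matching_pursuit(D, x, n_iter=10, tol=1e-8):
    """Greedy MP sketch: at each step, pick the atom with the largest
    inner product with the residual and subtract its projection."""
    d, n = D.shape
    coeffs = np.zeros(n)
    residual = x.astype(float).copy()
    for _ in range(n_iter):
        inner = D.T @ residual           # correlations with all atoms
        k = np.argmax(np.abs(inner))     # best-matching atom
        coeffs[k] += inner[k]            # atoms assumed unit-norm
        residual -= inner[k] * D[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

# With an orthonormal dictionary, MP recovers the exact expansion.
D = np.eye(3)
x = np.array([2.0, -1.0, 0.5])
c, r = matching_pursuit(D, x)
```

With an overcomplete dictionary the same loop runs unchanged, but, as the surrounding contexts note, the greedy choice can fail to yield the sparsest representation.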

1864 | Compressed sensing
- Donoho
- 2006
Citation Context ...en the minimal ℓ1-norm solution is equivalent to the minimal ℓ0-norm solution [34, 33, 46]. These findings have made BP useful for areas beyond signal representation, including compressed sensing [32] and error correcting codes [13], thus GBP may prove useful in these domains. 7 Conclusions We have described GBP, a new algorithm for Basis Pursuit, and demonstrated that it is faster than standard l...

1775 | Atomic decomposition by basis pursuit
- Chen, Donoho, et al.
- 1999
Citation Context ...e signal representation [36, 35]. As noted, part of the motivation for the development of BP is the observation that MP and OMP can fail to find sparse, in the ℓ0-norm sense, signal representations [18], with much theoretical work showing under exactly what conditions BP finds sparse representations, i.e., when the minimal ℓ1-norm solution is equivalent to the minimal ℓ0-norm solution [34, 33, 4...

1133 | Matching pursuits with time-frequency dictionaries
- Mallat, Zhang
- 1993
Citation Context ...y contrast, in an overcomplete dictionary the number of atoms is greater than the dimensionality of the signal space and representation is no longer unique; this enables flexibility in representation [67], ‘shiftability’ [88], and the use of multiple bases [57, 92], ...

932 | Theory of communication
- Gabor
- 1946
Citation Context ...ameters t and f as G(t,f) = exp(−σ²t²)cos(2πft), where t ∈ {0 : ∆t : 1} and f ∈ {0 : ∆f : d/2}, with ∆t = 2^j/d, σ = √(π/2)/∆t, and ∆f = σ/√(2π); the scale parameter j varied over {0, 1, ..., 8}. See [47, 40] for details and other sampling schemes. Once the atoms were defined, they were perturbed as in the random data case. Some samples from the final dictionary are shown in Figure 6. We show the running ...

640 | Sparse coding with an overcomplete basis set: A strategy employed by V1?, Vision Research 37
- Olshausen, Field
- 1997
Citation Context ... different properties from one optimized for BP. The design of dictionaries has only recently received attention in the signal processing community [27, 39, 2] (for work in neural computation, see [75, 64]); our work suggests that the geometric properties of dictionaries play a crucial role in both the efficiency of representation algorithms and the quality of the resulting representations. Indeed, geo...

559 | Greed is good: algorithmic results for sparse approximation
- Tropp
Citation Context ...y, and has good approximation properties [67, 95, 53]. Moreover, one variant of MP, Orthogonal Matching Pursuit (OMP) [78], can be shown to compute nearly sparse representations under some conditions [97]. Basis Pursuit (BP), instead of seeking sparse representations directly, seeks representations that minimize the ℓ1-norm of the coefficients. By equating signal representation with ℓ1-norm minimi...

459 | Shiftable multi-scale transforms
- Simoncelli, Freeman, et al.
- 1992
Citation Context ...rcomplete dictionary the number of atoms is greater than the dimensionality of the signal space and representation is no longer unique; this enables flexibility in representation [67], ‘shiftability’ [88], and the use of multiple bases [57, 92], ...

437 | K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
- Aharon, Elad, et al.
- 2006
Citation Context ... for use with MP [27] or OMP [39] may well have very different properties from one optimized for BP. The design of dictionaries has only recently received attention in the signal processing community [27, 39, 2] (for work in neural computation, see [75, 64]); our work suggests that the geometric properties of dictionaries play a crucial role in both the efficiency of representation algorithms and the qual...

436 | Projection pursuit regression
- Friedman, Stuetzle
- 1981
Citation Context ...entation, the signal corresponds to the data set, while the atoms correspond to the variables. In fact, MP was inspired by Projection Pursuit [45, 56], in particular its use as a regression algorithm [44]. Given this connection, it should not be surprising that some algorithmic ideas in sparse signal representation correspond to earlier work in regression. For example, in Forward Selection the optimal...

392 | Optimally sparse representation in general (non-orthogonal) dictionaries via minimization
- Donoho, Elad
- 2002
Citation Context ...BP can compute sparse solutions in situations where greedy algorithms fail [18]. Recent theoretical work shows that representations computed by BP are guaranteed to be sparse under certain conditions [34, 33, 46]. While applying standard linear programming methods to compute minimum ℓ1-norm signal representations is natural, such methods were developed with very different problems in mind and may not be ide...

379 | Uncertainty principles and ideal atomic decomposition
- Donoho, Huo
Citation Context ...BP can compute sparse solutions in situations where greedy algorithms fail [18]. Recent theoretical work shows that representations computed by BP are guaranteed to be sparse under certain conditions [34, 33, 46]. While applying standard linear programming methods to compute minimum ℓ1-norm signal representations is natural, such methods were developed with very different problems in mind and may not be ide...

372 | Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition
- Pati, Rezaiifar, et al.
- 1993
Citation Context ...rsity, by which the representation selected is the one that uses as few atoms as possible. Computing sparse representations is NP-hard [73, 28], and so several (heuristic) methods have been developed [67, 78, 16, 51]. These methods optimize various measures of sparsity, typically functions of the representation coefficients [61, 60], using, for example, greedy algorithms [67], gradient descent [64], linear progra...

333 | Sparse approximate solutions to linear systems
- Natarajan
- 1995
Citation Context ... among the (many) possible representations. A natural one is sparsity, by which the representation selected is the one that uses as few atoms as possible. Computing sparse representations is NP-hard [73, 28], and so several (heuristic) methods have been developed [67, 78, 16, 51]. These methods optimize various measures of sparsity, typically functions of the representation coefficients [61, 60], using, ...

298 | On the implementation of a primal-dual interior point method
- Mehrotra
- 1992
Citation Context ...ned as in [8]. The Optimization Toolbox version of the interior point method is essentially LIPSOL [99], a freely available interior point solver that implements Mehrotra’s predictor-corrector method [68, 65]. For each problem, all algorithms were run and timed. All algorithms were run under Matlab 7 on a 1.5GHz Pentium M processor running Windows XP, with 1.25GB memory. On all problems all algorithms ret...

282 | Learning overcomplete representations
- Lewicki, Sejnowski
- 2000
Citation Context ...ed [67, 78, 16, 51]. These methods optimize various measures of sparsity, typically functions of the representation coefficients [61, 60], using, for example, greedy algorithms [67], gradient descent [64], linear programming [18], and global optimization [81]. Currently, the two most popular algorithms are Matching Pursuit [67] and Basis Pursuit [17, 18]. Matching Pursuit (MP) is a greedy algorithm: a...

261 | A projection pursuit algorithm for exploratory data analysis
- Friedman, Tukey
- 1974
Citation Context ... which to regress a data set [69]. In sparse signal representation, the signal corresponds to the data set, while the atoms correspond to the variables. In fact, MP was inspired by Projection Pursuit [45, 56], in particular its use as a regression algorithm [44]. Given this connection, it should not be surprising that some algorithmic ideas in sparse signal representation correspond to earlier work in reg...

243 | Projection pursuit
- Huber
- 1985
Citation Context ... which to regress a data set [69]. In sparse signal representation, the signal corresponds to the data set, while the atoms correspond to the variables. In fact, MP was inspired by Projection Pursuit [45, 56], in particular its use as a regression algorithm [44]. Given this connection, it should not be surprising that some algorithmic ideas in sparse signal representation correspond to earlier work in reg...

236 | Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm
- Gorodnitsky, Rao
- 1997
Citation Context ...rsity, by which the representation selected is the one that uses as few atoms as possible. Computing sparse representations is NP-hard [73, 28], and so several (heuristic) methods have been developed [67, 78, 16, 51]. These methods optimize various measures of sparsity, typically functions of the representation coefficients [61, 60], using, for example, greedy algorithms [67], gradient descent [64], linear progra...

170 | On sparse representations in arbitrary redundant bases
- Fuchs
Citation Context ...BP can compute sparse solutions in situations where greedy algorithms fail [18]. Recent theoretical work shows that representations computed by BP are guaranteed to be sparse under certain conditions [34, 33, 46]. While applying standard linear programming methods to compute minimum ℓ1-norm signal representations is natural, such methods were developed with very different problems in mind and may not be ide...

152 | Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time
- Spielman, Teng
- 2001
Citation Context ... that the computational complexity of GBP is exponential in the worst case [71]. Current results on the simplex algorithm suggest that GBP is likely to be polynomial in the average [12] and smoothed [91] cases. 4.3 Implementation Issues We briefly describe two obstacles that any implementation of GBP may encounter, degeneracy and numerical instability, and our approach to handling them. 4.3.1 Dege...

147 | The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus CDROM. Retrieved January 2000 from http://www.ldc.upenn.edu/Catalog/docs/TIMIT.html
- Garofolo, Lamel, et al.
- 1993
Citation Context ...lab’s simplex algorithm.) 5.2 Running times: Speech data The speech data set consisted of 100 signal representation problems. Each problem consisted of a signal randomly drawn from the TIMIT database [48] and an overcomplete multiscale Gabor dictionary. Each signal comprised 256 samples (d = 256) and was randomly selected from the ‘train’ subset of the TIMIT database. The signals were mean centered an...

128 | Basis pursuit
- Chen
- 1995
Citation Context ...rsity, by which the representation selected is the one that uses as few atoms as possible. Computing sparse representations is NP-hard [73, 28], and so several (heuristic) methods have been developed [67, 78, 16, 51]. These methods optimize various measures of sparsity, typically functions of the representation coefficients [61, 60], using, for example, greedy algorithms [67], gradient descent [64], linear progra...

121 | Underdetermined Blind Source Separation using Sparse
- Bofill, Zibulevsky
Citation Context ...g [85]; to our knowledge, the converse is known [12] but never utilized to solve linear programming. We use this equivalence to drive GBP. A previous geometric interpretation of sparse representation [11] recognizes that in two dimensions BP computes representations with atoms that ‘enclose’ x. The interpretation provided here can be viewed as the generalization of this notion to higher dimensions. 4 T...

120 | Sparse signal reconstruction perspective for source localization with sensor arrays
- Malioutov, Cetin, et al.
- 2005
Citation Context ...resentations using an overcomplete dictionary arises in a wide range of signal processing applications [82, 31, 50], including image [7], audio [63], and video [3] compression and source localization [66]. The goal is to represent a given signal as a linear superposition of a small number of stored signals, called atoms, drawn from a larger set, called the dictionary. In traditional signal representat...

114 | Sparse nonnegative solutions of underdetermined linear equations by linear programming
- Donoho, Tanner
- 2005
Citation Context ... of representation algorithms and the quality of the resulting representations. Indeed, geometric considerations have already led to a better theoretical understanding of sparse signal representation [36, 35]. As noted, part of the motivation for the development of BP is the observation that MP and OMP can fail to find sparse, in the ℓ0-norm sense, signal representations [18], with much theoretical work...

104 | Efficient coding of natural sounds
- Lewicki
- 2002
Citation Context ... Introduction The problem of computing sparse signal representations using an overcomplete dictionary arises in a wide range of signal processing applications [82, 31, 50], including image [7], audio [63], and video [3] compression and source localization [66]. The goal is to represent a given signal as a linear superposition of a small number of stored signals, called atoms, drawn from a larger set, ...

99 | Some remarks on greedy algorithms
- DeVore, Temlyakov
- 1996
Citation Context ...reediness, MP initially selects an atom that is not part of the optimal sparse representation; as a result, many of the subsequent atoms selected by MP simply compensate for the poor initial selection [30, 18]. This shortcoming motivated the development of BP, which succeeds on these problems [18]; recent theoretical work explains this phenomenon [34, 33, 46]. These problems are also motivation for the de...

97 | On implementing Mehrotra’s predictor-corrector interior point method for linear programming
- Lustig, Marsten, et al.
- 1992
Citation Context ...ned as in [8]. The Optimization Toolbox version of the interior point method is essentially LIPSOL [99], a freely available interior point solver that implements Mehrotra’s predictor-corrector method [68, 65]. For each problem, all algorithms were run and timed. All algorithms were run under Matlab 7 on a 1.5GHz Pentium M processor running Windows XP, with 1.25GB memory. On all problems all algorithms ret...

89 | Time-frequency localization operators: a geometric phase space approach
- Daubechies
- 1988
Citation Context ...sy to implement, has a guaranteed exponential rate of convergence [67, 95, 53], and recovers relatively sparse solutions [97], particularly compared to earlier approaches such as the Method-of-Frames [26, 18]. A fundamental drawback of MP (and its variants) is its inability to compute truly sparse representations. It is possible to construct signal representation problems where, because of its greediness,...

85 | Redundant multiscale transforms and their application for morphological component analysis
- Starck, Elad, et al.
Citation Context ...toms is greater than the dimensionality of the signal space and representation is no longer unique; this enables flexibility in representation [67], ‘shiftability’ [88], and the use of multiple bases [57, 92], but it requires a criterion to se...

80 | Error correction via linear programming
- Candes, Tao
- 2005
Citation Context ...n is equivalent to the minimal ℓ0-norm solution [34, 33, 46]. These findings have made BP useful for areas beyond signal representation, including compressed sensing [32] and error correcting codes [13], thus GBP may prove useful in these domains. 7 Conclusions We have described GBP, a new algorithm for Basis Pursuit, and demonstrated that it is faster than standard linear programming methods on som...

71 | Constructing higher-dimensional convex hulls at logarithmic cost per face
- Seidel
- 1986
Citation Context ...ghtforward to calculate the corresponding coefficients.) Thus BP is equivalent to finding the facet of conv(D) which intersects x. Computing this intersection is known to reduce to linear programming [85]; to our knowledge, the converse is known [12] but never utilized to solve linear programming. We use this equivalence to drive GBP. A previous geometric interpretation of sparse representation [11] r...

70 | On the exponential convergence of matching pursuits in quasi-incoherent dictionaries, Information Theory
- Gribonval, Vandergheynst
- 2006
Citation Context ... improves the representation at each iteration. While there is no guarantee that MP computes sparse representations, MP is easily implemented, converges quickly, and has good approximation properties [67, 95, 53]. Moreover, one variant of MP, Orthogonal Matching Pursuit (OMP) [78], can be shown to compute nearly sparse representations under some conditions [97]. Basis Pursuit (BP), instead of seeking sparse r...

66 | Solving real-world linear programs: a decade and more of progress
- Bixby
Citation Context ...ll relatively. The efficient solution of linear programming problems depends in a complicated way on the problem, the method of solution and its implementation, and the available resources; see Bixby [9]. Thus the relative success of GBP compared to the linear programming methods implemented in the Matlab Optimization Toolbox is partially a function of the specific methods used and their implementati...

64 | An algorithm for convex polytopes
- Chand, Kapur
- 1970
Citation Context ...‘pushing’ a hyperplane onto the surface of the convex hull of the dictionary until it coincides with the supporting hyperplane containing the facet. This approach is inspired by gift-wrapping methods [14, 59, 94] for the convex hull problem in computational geometry [86]. To adapt gift-wrapping to the problem of finding a particular facet, we need to specify how the initial hyperplane is chosen and the direct...

63 | On the identification of the convex hull of a finite set of points in the plane
- Jarvis
- 1973
Citation Context ...‘pushing’ a hyperplane onto the surface of the convex hull of the dictionary until it coincides with the supporting hyperplane containing the facet. This approach is inspired by gift-wrapping methods [14, 59, 94] for the convex hull problem in computational geometry [86]. To adapt gift-wrapping to the problem of finding a particular facet, we need to specify how the initial hyperplane is chosen and the direct...

57 | Adaptive Nonlinear Approximations
- Davis
- 1994
Citation Context ... further investigation is the dependence of the performance of GBP (and other sparse representation algorithms) on the structure of the dictionary. For example, a dictionary optimized for use with MP [27] or OMP [39] may well have very different properties from one optimized for BP. The design of dictionaries has only recently received attention in the signal processing community [27, 39, 2] (for w...

54 | Sparse components of images and optimal atomic decomposition
- Donoho
Citation Context ...rogramming approaches to Basis Pursuit. 1 Introduction The problem of computing sparse signal representations using an overcomplete dictionary arises in a wide range of signal processing applications [82, 31, 50], including image [7], audio [63], and video [3] compression and source localization [66]. The goal is to represent a given signal as a linear superposition of a small number of stored signals, called...

52 | The Simplex Method: a probabilistic analysis. Number 1 in Algorithms and Combinatorics
- Borgwardt
- 1980
Citation Context ...ficients.) Thus BP is equivalent to finding the facet of conv(D) which intersects x. Computing this intersection is known to reduce to linear programming [85]; to our knowledge, the converse is known [12] but never utilized to solve linear programming. We use this equivalence to drive GBP. A previous geometric interpretation of sparse representation [11] recognizes that in two dimensions BP computes r...

50 | Implementing the simplex method: the initial basis
- Bixby
- 1990
Citation Context ...ble Matlab implementation [70] of the revised simplex method [24]. The Optimization Toolbox version of the simplex method is the classical simplex method [25], with the initial basis determined as in [8]. The Optimization Toolbox version of the interior point method is essentially LIPSOL [99], a freely available interior point solver that implements Mehrotra’s predictor-corrector method [68, 65]. For...

42 |
Matching pursuit filters applied to face identification
- Phillips
- 1998
Citation Context ...lied to sparse signal representation. MP is currently the most popular algorithm for computing sparse signal representations using an overcomplete dictionary, and is used in a variety of applications [7, 79, 3]. MP has also spawned several variants [41, 58, 42], including Orthogonal Matching Pursuit (OMP) [78, 29], which itself has several variants [49, 21, 83]. MP computes a signal representation by greedi...

39 | Matching pursuit of images
- Bergeaud, Mallat
- 1994
Citation Context ... Pursuit. 1 Introduction The problem of computing sparse signal representations using an overcomplete dictionary arises in a wide range of signal processing applications [82, 31, 50], including image [7], audio [63], and video [3] compression and source localization [66]. The goal is to represent a given signal as a linear superposition of a small number of stored signals, called atoms, drawn from a ...

39 | Adaptive greedy approximations. Constructive Approximation
- Davis, Mallat, et al.
- 1997
Citation Context ... among the (many) possible representations. A natural one is sparsity, by which the representation selected is the one that uses as few atoms as possible. Computing sparse representations is NP-hard [73, 28], and so several (heuristic) methods have been developed [67, 78, 16, 51]. These methods optimize various measures of sparsity, typically functions of the representation coefficients [61, 60], using, ...

39 | Optimized orthogonal matching pursuit approach
- Rebollo-Neira, Lowe
- 2002
Citation Context ...ionary, and is used in a variety of applications [7, 79, 3]. MP has also spawned several variants [41, 58, 42], including Orthogonal Matching Pursuit (OMP) [78, 29], which itself has several variants [49, 21, 83]. MP computes a signal representation by greedily constructing a sequence of approximations to the signal, x̃(0), x̃(1), x̃(2), ..., where each consecutive approximation is closer to the signal. ...

38 | Neighborliness of randomly projected simplices in high dimensions
- Donoho, Tanner
- 2005
Citation Context ... of representation algorithms and the quality of the resulting representations. Indeed, geometric considerations have already led to a better theoretical understanding of sparse signal representation [36, 35]. As noted, part of the motivation for the development of BP is the observation that MP and OMP can fail to find sparse, in the ℓ0-norm sense, signal representations [18], with much theoretical work...

35 | Generalized simplex method for minimizing a linear form under linear inequality constraints // Pacific
- Dantzig, Orden, et al.
Citation Context ...imization Toolbox 3.0 [1], and a freely available Matlab implementation [70] of the revised simplex method [24]. The Optimization Toolbox version of the simplex method is the classical simplex method [25], with the initial basis determined as in [8]. The Optimization Toolbox version of the interior point method is essentially LIPSOL [99], a freely available interior point solver that implements Mehrot...

32 | Finding the convex hull facet by facet
- Swart
- 1985
Citation Context ...‘pushing’ a hyperplane onto the surface of the convex hull of the dictionary until it coincides with the supporting hyperplane containing the facet. This approach is inspired by gift-wrapping methods [14, 59, 94] for the convex hull problem in computational geometry [86]. To adapt gift-wrapping to the problem of finding a particular facet, we need to specify how the initial hyperplane is chosen and the direct...

30 | Signal representation using adaptive normalized gaussian functions
- Qian
- 1994
Citation Context ...sures of sparsity, typically functions of the representation coefficients [61, 60], using, for example, greedy algorithms [67], gradient descent [64], linear programming [18], and global optimization [81]. Currently, the two most popular algorithms are Matching Pursuit [67] and Basis Pursuit [17, 18]. Matching Pursuit (MP) is a greedy algorithm: a signal representation is iteratively built up by selec...

29 | Video compression using matching pursuits
- Al-Shaykh, Miloslavsky, et al.
- 1999
Citation Context ...e problem of computing sparse signal representations using an overcomplete dictionary arises in a wide range of signal processing applications [82, 31, 50], including image [7], audio [63], and video [3] compression and source localization [66]. The goal is to represent a given signal as a linear superposition of a small number of stored signals, called atoms, drawn from a larger set, called the dict...