## Combinatorial Algorithms for Compressed Sensing (2006)

### Download Links

- [dimacs.rutgers.edu]
- [www.research.att.com]
- DBLP

### Other Repositories/Bibliography

Venue: Proc. of SIROCCO

Citations: 65 (1 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Cormode06combinatorialalgorithms,
  author    = {Graham Cormode and S. Muthukrishnan},
  title     = {Combinatorial Algorithms for Compressed Sensing},
  booktitle = {Proc. of SIROCCO},
  year      = {2006},
  pages     = {280--294}
}
```

### Abstract

In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ Rⁿ from linear measurements 〈A, ψᵢ〉 with respect to a dictionary of ψᵢ's. Recently, attention has focused on the novel direction of Compressed Sensing [1], where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary, provided the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [1], [2], [3] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst-case error for the class of such signals. Compressed Sensing has generated tremendous excitement, both because of the sophisticated underlying mathematics and because of its potential applications.

In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix, together with the corresponding reconstruction algorithm, such that with a number of measurements polynomial in k, log n, and 1/ε we can reconstruct compressible signals. This is the first known polynomial-time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + ε and the reconstruction time from poly(n) to poly(k log n).

Our second result is a randomized construction of O(k polylog(n)) measurements that works for each signal with high probability, giving per-instance approximation guarantees rather than guarantees over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance guarantees; our result improves the best number of measurements known from prior work in other areas, including learning theory [4], [5], streaming algorithms [6], [7], [8], and complexity theory [9], for this case. Our approach is combinatorial: we use two parallel sets of group tests, one to filter and the other to certify and estimate. The resulting algorithms are quite simple to implement.
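
The group-testing flavor of this approach can be illustrated with its simplest building block. The sketch below is a toy example, not the paper's construction: the classic non-adaptive "bit test" spends one linear measurement per bit position of the index, plus one measurement for the total, and recovers a 1-sparse signal exactly. Combinatorial compressed sensing schemes run many such tests in parallel over separating sets to handle k terms.

```python
import numpy as np

def bit_test_matrix(n):
    """log2(n) + 1 nonadaptive measurements: row j selects the indices
    whose j-th bit is 1; a final all-ones row reads off the coefficient."""
    b = int(np.ceil(np.log2(n)))
    M = np.array([[(i >> j) & 1 for i in range(n)] for j in range(b)])
    return np.vstack([M, np.ones(n, dtype=int)])

def decode_1sparse(y):
    """Recover (index, value) of the single nonzero entry from y = M @ x."""
    value = y[-1]                     # the all-ones test gives the value
    bits = np.abs(y[:-1]) > 1e-9      # test j fires iff bit j of the index is set
    index = int(sum(1 << j for j, hit in enumerate(bits) if hit))
    return index, value

n = 256
x = np.zeros(n)
x[137] = 4.2                          # unknown 1-sparse signal
index, value = decode_1sparse(bit_test_matrix(n) @ x)
# recovers index 137 with value 4.2 from only 9 measurements
```

The filter/certify split in the abstract generalizes this: one family of tests isolates candidate heavy coefficients, while a second family certifies them and estimates their values.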

### Citations

1718 | Compressed sensing
- Donoho
Citation Context: ...the fundamental problem is to reconstruct a signal A ∈ Rⁿ from linear measurements 〈A, ψᵢ〉 with respect to a dictionary of ψᵢ's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients...

833 | Practical signal recovery from random projections
- Candès, Romberg
- 2005
Citation Context: ...there are several other outstanding questions. For example, the time to obtain a representation from the measurements is significantly superlinear in n (it typically involves solving a Linear Program [1], [2], [3]). For large signals, this cost is overly burdensome. Since we make a small number of measurements, it is much better to find algorithms with running time polynomial in the number of measurements...

742 | Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
- 2006
Citation Context: ...g [12], wireless communication [3], and generated implementations [13]; found mathematical applications to coding and information theory [14]; and extended the results to noisy and distributed settings [15]. The interest arises for two main reasons. First, there is deep mathematics underlying the results, with interpretations in terms of high-dimensional geometry [3], uncertainty principles [2], and lin...

628 | Constructive Approximation
- DeVore, Lorentz
- 1993
Citation Context: ...(A)ψᵢ by the orthonormality of Ψ. From now on (for convenience of reference only), we reorder the vectors in the dictionary so |θ₁| ≥ |θ₂| ≥ … ≥ |θₙ|. In the area of sparse approximation theory [10], one seeks representations of A that are sparse, i.e., use few coefficients. Formally, R = Σ_{i∈K} θᵢψᵢ, for some set K of coefficients, |K| = k ≪ n. Clearly, R(A) cannot exactly equal the signal A for...
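
The representation R = Σ_{i∈K} θᵢψᵢ in this snippet is easy to make concrete: for an orthonormal dictionary, the optimal k-term choice of K keeps the k largest-magnitude coefficients, and by Parseval the error ‖A − R‖₂ equals the ℓ₂ norm of the dropped coefficients. A minimal numpy sketch, assuming a random orthogonal matrix stands in for the dictionary Ψ:

```python
import numpy as np

def best_k_term(A, Psi, k):
    """Best k-term representation of A over an orthonormal dictionary
    whose vectors psi_i are the rows of Psi."""
    theta = Psi @ A                     # coefficients <A, psi_i>
    K = np.argsort(-np.abs(theta))[:k]  # indices of the k largest |theta_i|
    R = Psi[K].T @ theta[K]             # R = sum_{i in K} theta_i * psi_i
    return R, np.linalg.norm(A - R)     # error = l2 norm of dropped coefficients

n, k = 8, 3
Psi = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))[0].T
theta_true = np.array([5.0, -4.0, 3.0, 0.1, 0.1, 0.1, 0.1, 0.1])
A = Psi.T @ theta_true                  # signal with known coefficients
R, err = best_k_term(A, Psi, k)
# err equals the l2 norm of the five dropped 0.1 coefficients: sqrt(5 * 0.1**2)
```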

185 | Learning decision trees using the Fourier spectrum
- Kushilevitz, Mansour
- 1993
Citation Context: ...rom the regime in earlier papers on Compressed Sensing where a fixed T works for all p-compressible signals, many results in the Computer Science literature apply, in particular, from learning theory [4], [5], streaming algorithms [7], [6] and complexity theory [9]. Some of these results do not completely translate to our scenario: the learning theory approaches assume that the signal can be probed i...

173 | What’s hot and what’s not: Tracking most frequent items dynamically
- Cormode, Muthukrishnan
Citation Context: ...er-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [4], [5], Streaming algorithms [6], [7], [8] and Complexity Theory [9] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algo...

168 | Signal reconstruction from noisy random projections
- Haupt, Nowak
Citation Context: ...lt improves [18] in the term polylog(‖A‖₂), which governs the number of iterations in [18]. Finally, we extend to the case when the measurements are noisy (an important practical concern articulated in [19]) and obtain novel results that give per-instance approximation results. Formally, we show: Theorem 4: We can construct a dictionary Ψ′ = TΨ of O(ck log³ n / ε²) vectors, in time O(cn² log n). For an...
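
Per-instance estimates that survive noisy measurements are typically obtained, in the streaming literature this entry points to, by estimating each coefficient as a median over independent randomized tests, so that a few contaminated tests cannot bias the result. The sketch below is a standard count-median sketch, shown as a representative of that idea rather than as Theorem 4's actual measurement set; the parameters d and w are illustrative:

```python
import numpy as np

def count_median_sketch(x, d, w, rng):
    """d independent rows; each row hashes indices into w buckets
    with random signs. Returns (hashes, signs, bucket counts)."""
    n = len(x)
    h = rng.integers(0, w, size=(d, n))    # bucket of index i in row j
    s = rng.choice([-1, 1], size=(d, n))   # random sign of index i in row j
    counts = np.zeros((d, w))
    for j in range(d):
        for i in range(n):
            counts[j, h[j, i]] += s[j, i] * x[i]
    return h, s, counts

def estimate(i, h, s, counts):
    """Median over rows of the signed bucket contents: robust estimate of x[i]."""
    d = counts.shape[0]
    return float(np.median([s[j, i] * counts[j, h[j, i]] for j in range(d)]))

rng = np.random.default_rng(1)
n = 100
x = np.full(n, 0.01)                       # small "noise" tail, l1 mass 0.99
x[17] = 10.0                               # one heavy coefficient
h, s, counts = count_median_sketch(x, d=9, w=64, rng=rng)
est = estimate(17, h, s, counts)
# each row's error is at most the l1 mass colliding with index 17 (< 1.0),
# so the median estimate lands within 1.0 of the true value 10.0
```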

147 | Signal recovery from partial information via orthogonal matching pursuit
- Tropp, Gilbert
Citation Context: ...for signals that have ‖R^k_opt − A‖₂ = 0, i.e., there are at most k non-zero coefficients. This “k-support” case is a simplification of realistic signals, but has attracted interest in prior work (see [24] and references therein). The same approach outlined above, of using a combination of measurements based on k′-strongly separating sets and k″-separating sets, with an appropriate setting of k′...

115 | Nonrandom binary superimposed codes
- Kautz, Singleton
- 1964
Citation Context: ...er. Explicit constructions of both collections of sets are known for arbitrary k and n. Strongly selective sets are used heavily in group testing [10], and can be constructed using superimposed codes [19] with m = O((k log n)²). Indyk provided explicit constructions of k-selective collections of size l = O(k log^{O(1)} n), where the power depends on the degree bounds of constructions of disperser grap...
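
The m = O((k log n)²) bound in this snippet can be matched by a simple number-theoretic construction (a textbook Chinese-remainder-style family, shown here for illustration; it is not the Kautz–Singleton code itself): take the sets S_{p,r} = {i < n : i ≡ r (mod p)} over the first t > (k−1)·log₂ n primes. Since two distinct indices below n can agree modulo at most log₂ n of these primes, for any k-subset T and any i ∈ T some set isolates i from the rest of T.

```python
import math
from itertools import combinations

def primes_first(t):
    """The first t primes, by a simple sieve over a loose upper bound."""
    bound = max(30, int(2 * t * (math.log(t) + 2)))
    sieve = bytearray([1]) * (bound + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(bound + 1) if sieve[p]][:t]

def strongly_selective_family(n, k):
    """k-strongly selective family over {0..n-1}; its size is the sum of
    the first t primes, i.e. O((k log n)^2) sets."""
    t = (k - 1) * math.ceil(math.log2(n)) + 1
    return [{i for i in range(n) if i % p == r}
            for p in primes_first(t) for r in range(p)]

def is_strongly_selective(family, n, k):
    """Brute-force check: every i in every k-subset T is isolated by some set."""
    return all(any(S & set(T) == {i} for S in family)
               for T in combinations(range(n), k) for i in T)

n, k = 20, 3
family = strongly_selective_family(n, k)
# 160 sets (sum of the first 11 primes), certified by is_strongly_selective
```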

105 | Fast, small-space algorithms for approximate histogram maintenance
- Gilbert, Guha, et al.
- 2002
Citation Context: ...ide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algo...

87 | Geometric approach to error correcting codes and reconstruction of signals
- Rudelson, Vershynin
- 2005
Citation Context: ...measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [1], [2], [3] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the c...

79 | Near-optimal sparse Fourier representations via sampling
- Gilbert, Guha, et al.
- 2002
Citation Context: ...uch per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [4], [5], Streaming algorithms [6], [7], [8] and Complexity Theory [9] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting...

75 | Selective families, superimposed codes, and broadcasting on unknown radio networks
- Clementi, Monti, et al.
- 2001
Citation Context: ...tructions of k-selective collections of size l = O(k log^{O(1)} n), where the power depends on the degree bounds of constructions of disperser graphs [16]. Probabilistic constructions are also possible [5], of near-optimal size O(k log(n/k)), which yield a more expensive Las Vegas-style algorithm for constructing such a set in O(nk poly(k log n)): after randomly constructing a collection of sets, verify...
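
The Las Vegas recipe in this snippet (randomly construct, then verify) can be sketched directly: sample sets that include each index with probability 1/k, exhaustively verify k-selectivity, meaning every subset T with |T| ≤ k meets some set in exactly one element, and resample on failure. The constant in the family size below is illustrative, not the paper's:

```python
import math
import random
from itertools import combinations

def is_k_selective(family, n, k):
    """Every T with |T| <= k must satisfy |S & T| == 1 for some S in the family."""
    return all(any(len(S & set(T)) == 1 for S in family)
               for size in range(1, k + 1)
               for T in combinations(range(n), size))

def random_k_selective(n, k, seed=0):
    """Las Vegas construction: draw an O(k log n)-size random candidate family,
    certify it by brute force, and resample until the property actually holds."""
    rng = random.Random(seed)
    l = 4 * k * math.ceil(math.log(n)) + 4   # heuristic size, c * k * log n
    while True:
        family = [{i for i in range(n) if rng.random() < 1 / k}
                  for _ in range(l)]
        if is_k_selective(family, n, k):      # certify before returning
            return family

n, k = 15, 3
family = random_k_selective(n, k)
# the returned family is k-selective by construction (it was verified)
```

The verification step is what makes this Las Vegas rather than Monte Carlo: the output is always correct, and only the running time is random.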

74 | Error correction via linear programming
- Candès, Rudelson, et al.
- 2005
Citation Context: ...11], [2], [3]; found interesting applications including MR imaging [12], wireless communication [3], and generated implementations [13]; found mathematical applications to coding and information theory [14]; and extended the results to noisy and distributed settings [15]. The interest arises for two main reasons. First, there is deep mathematics underlying the results, with interpretations in terms of h...

53 | Explicit constructions of selectors and related combinatorial structures, with applications
- Indyk
- 2002
Citation Context: ...th m = O((k log n)²). Indyk provided explicit constructions of k-selective collections of size l = O(k log^{O(1)} n), where the power depends on the degree bounds of constructions of disperser graphs [16]. Probabilistic constructions are also possible [5], of near-optimal size O(k log(n/k)), which yield a more expensive Las Vegas-style algorithm for constructing such a set in O(nk poly(k log n)): after...

49 | Improved time bounds for near-optimal sparse Fourier representation via sampling
- Gilbert, Muthukrishnan, et al.
- 2005
Citation Context: ...ents (this is similar to adaptive group testing). Other results can be thought of as producing a T with O(k^{2+O(1)} polylog(n)) rows, which is improved by our result here. An exception is the result in [18], which works by sampling (that is, finding 〈A, vᵢ〉 where vᵢ,ⱼ = 1 for some j and is 0 elsewhere) for the Fourier basis, but can be thought of as solving our problem using O(k polylog(1/ε, log n, log ‖...

38 | Combinatorial Group Testing and its Applications
- Du, Hwang
- 2000
Citation Context: ...than k-selectivity, and so the former implies the latter. Explicit constructions of both collections of sets are known for arbitrary k and n. Strongly selective sets are used heavily in group testing [10], and can be constructed using superimposed codes [19] with m = O((k log n)²). Indyk provided explicit constructions of k-selective collections of size l = O(k log^{O(1)} n), where the power depends o...

26 | Towards an algorithmic theory of compressed sensing
- Cormode, Muthukrishnan
- 2005
Citation Context: ...s and provide a tighter analysis of the error in terms of ‖R^k_opt − A‖₂ rather than in terms of ‖A‖₂, as is more typical. Note: preliminary versions of these results have appeared as technical reports [20], which are superseded by the results here. VI. CONCLUDING REMARKS: We present a simple combinatorial approach of two sets of group tests with different separation properties that yields the first know...

8 | Personal communication
- Indyk
Citation Context: ...the case of k-sparse signals (which have no more than k nonzero coefficients), Indyk recently developed a set of measurements, near linear in k in number (but with other superlogarithmic factors in n) [21]. Another outstanding question is to tease apart other properties of Compressed Sensing results, such as their ability to measure in one basis and reconstruct in another, and study their algorithmics. A...

5 | Compressed sensing (http://www-stat.stanford.edu)
- Donoho
Citation Context: ...the fundamental problem is to reconstruct a signal A ∈ Rⁿ from linear measurements 〈A, ψᵢ〉 with respect to a dictionary of ψᵢ's. Recently, there is focus on the novel direction of Compressed Sensing [1] where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients...

3 | Proving hard-core predicates by list decoding
- Akavia, Goldwasser, et al.
- 2003
Citation Context: ...guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [4], [5], Streaming algorithms [6], [7], [8] and Complexity Theory [9] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to...

2 | Randomized interpolation and approximation of sparse polynomials
- Mansour
- 1995
Citation Context: ...ssed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [4], [5], Streaming algorithms [6], [7], [8] and Complexity Theory [9] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to cert...