COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
Cited by 80 (12 self)

Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement-matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
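The "geometric" decoding step the abstract describes can be illustrated with a tiny basis-pursuit sketch. This is not the paper's expander-based scheme: it uses a random Gaussian Φ (an assumption for illustration) and SciPy's generic LP solver, purely to show how ℓ1 minimization decodes a sparse x from y = Φx.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 40, 15                 # toy sizes: signal length, number of measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[3, 17]] = [1.5, -2.0]      # 2-sparse signal
y = Phi @ x

# Basis pursuit: min ||z||_1 subject to Phi z = y, rewritten as an LP over
# the stacked variable (z, t): min sum(t), with -t <= z <= t and Phi z = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([Phi, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]),      #  z - t <= 0
                  np.hstack([-I, -I])])    # -z - t <= 0
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
z = res.x[:n]
# with these sizes, basis pursuit typically recovers x exactly
```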
Explicit constructions for compressed sensing of sparse signals
In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, 2008

Cited by 35 (3 self)

Over the recent years, a new approach for obtaining a succinct approximate representation of n-dimensional ...
Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models
IEEE Transactions on Signal Processing, 2009

Cited by 18 (2 self)

A low-complexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability mixing parameters and an approximate MMSE estimate of the parameter vector. Exact ratios of posterior probabilities serve to reveal ambiguity among multiple candidate solutions caused by observation noise or correlation among columns of the regressor matrix. Algorithm complexity is linear in the number of unknown coefficients, the number of observations, and the number of nonzero coefficients. If hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation-maximization algorithm. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability model selection.
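For contrast with the Bayesian procedure above, a minimal greedy baseline for the same sparse linear model is easy to sketch. This is Orthogonal Matching Pursuit, not the authors' Fast Bayesian Matching Pursuit; all sizes and data below are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A by
    correlation with the residual, refitting by least squares each step.
    A simple greedy baseline, not the paper's Bayesian method."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
m, n, k = 20, 50, 3           # fewer observations (m) than unknowns (n)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
x = np.zeros(n)
x[[5, 12, 30]] = [2.0, -1.0, 0.5]           # sparse parameter vector
x_hat = omp(A, A @ x, k)
```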
EXPLICIT CONSTRUCTIONS OF RIP MATRICES AND RELATED PROBLEMS
Cited by 4 (1 self)

Abstract. We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε > 0, large N, and any n satisfying N^(1−ε) ≤ n ≤ N, we construct RIP matrices of order k^(1/2+ε) and constant δ = k^(−ε). This overcomes the natural barrier k = O(n^(1/2)) for proofs based on small coherence, which are used in all previous explicit constructions of RIP matrices. Key ingredients in our proof are new estimates for sumsets in product sets and for exponential sums with products of sets possessing special additive structure. We also give a construction of sets of n complex numbers whose k-th moments are uniformly small for 1 ≤ k ≤ N (Turán's power sum problem), which improves upon known explicit constructions when (log N)^(1+o(1)) ≤ n ≤ (log N)^(4+o(1)). This latter construction produces elementary explicit examples of n × N matrices that satisfy RIP and whose columns constitute a new spherical code; for those problems the parameters closely match those of existing constructions in the range (log N)^(1+o(1)) ≤ n ≤ (log N)^(5/2+o(1)).
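The "small coherence" barrier the abstract refers to can be made concrete numerically. The sketch below (illustrative sizes, random Gaussian matrix as a stand-in for an explicit construction) computes the mutual coherence μ of a column-normalized matrix; coherence-based arguments certify RIP only up to order roughly 1/μ, and the Welch bound forces μ to be at least about 1/√n, which is why that route stalls at k = O(√n).

```python
import numpy as np

def coherence(A):
    """Mutual coherence: the largest |<a_i, a_j>| over distinct
    unit-normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(2)
n, N = 64, 256
A = rng.standard_normal((n, N))
mu = coherence(A)
# Coherence certifies RIP of order about k < (1 + 1/mu) / 2; since the
# Welch bound gives mu >= sqrt((N - n) / (n * (N - 1))), this cannot
# exceed O(sqrt(n)) -- the barrier the construction above overcomes.
k_cert = 0.5 * (1 + 1 / mu)
```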
Sparse binary matrices of LDPC codes for compressed sensing
In Proceedings of the Data Compression Conference (DCC), 2012

Cited by 1 (1 self)

Abstract: Compressed sensing shows that an underdetermined measurement matrix can losslessly compress sparse signals if the matrix satisfies the Restricted Isometry Property (RIP). However, in practice there are still no explicit approaches to constructing such matrices. Gaussian matrices and Fourier matrices were the first proved to satisfy RIP with high probability. Recently, sparse random binary matrices with lower computational load have shown performance comparable to Gaussian matrices. But all of these are constructed randomly and are unstable in orthogonality. In this paper, inspired by these observations, we propose to construct structured sparse binary matrices that are stable in orthogonality. The solution lies in the algorithms that construct parity-check matrices of low-density parity-check (LDPC) codes. Experiments verify that the proposed matrices significantly outperform the aforementioned three types of matrices. Significantly, for matrices of a given size, the optimal matrix for compressed sensing can be approximated and constructed according to some rules.
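As a concrete example of a structured sparse binary parity-check matrix of the kind the abstract points to, the sketch below builds an array (quasi-cyclic) LDPC matrix from circulant permutation blocks. This is a standard LDPC construction chosen for illustration; it is not necessarily the specific construction rules proposed in the paper.

```python
import numpy as np

def array_ldpc(p, J, L):
    """Parity-check matrix of an array LDPC code: a J x L grid of p x p
    circulant permutation blocks, where block (i, j) is the identity
    cyclically shifted by i*j mod p. p should be prime; every column has
    weight J and every row has weight L, so the matrix is deterministic,
    sparse, and binary."""
    def shift(s):
        return np.roll(np.eye(p, dtype=int), s % p, axis=1)
    return np.block([[shift(i * j) for j in range(L)] for i in range(J)])

H = array_ldpc(7, 3, 5)   # 21 x 35 binary matrix, column weight 3, row weight 5
```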
Bulgarian Acad. Sciences
"... We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε> 0, large k and k 2−ε ≤ N ≤ k 2+ε, we construct RIP matrices of order k with n = O(k 2−ε). This overcomes the natural barrier n ≫ k 2 for proofs based on small coherence, wh ..."
We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε > 0, large k, and k^(2−ε) ≤ N ≤ k^(2+ε), we construct RIP matrices of order k with n = O(k^(2−ε)). This overcomes the natural barrier n ≫ k^2 for proofs based on small coherence, which are used in all previous explicit constructions of RIP matrices. Key ingredients in our proof are new estimates for sumsets in product sets and for exponential sums with products of sets possessing special additive structure. Categories and Subject Descriptors: E.4 [Coding and Information Theory]: Data compaction and compression.
Compressive Sensing for Sparse Approximations: Constructions, Algorithms, and Analysis
, 2010
Hidden Cliques and the Certification of the Restricted Isometry Property
, 2012
"... Compressed sensing is a technique for finding sparse solutions to underdetermined linear systems. This technique relies on properties of the sensing matrix such as the restricted isometry property. Sensing matrices that satisfy this property with optimal parameters are mainly obtained via probabilis ..."
Compressed sensing is a technique for finding sparse solutions to underdetermined linear systems. This technique relies on properties of the sensing matrix such as the restricted isometry property. Sensing matrices that satisfy this property with optimal parameters are mainly obtained via probabilistic arguments. Deciding whether a given matrix satisfies the restricted isometry property is a non-trivial computational problem. Indeed, we show in this paper that restricted isometry parameters cannot be approximated in polynomial time within any constant factor under the assumption that the hidden clique problem is hard. Moreover, on the positive side, we propose an improvement on the brute-force enumeration algorithm for checking the restricted isometry property.
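The brute-force enumeration that the abstract improves upon can be sketched directly: enumerate every support of size k and bound the extreme singular values of the corresponding submatrix. The sizes below are toy assumptions; the cost grows as C(N, k), which is exactly why certification is hard at scale.

```python
import itertools
import numpy as np

def rip_constant(A, k):
    """Brute-force restricted isometry constant of order k: the smallest
    delta with (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2 for
    all k-sparse x. Enumerates all C(N, k) supports, so only viable for
    tiny N."""
    delta = 0.0
    for S in itertools.combinations(range(A.shape[1]), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(3)
n, N = 12, 16
A = rng.standard_normal((n, N)) / np.sqrt(n)
d2 = rip_constant(A, 2)       # RIP constant of order 2 for this A
```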
Advances in sparse . . .
, 2009
"... The general problem of obtaining a useful succinct representation (sketch) of some piece of data is ubiquitous; it has applications in signal acquisition, data compression, sublinear space algorithms, etc. In this thesis we focus on sparse recovery, where the goal is to recover sparse vectors exact ..."
The general problem of obtaining a useful succinct representation (sketch) of some piece of data is ubiquitous; it has applications in signal acquisition, data compression, sublinear-space algorithms, etc. In this thesis we focus on sparse recovery, where the goal is to recover sparse vectors exactly, and to approximately recover nearly sparse vectors. More precisely, from the short representation of a vector x, we want to recover a vector x* such that the approximation error ‖x − x*‖ is comparable to the "tail" min_x′ ‖x − x′‖, where x′ ranges over all vectors with at most k terms. The sparse recovery problem has been the subject of extensive research over the last few years, notably in areas such as data stream computing and compressed sensing. We consider two types of sketches: linear and nonlinear. For the linear sketching case, where the compressed representation of x is Ax for a measurement matrix A, we introduce a class of binary sparse matrices as valid measurement matrices. We show that they can be used with the popular geometric "ℓ1 minimization" recovery procedure. We also present two iterative recovery algorithms, Sparse Matching Pursuit and Sequential Sparse Matching Pursuit, that can be used with the same matrices. Thanks to the sparsity of the matrices, the resulting algorithms are ...
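The "tail" quantity in the abstract above has a simple closed form: the minimizer over k-sparse x′ just keeps the k largest-magnitude entries of x. A minimal sketch (example vector is an illustrative assumption):

```python
import numpy as np

def tail_error(x, k, p=2):
    """||x - x_k||_p, where x_k keeps the k largest-magnitude entries of x.
    This equals min over k-sparse x' of ||x - x'||_p, the 'tail' term that
    sparse-recovery guarantees are stated against."""
    idx = np.argsort(np.abs(x))[::-1][:k]   # indices of k largest entries
    xk = np.zeros_like(x)
    xk[idx] = x[idx]
    return np.linalg.norm(x - xk, p)

x = np.array([5.0, -0.1, 3.0, 0.2, 0.0, -4.0])
err = tail_error(x, 3)   # residual after keeping 5.0, -4.0, 3.0
```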