COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
"... Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constru ..."
Abstract

Cited by 73 (12 self)
 Add to MetaCart
Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
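The LP decoding step in the geometric approach is basis pursuit: minimize ‖x‖₁ subject to Φx = y. A minimal sketch with SciPy, using a random Gaussian Φ as a stand-in for the expander adjacency matrices the abstract advocates (all sizes here are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 20, 2                      # signal length, measurements, sparsity

# k-sparse signal and a random Gaussian measurement matrix (a stand-in for
# the expander adjacency matrices discussed in the abstract).
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Basis pursuit: min ||x||_1  s.t.  Phi x = y, linearized via x = u - v
# with u, v >= 0, so the objective is the sum of all 2n variables.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x)))         # max recovery error
```

With this many Gaussian measurements relative to the sparsity, the LP typically recovers x exactly up to solver tolerance.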
Explicit constructions for compressed sensing of sparse signals
In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, 2008
"... Over the recent years, a new approach for obtaining a succinct approximate representation of ndimensional ..."
Abstract

Cited by 33 (3 self)
 Add to MetaCart
Over recent years, a new approach for obtaining a succinct approximate representation of n-dimensional ...
Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models
IEEE Transactions on Signal Processing, 2009
"... A lowcomplexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the un ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
A low-complexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability mixing parameters and an approximate MMSE estimate of the parameter vector. Exact ratios of posterior probabilities serve to reveal ambiguity among multiple candidate solutions, whether caused by observation noise or by correlation among columns of the regressor matrix. Algorithm complexity is linear in the number of unknown coefficients, the number of observations, and the number of nonzero coefficients. If hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation-maximization algorithm. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability model selection.
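FBMP itself maintains posterior probabilities over candidate supports; as a much simpler greedy relative (not the paper's algorithm), a plain orthogonal matching pursuit loop for the same sparse-regression setting can be sketched as follows, with all problem sizes chosen for illustration:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to fit y."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Refit by least squares on the chosen support, update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns
x_true = np.zeros(100)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

Unlike FBMP, this returns a single point estimate with no measure of model uncertainty, which is precisely the gap the paper's Bayesian treatment addresses.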
EXPLICIT CONSTRUCTIONS OF RIP MATRICES AND RELATED PROBLEMS
"... Abstract. We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε> 0, large N and any n satisfying N 1−ε ≤ n ≤ N, we construct RIP matrices of order k 1/2+ε and constant δ −ε. This overcomes the natural barrier k = O(n 1/2) for proo ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
Abstract. We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε > 0, large N and any n satisfying N^(1−ε) ≤ n ≤ N, we construct RIP matrices of order k ≥ n^(1/2+ε) and constant δ = n^(−ε). This overcomes the natural barrier k = O(n^(1/2)) for proofs based on small coherence, which are used in all previous explicit constructions of RIP matrices. Key ingredients in our proof are new estimates for sumsets in product sets and for exponential sums with the products of sets possessing special additive structure. We also give a construction of sets of n complex numbers whose k-th moments are uniformly small for 1 ≤ k ≤ N (Turán's power sum problem), which improves upon known explicit constructions when (log N)^(1+o(1)) ≤ n ≤ (log N)^(4+o(1)). This latter construction produces elementary explicit examples of n × N matrices that satisfy RIP and whose columns constitute a new spherical code; for those problems the parameters closely match those of existing constructions in the range (log N)^(1+o(1)) ≤ n ≤ (log N)^(5/2+o(1)).
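The "small coherence" barrier mentioned above comes from the Gershgorin-type bound δ_k ≤ (k − 1)μ, where μ is the mutual coherence of the columns, combined with the Welch lower bound μ ≥ sqrt((N − n)/(n(N − 1))) ≈ n^(−1/2); together these limit coherence-based RIP certificates to order k = O(n^(1/2)). A quick numeric illustration (the matrix here is a random stand-in, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 16, 40
A = rng.standard_normal((n, N))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns

# Mutual coherence: largest inner product between distinct columns.
G = A.T @ A
mu = np.max(np.abs(G - np.eye(N)))

# Gershgorin gives delta_k <= (k - 1) * mu, so coherence alone certifies
# RIP only up to order k ~ 1/mu, while the Welch bound caps how small mu
# can ever be for a unit-norm frame.
welch = np.sqrt((N - n) / (n * (N - 1)))
print(f"mu = {mu:.3f}, Welch floor = {welch:.3f}, certifiable k <~ {1 / mu:.1f}")
```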
Sparse binary matrices of LDPC codes for compressed sensing
In Data Compression Conference (DCC), 2012
"... Abstract: Compressed sensing shows that one undetermined measurement matrix can losslessly compress sparse signals if this matrix satisfies Restricted Isometry Property (RIP). However, in practice there are still no explicit approaches to construct such matrices. Gaussian matrices and Fourier matric ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Abstract: Compressed sensing shows that an underdetermined measurement matrix can losslessly compress sparse signals if the matrix satisfies the Restricted Isometry Property (RIP). In practice, however, there are still no fully satisfactory explicit approaches to constructing such matrices. Gaussian matrices and Fourier matrices were the first shown to satisfy RIP with high probability. More recently, sparse random binary matrices with lower computational load have exhibited performance comparable to Gaussian matrices. But all of these are constructed randomly, and their orthogonality properties are unstable. In this paper, inspired by these observations, we propose to construct structured sparse binary matrices whose orthogonality is stable. The solution lies in the algorithms that construct parity-check matrices of low-density parity-check (LDPC) codes. Experiments verify that the proposed matrices significantly outperform the aforementioned three types of matrices. Significantly, for matrices of this type with a given size, the optimal matrix for compressed sensing can be approximated and constructed according to certain rules.
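The paper derives its matrices from LDPC parity-check construction rules; as a generic point of reference, a column-regular sparse binary measurement matrix of the kind such constructions produce (here filled with Gallager-style randomness, not the structured rules the paper proposes) can be generated as:

```python
import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    """m x n 0/1 measurement matrix with exactly d ones per column (d << m).

    Random placement of the ones; the paper instead fixes the placement via
    LDPC parity-check construction algorithms to stabilize orthogonality.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1
    return A

A = sparse_binary_matrix(m=32, n=128, d=4)
```

The computational advantage over Gaussian matrices comes from the d nonzeros per column: each measurement touches only a few signal entries, and all arithmetic stays in {0, 1}.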
Finding Dense Structures in Graphs and Matrices
2012
"... We will study several questions with a common theme of finding structure in graphs and matrices. In particular, in graphs we study problems related to finding dense induced subgraphs. Many of these questions have been studied extensively, such as the problem of finding large cliques in a graph, and ..."
Abstract
 Add to MetaCart
We will study several questions with a common theme of finding structure in graphs and matrices. In particular, in graphs we study problems related to finding dense induced subgraphs. Many of these questions have been studied extensively, such as the problem of finding large cliques in a graph, and more recently, the small-set expansion conjecture. Problems of this nature also arise in many contexts in practice, such as in finding communities in social networks, and in understanding properties of the web graph. We then study questions related to the spectra of matrices. Singular values of matrices are used extensively to extract structure from matrices (for instance, in principal component analysis). We will study a generalization of the maximum singular value, namely the q ↦ p norm of a matrix A (denoted ‖A‖q↦p) and the complexity of approximating this quantity. The question of approximating ‖A‖q↦p turns out to have many flavors for different values of p and q, which we will explore in detail. The technical contributions of the thesis can be summarized as follows:
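For p = q = 2 the q ↦ p norm reduces to the largest singular value, the one case that is easy to compute; a short sketch verifying this tractable case by power iteration (the general ‖A‖q↦p is the hard object the thesis studies, and is not computed here):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 7))

# ||A||_{2->2} is the top singular value; power iteration on A^T A
# converges to it for almost every starting vector.
v = rng.standard_normal(7)
for _ in range(500):
    v = A.T @ (A @ v)
    v /= np.linalg.norm(v)
approx = np.linalg.norm(A @ v)

sigma_max = np.linalg.svd(A, compute_uv=False)[0]
```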
Bulgarian Acad. Sciences
"... We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε> 0, large k and k 2−ε ≤ N ≤ k 2+ε, we construct RIP matrices of order k with n = O(k 2−ε). This overcomes the natural barrier n ≫ k 2 for proofs based on small coherence, which ..."
Abstract
 Add to MetaCart
We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε > 0, large k and k^(2−ε) ≤ N ≤ k^(2+ε), we construct RIP matrices of order k with n = O(k^(2−ε)). This overcomes the natural barrier n ≫ k^2 for proofs based on small coherence, which are used in all previous explicit constructions of RIP matrices. Key ingredients in our proof are new estimates for sumsets in product sets and for exponential sums with the products of sets possessing special additive structure. Categories and Subject Descriptors: E.4 [Coding and Information Theory]: Data compaction and compression.
On Partially Sparse Recovery
2011
"... In this paper we consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the ℓ1norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to the classical problem in Compressed Sensing ..."
Abstract
 Add to MetaCart
In this paper we consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the ℓ1-norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to the classical problem in Compressed Sensing in which the ℓ1-norm of the whole solution vector is minimized. We introduce analogues of the restricted isometry and null space properties for the recovery of partially sparse vectors and show that these new properties are implied by their original counterparts. We also show how to extend recovery under noisy measurements to the partially sparse case.
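The partially sparse ℓ1 problem reduces to a linear program in which only the sparse block is penalized and the remaining variables are left free. A minimal sketch with SciPy, where the block sizes and the planted solution are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, nA, nB = 25, 3, 40             # measurements, dense block size, sparse block size
A = rng.standard_normal((m, nA))  # columns multiplying the non-sparse part
B = rng.standard_normal((m, nB))  # columns multiplying the sparse part
xA = rng.standard_normal(nA)                   # dense block: no sparsity assumed
xB = np.zeros(nB); xB[[5, 20]] = [2.0, -1.0]   # sparse block
y = A @ xA + B @ xB

# min ||x_B||_1  s.t.  A x_A + B x_B = y, with x_B = u - v (u, v >= 0)
# and x_A left unconstrained -- only the sparse block enters the objective.
c = np.concatenate([np.zeros(nA), np.ones(2 * nB)])
res = linprog(c, A_eq=np.hstack([A, B, -B]), b_eq=y,
              bounds=[(None, None)] * nA + [(0, None)] * (2 * nB))
xA_hat = res.x[:nA]
xB_hat = res.x[nA:nA + nB] - res.x[nA + nB:]
```

Setting nA = 0 recovers ordinary basis pursuit, which is the sense in which the partially sparse problem generalizes the classical one.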
Hidden Cliques and the Certification of the Restricted Isometry Property
2012
"... Compressed sensing is a technique for finding sparse solutions to underdetermined linear systems. This technique relies on properties of the sensing matrix such as the restricted isometry property. Sensing matrices that satisfy this property with optimal parameters are mainly obtained via probabilis ..."
Abstract
 Add to MetaCart
Compressed sensing is a technique for finding sparse solutions to underdetermined linear systems. This technique relies on properties of the sensing matrix such as the restricted isometry property. Sensing matrices that satisfy this property with optimal parameters are mainly obtained via probabilistic arguments. Deciding whether a given matrix satisfies the restricted isometry property is a non-trivial computational problem. Indeed, we show in this paper that restricted isometry parameters cannot be approximated in polynomial time within any constant factor, under the assumption that the hidden clique problem is hard. Moreover, on the positive side, we propose an improvement on the brute-force enumeration algorithm for checking the restricted isometry property.
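The brute-force baseline referred to above enumerates all k-column submatrices and takes the worst deviation of their squared singular values from 1. A direct sketch of that baseline (exponential in k, consistent with the hardness result; the paper's contribution is an improvement over this, not shown here):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """Exact restricted isometry constant of order k by brute force:
    the worst deviation of squared singular values from 1 over all
    k-column submatrices (cost grows as C(N, k))."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, float(np.max(np.abs(s ** 2 - 1))))
    return delta

# Sanity check: orthonormal columns are a perfect isometry on every subset.
Q = np.eye(6)[:, :4]
print(rip_constant(Q, 2))   # 0.0
```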