Results 1–10 of 10
COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
Cited by 73 (12 self)
Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
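The unifying objects in the abstract above, adjacency matrices of unbalanced bipartite expanders, are easy to sample: a random d-left-regular bipartite graph is, with high probability, a good unbalanced expander, and its binary adjacency matrix can serve as the measurement matrix. The following is a minimal NumPy sketch of that sampling step (the function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def random_expander_matrix(n, m, d, seed=0):
    """Binary m x n adjacency matrix of a random d-left-regular
    bipartite graph: each of the n left vertices (signal coordinates)
    connects to d distinct right vertices (measurements). Such random
    graphs are good unbalanced expanders with high probability."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)  # d distinct neighbors
        A[rows, j] = 1
    return A

A = random_expander_matrix(n=1000, m=200, d=8)
```

Every column has exactly d ones, so the matrix is ultra-sparse: computing Ax touches only d entries per nonzero coordinate of x.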
Sparse recovery using sparse random matrices
, 2008
Cited by 41 (4 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x* such that Ax = Ax*, and ‖x*‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
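The ℓ1-minimization decoding step described in this abstract reduces to a linear program via the standard split x = u − v with u, v ≥ 0. A minimal sketch using SciPy's linprog (the helper name l1_recover and the toy problem sizes are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Solve min ||x||_1 subject to Ax = b via the standard LP
    reformulation: x = u - v, u >= 0, v >= 0, minimize sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])  # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy instance: sparse binary A (d ones per column), k-sparse signal x.
rng = np.random.default_rng(1)
n, m, d, k = 60, 30, 5, 3
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_hat = l1_recover(A, A @ x)
```

By construction x_hat is feasible (A x_hat = Ax) and has ℓ1 norm no larger than ‖x‖1; whether it equals x exactly depends on A satisfying the recovery conditions studied in the paper.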
Sparse Recovery Using Sparse Matrices
Cited by 26 (7 self)
Abstract—We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices have several attractive properties: they support algorithms with low computational complexity, and make it easy to perform incremental updates to signals. We discuss applications to several areas, including compressive sensing, data stream computing, and group testing.
Sparse recovery using sparse matrices
, 2008
Cited by 7 (1 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding x* such that Ax = Ax*, and ‖x*‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
Efficient Sketches for the Set Query Problem
Cited by 5 (2 self)
We develop an algorithm for estimating the values of a vector x ∈ R^n over a support S of size k from a randomized sparse binary linear sketch Ax of size O(k). Given Ax and S, we can recover x′ with ‖x′ − x_S‖2 ≤ ɛ‖x − x_S‖2.
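For intuition about this kind of set-query estimation, here is a classical Count-Sketch estimator. It is not the paper's binary-sketch algorithm (which uses a different construction and achieves stronger guarantees), but it shows how per-coordinate values over a known support S can be read off a small randomized linear sketch; all function names are made up for illustration:

```python
import numpy as np

def count_sketch(x, reps, buckets, seed=0):
    """Sparse linear sketch of x: `reps` hash tables; each coordinate
    lands in one bucket per table, multiplied by a random +/-1 sign."""
    rng = np.random.default_rng(seed)
    n = len(x)
    h = rng.integers(0, buckets, size=(reps, n))  # bucket choice per table
    s = rng.choice([-1, 1], size=(reps, n))       # random signs
    sk = np.zeros((reps, buckets))
    for r in range(reps):
        np.add.at(sk[r], h[r], s[r] * x)          # accumulate signed values
    return sk, h, s

def set_query(sk, h, s, S):
    """Estimate x_p for each p in the known support S as the median,
    over tables, of sign * bucket value."""
    reps = sk.shape[0]
    return {p: np.median(sk[np.arange(reps), h[:, p]] * s[:, p]) for p in S}

x = np.zeros(100)
x[7] = 3.5
sk, h, s = count_sketch(x, reps=5, buckets=64)
est = set_query(sk, h, s, {7})
```

With a single nonzero coordinate there are no collisions, so the estimate is exact; for k nonzeros, collisions are rare when each table has Θ(k) buckets, and the median damps the ones that occur.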
Compressive Sensing with Local Geometric Features
Cited by 1 (1 self)
We propose a framework for compressive sensing of images with local geometric features. Specifically, let x ∈ R^N be an N-pixel image, where each pixel p has value x_p. The image is acquired by computing the measurement vector Ax, where A is an m × N measurement matrix for some m ≪ N. The goal is then to design the matrix A and recovery algorithm which, given Ax, returns an approximation to x. In this paper we investigate this problem for the case where x consists of a small number (k) of “local geometric objects” (e.g., stars in an image of a sky), plus noise. We construct a matrix A and recovery algorithm with the following features: (i) the number of measurements m is O(k log_k N), which undercuts currently known schemes that achieve m = O(k log(N/k)); (ii) the matrix A is ultra-sparse, which is important for hardware considerations; (iii) the recovery algorithm is fast and runs in time sublinear in N. We also present a comprehensive study of an application of our algorithm to a problem in satellite navigation.
Sparse Recovery with Partial Support Knowledge
Abstract. The goal of sparse recovery is to recover the (approximately) best k-sparse approximation x̂ of an n-dimensional vector x from linear measurements Ax of x. We consider a variant of the problem which takes into account partial knowledge about the signal. In particular, we focus on the scenario where, after the measurements are taken, we are given a set S of size s that is supposed to contain most of the “large” coefficients of x. The goal is then to find x̂ such that

    ‖x − x̂‖_p ≤ C · min_{k-sparse x′ : supp(x′) ⊆ S} ‖x − x′‖_q.   (1)

We refer to this formulation as the sparse recovery with partial support knowledge problem (SRPSK). We show that SRPSK can be solved, up to an approximation factor of C = 1 + ɛ, using O((k/ɛ) log(s/k)) measurements, for p = q = 2. Moreover, this bound is tight as long as s = O(ɛn / log(n/ɛ)). This completely resolves the asymptotic measurement complexity of the problem except for a very small range of the parameter s. To the best of our knowledge, this is the first variant of (1 + ɛ)-approximate sparse recovery for which the asymptotic measurement complexity has been determined.
Compressive sensing using locality-preserving matrices
, 2012
Compressive sensing is a method for acquiring high-dimensional signals (e.g., images) using a small number of linear measurements. Consider an n-pixel image x ∈ R^n, where each pixel p has value x_p. The image is acquired by computing the measurement vector Ax, where A is an m × n measurement matrix, for some m ≪ n. The goal is to design the matrix A and the recovery algorithm which, given Ax, returns an approximation to x. It is known that m = O(k log(n/k)) measurements suffice to recover the k-sparse approximation of x. Unfortunately, this result uses matrices A that are random. Such matrices are difficult to implement in physical devices. In this paper we propose compressive sensing schemes that use matrices A that achieve the near-optimal bound of m = O(k log n), while being highly “local”. We also show impossibility results for stronger notions of locality.
Certified by.............................................................................
, 2009
The general problem of obtaining a useful succinct representation (sketch) of some piece of data is ubiquitous; it has applications in signal acquisition, data compression, sublinear space algorithms, etc. In this thesis we focus on sparse recovery, where the goal is to recover sparse vectors exactly, and to approximately recover nearly-sparse vectors. More precisely, from the short representation of a vector x, we want to recover a vector x* such that the approximation error ‖x − x*‖ is comparable to the “tail” min_{x′} ‖x − x′‖, where x′ ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years, notably in areas such as data stream computing and compressed sensing. We consider two types of sketches: linear and nonlinear. For the linear sketching case, where the compressed representation of x is Ax for a measurement matrix A, we introduce a class of binary sparse matrices as valid measurement matrices. We show that they can be used with the popular geometric “ℓ1 minimization” recovery procedure. We also present two iterative recovery algorithms, Sparse Matching Pursuit and Sequential Sparse Matching Pursuit, that can be used with the same matrices. Thanks to the sparsity of the matrices, the resulting algorithms are …