Results 1–10 of 18
Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements
, 2009
Cited by 25 (9 self)
The recently proposed theory of distilled sensing establishes that adaptivity in sampling can dramatically improve the performance of sparse recovery in noisy settings. In particular, it is now known that adaptive point sampling enables the detection and/or support recovery of sparse signals that are otherwise too weak to be recovered using any method based on nonadaptive point sampling. In this paper the theory of distilled sensing is extended to highly undersampled regimes, as in compressive sensing. A simple adaptive sampling-and-refinement procedure called compressive distilled sensing is proposed, where each step of the procedure utilizes information from previous observations to focus subsequent measurements into the proper signal subspace, resulting in a significant improvement in effective measurement SNR on the signal subspace. As a result, for the same budget of sensing resources, compressive distilled sensing can result in significantly improved error bounds compared to those for traditional compressive sensing.
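The distill-and-refine loop described in this abstract can be sketched in a few lines (a toy illustration under simplifying assumptions: point samples with Gaussian noise, a positive-valued signal, and a fixed per-stage budget; all function and parameter names are invented, not the authors' exact procedure):

```python
import numpy as np

def distilled_sensing(x, num_stages=3, budget_per_stage=1.0, rng=None):
    """Toy sketch of distillation: at each stage, spend the measurement
    budget only on surviving coordinates, then discard coordinates whose
    noisy observation falls below zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = x.size
    support = np.arange(n)          # coordinates still under consideration
    for _ in range(num_stages):
        # Splitting the budget over fewer coordinates raises per-sample SNR.
        precision = budget_per_stage * n / support.size
        y = x[support] + rng.standard_normal(support.size) / np.sqrt(precision)
        support = support[y > 0]    # distillation: keep promising coordinates
    return support

# Toy signal: 5 strong positive spikes among 1000 pure-noise coordinates.
x = np.zeros(1000)
x[:5] = 4.0
est = distilled_sensing(x)
print(sorted(est))  # the spikes survive; most null coordinates are winnowed out
```

Each stage roughly halves the number of null coordinates while retaining the strong ones, so the final survivor set is far smaller than the ambient dimension.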
On the fundamental limits of adaptive sensing
, 2011
Cited by 24 (2 self)
Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy which cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy which sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures, no matter how intractable, over classical compressed acquisition/recovery schemes are, in general, minimal.
Near-Optimal Adaptive Compressed Sensing
, 2013
Cited by 18 (0 self)
This paper proposes a simple adaptive sensing and group testing algorithm for sparse signal recovery. The algorithm, termed Compressive Adaptive Sense and Search (CASS), is shown to be near-optimal in that it succeeds at the lowest possible signal-to-noise ratio (SNR) levels. Like traditional compressed sensing based on random nonadaptive design matrices, the CASS algorithm requires only k log n measurements to recover a k-sparse signal of dimension n. However, CASS succeeds at SNR levels that are a factor log n less than required by standard compressed sensing. From the point of view of constructing and implementing the sensing operation as well as computing the reconstruction, the proposed algorithm is substantially less computationally intensive than standard compressed sensing. CASS is also demonstrated to perform considerably better in practice through simulation. To the best of our knowledge, this is the first demonstration of an adaptive compressed sensing algorithm with near-optimal theoretical guarantees and excellent practical performance. This paper also shows that methods like compressed sensing, group testing, and pooling have an advantage beyond simply reducing the number of measurements or tests: adaptive versions of such methods can also improve detection and estimation performance when compared to nonadaptive direct (uncompressed) sensing.
On the Power of Adaptivity in Sparse Recovery
, 2011
Cited by 14 (4 self)
The goal of (stable) sparse recovery is to recover a k-sparse approximation x∗ of a vector x from linear measurements of x. Specifically, the goal is to recover x∗ such that ‖x − x∗‖ …
Random observations on random observations: Sparse signal acquisition and processing
 RICE UNIVERSITY
, 2010
Compressive binary search
 In Proc. IEEE Int. Symp. Inform. Theory (ISIT)
, 2012
Cited by 7 (2 self)
In this paper we consider the problem of locating a nonzero entry in a high-dimensional vector from possibly adaptive linear measurements. We consider a recursive bisection method, which we dub compressive binary search, and show that it improves on what any nonadaptive method can achieve. We also establish a non-asymptotic lower bound that applies to all methods, regardless of their computational complexity. Combined, these results show that compressive binary search is within a double-logarithmic factor of optimal performance.
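The recursive bisection idea is simple enough to sketch (a hypothetical noiseless illustration with a single positive spike; the function name and setup are invented, and the paper's analysis concerns the noisy case):

```python
import numpy as np

def compressive_binary_search(x, noise_std=0.0, rng=None):
    """Sketch of bisection with linear measurements: each step measures the
    sum of the left half of the active interval (an inner product with a 0/1
    indicator vector) and recurses into the half that holds the (positive)
    nonzero entry."""
    rng = np.random.default_rng(1) if rng is None else rng
    lo, hi = 0, x.size
    while hi - lo > 1:
        mid = (lo + hi) // 2
        a = np.zeros(x.size)
        a[lo:mid] = 1.0                     # sensing vector: left-half indicator
        y = a @ x + noise_std * rng.standard_normal()
        lo, hi = (lo, mid) if y > 0 else (mid, hi)
    return lo

x = np.zeros(1024)
x[337] = 5.0
print(compressive_binary_search(x))  # → 337
```

Locating the spike takes only log2(n) measurements here; the interesting regime in the paper is how small the spike amplitude can be when noise_std > 0.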
Sequential adaptive compressed sampling via Huffman codes
, 2008
Cited by 6 (0 self)
In this paper we introduce an information-theoretic approach and use techniques from the theory of Huffman codes to construct a sequence of binary sampling vectors to determine a sparse signal. Unlike the standard approaches, ours is adaptive in the sense that each sampling vector depends on the previous sample results. We prove that the expected total cost (number of measurements and reconstruction combined) we need for an s-sparse vector in R^n is no more than s log n + 2s.
On the Fundamental Limits of Recovering Tree Sparse Vectors from Noisy Linear Measurements
, 2013
Lower Bounds for Adaptive Sparse Recovery
Cited by 3 (2 self)
We give lower bounds for the problem of stable sparse recovery from adaptive linear measurements. In this problem, one would like to estimate a vector x ∈ R^n from m linear measurements A1x, ..., Amx. One may choose each vector Ai based on A1x, ..., A(i−1)x, and must output x̂ satisfying ‖x̂ − x‖_p ≤ (1 + ε) min_{k-sparse x′} ‖x − x′‖_p with probability at least 1 − δ > 2/3, for some p ∈ {1, 2}. For p = 2, it was recently shown that this is possible with m = O((k/ε) log log(n/k)), while nonadaptively it requires Θ((k/ε) log(n/k)). It is also known that, even adaptively, m = Ω(k/ε) is required for p = 2. For p = 1, there is a nonadaptive upper bound of Õ((1/√ε) k log n). We show:
Efficient Algorithms for Robust One-bit Compressive Sensing
Cited by 2 (2 self)
While conventional compressive sensing assumes measurements of infinite precision, one-bit compressive sensing considers an extreme setting where each measurement is quantized to just a single bit. In this paper, we study the vector recovery problem from noisy one-bit measurements, and develop two novel algorithms with formal theoretical guarantees. First, we propose a passive algorithm, which is very efficient in the sense that it only needs to solve a convex optimization problem that has a closed-form solution. Despite the apparent simplicity, our theoretical analysis reveals that the proposed algorithm can recover both exactly sparse and approximately sparse vectors. In particular, for a sparse vector with s nonzero elements, the sample complexity is O((s log n)/ε²), where n is the dimensionality and ε is the recovery error. This result improves significantly over the previously best known sample complexity in the noisy setting, which is O((s log n)/ε⁴). Second, in the case that the noise model is known, we develop an adaptive algorithm based on the principle of active learning. The key idea is to solicit the sign information only when it cannot be inferred from the current estimator. Compared with the passive algorithm, the adaptive one has a lower sample complexity if a high-precision solution is desired.
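As a loose illustration of sign-only measurements, the following sketch uses a simple correlation estimator that has a closed form (a stand-in chosen for brevity, not the algorithm proposed in the paper; all names and problem sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 200, 5000, 3

# Ground-truth unit-norm s-sparse signal.
x = np.zeros(n)
x[:s] = 1.0
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))      # Gaussian measurement matrix
y = np.sign(A @ x)                   # one-bit (sign-only) measurements

# Closed-form estimator: correlate the sign bits back through A and
# renormalize (one-bit measurements lose the magnitude, so only the
# direction of x is recoverable).
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)

print(float(x_hat @ x))  # close to 1: the direction is well recovered
```

With Gaussian A, the expectation of A.T @ y is proportional to x, which is why the correlation concentrates on the right direction as m grows.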