Results 1–10 of 17
Distilled sensing: Adaptive sampling for sparse detection and estimation
 IEEE Trans. Inform. Theory
, 2011
"... Adaptive sampling results in dramatic improvements in the recovery of sparse signals in white Gaussian noise. A sequential adaptive samplingandrefinement procedure called distilled sensing (DS) is proposed and analyzed. DS is a form of multistage experimental design and testing. Because of the ad ..."
Abstract

Cited by 46 (10 self)
 Add to MetaCart
(Show Context)
Adaptive sampling results in dramatic improvements in the recovery of sparse signals in white Gaussian noise. A sequential adaptive sampling-and-refinement procedure called distilled sensing (DS) is proposed and analyzed. DS is a form of multistage experimental design and testing. Because of the adaptive nature of the data collection, DS can detect and localize far weaker signals than is possible with nonadaptive measurements. In particular, reliable detection and localization (support estimation) using nonadaptive samples is possible only if the signal amplitudes grow logarithmically with the problem dimension. Here it is shown that using adaptive sampling, reliable detection is possible provided the amplitude exceeds a constant, and localization is possible when the amplitude exceeds any arbitrarily slowly growing function of the dimension.
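The distillation loop described above can be sketched in a few lines. This is a toy illustration, not the paper's exact procedure: the equal budget split per stage, the stage count, and all function names are assumptions, and the paper's analysis uses a more careful allocation of sensing energy across stages.

```python
import numpy as np

def distilled_sensing(x, n_stages=4, budget_per_stage=1.0, rng=None):
    """One distillation pass: at each stage, spread the stage budget over the
    surviving coordinates, observe them in noise, and keep only coordinates
    whose observation is positive. (Toy version: equal budget per stage.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    active = np.arange(x.size)
    for _ in range(n_stages):
        # Per-coordinate precision grows as the active set shrinks.
        precision = budget_per_stage * x.size / active.size
        y = x[active] + rng.standard_normal(active.size) / np.sqrt(precision)
        active = active[y > 0]  # distillation: discard negative observations
        if active.size == 0:
            break
    return active

rng = np.random.default_rng(42)
n, amplitude = 10_000, 3.0
x = np.zeros(n)
support = rng.choice(n, size=20, replace=False)
x[support] = amplitude  # sparse signal with positive amplitudes
estimate = distilled_sensing(x, rng=rng)
```

Each stage discards roughly half of the null coordinates (a zero-mean Gaussian observation is negative with probability 1/2), while coordinates carrying a positive signal survive with high probability, so the sensing budget is progressively concentrated on the true support.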
Analyzing Graph Structure via Linear Measurements
"... We initiate the study of graph sketching, i.e., algorithms that use a limited number of linear measurements of a graph to determine the properties of the graph. While a graph on n nodes is essentially O(n 2)dimensional, we show the existence of a distribution over random projections into ddimensio ..."
Abstract

Cited by 40 (10 self)
 Add to MetaCart
We initiate the study of graph sketching, i.e., algorithms that use a limited number of linear measurements of a graph to determine the properties of the graph. While a graph on n nodes is essentially O(n^2)-dimensional, we show the existence of a distribution over random projections into d-dimensional "sketch" space (d ≪ n^2) such that the relevant properties of the original graph can be inferred from the sketch with high probability. Specifically, we show that: 1. d = O(n · polylog n) suffices to evaluate properties including connectivity, k-connectivity, bipartiteness, and to return any constant approximation of the weight of the minimum spanning tree. 2. d = O(n^{1+γ}) suffices to compute graph sparsifiers, the exact MST, and approximate maximum weighted matchings if we permit O(1/γ)-round adaptive sketches, i.e., a sequence of projections where each projection may be chosen dependent on the outcome of earlier sketches. Our results have two main applications, both of which have the potential to give rise to fruitful lines of further research. First, our results can be thought of as giving the first compressed-sensing-style algorithms for graph data. Secondly, our work initiates the study of dynamic graph streams. There is already extensive literature on processing massive graphs in the data-stream model. However, the existing work focuses on graphs defined by a sequence of inserted edges and does not consider edge deletions. We think this is a curious omission given the existing work on both dynamic graphs in the non-streaming setting and dynamic geometric streaming. Our results include the first dynamic graph semi-streaming algorithms for connectivity, spanning trees, sparsification, and matching problems.
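The property that makes the dynamic-stream application work is that the sketch is linear in the edge-indicator vector, so an edge deletion is simply the negated column update of the corresponding insertion. A toy illustration (the dense Gaussian projection and tiny dimensions are assumptions for readability; the paper's sketches are structured projections with d = O(n polylog n)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                               # nodes
m = n * (n - 1) // 2                # possible undirected edges
d = 8                               # sketch dimension (toy value)
M = rng.standard_normal((d, m))     # random projection defining the sketch

def edge_index(u, v):
    """Position of edge {u, v} in the flattened edge-indicator vector."""
    u, v = min(u, v), max(u, v)
    return u * n - u * (u + 1) // 2 + (v - u - 1)

# Maintain only the d-dimensional sketch under a stream of inserts/deletes.
sketch = np.zeros(d)
stream = [("+", 0, 1), ("+", 1, 2), ("+", 2, 3), ("-", 1, 2)]
for op, u, v in stream:
    sign = 1.0 if op == "+" else -1.0
    sketch += sign * M[:, edge_index(u, v)]  # linearity: one column per update

# The sketch equals M applied to the final edge-indicator vector.
x_final = np.zeros(m)
x_final[edge_index(0, 1)] = 1.0
x_final[edge_index(2, 3)] = 1.0
```

Because only the d numbers in `sketch` are stored, the graph itself never has to be materialized, which is what allows edge deletions that earlier insert-only streaming work could not handle.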
Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements
, 2009
"... The recentlyproposed theory of distilled sensing establishes that adaptivity in sampling can dramatically improve the performance of sparse recovery in noisy settings. In particular, it is now known that adaptive point sampling enables the detection and/or support recovery of sparse signals that a ..."
Abstract

Cited by 25 (9 self)
 Add to MetaCart
(Show Context)
The recently proposed theory of distilled sensing establishes that adaptivity in sampling can dramatically improve the performance of sparse recovery in noisy settings. In particular, it is now known that adaptive point sampling enables the detection and/or support recovery of sparse signals that are otherwise too weak to be recovered using any method based on nonadaptive point sampling. In this paper the theory of distilled sensing is extended to highly undersampled regimes, as in compressive sensing. A simple adaptive sampling-and-refinement procedure called compressive distilled sensing is proposed, where each step of the procedure uses information from previous observations to focus subsequent measurements onto the proper signal subspace, resulting in a significant improvement in effective measurement SNR on the signal subspace. As a result, for the same budget of sensing resources, compressive distilled sensing can yield significantly improved error bounds compared to those for traditional compressive sensing.
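A two-stage caricature of the focusing idea, with all specifics (back-projection proxy, candidate-set size, noise level) chosen for illustration rather than taken from the paper: a first batch of compressive measurements yields a crude candidate support, and the second batch is spent only on those coordinates, raising the effective per-coordinate SNR.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, noise = 2000, 5, 0.05
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 4.0

m1, m2 = 400, 400                              # per-stage measurement budgets
# Stage 1: ordinary compressive measurements of the full vector.
A1 = rng.standard_normal((m1, n)) / np.sqrt(m1)
y1 = A1 @ x + noise * rng.standard_normal(m1)
proxy = A1.T @ y1                              # crude back-projection statistic
keep = np.argsort(np.abs(proxy))[-10 * k:]     # small candidate support

# Stage 2: spend the second budget only on the candidate coordinates;
# the same number of rows now covers far fewer columns.
A2 = rng.standard_normal((m2, keep.size)) / np.sqrt(m2)
y2 = A2 @ x[keep] + noise * rng.standard_normal(m2)
coeffs, *_ = np.linalg.lstsq(A2, y2, rcond=None)
x_hat = np.zeros(n)
x_hat[keep] = coeffs
```

The second-stage problem is heavily overdetermined (400 measurements for 50 unknowns), which is where the improved error bounds come from: the refinement step trades breadth for depth.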
On the fundamental limits of adaptive sensing
, 2011
"... Suppose we can sequentially acquire arbitrary linear measurements of an ndimensional vector x resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy wh ..."
Abstract

Cited by 24 (2 self)
 Add to MetaCart
(Show Context)
Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x, resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy that cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy that sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures—no matter how intractable—over classical compressed acquisition/recovery schemes are, in general, minimal.
Large scale Bayesian inference and experimental design for sparse linear models
 Journal of Physics: Conference Series
"... Abstract. Many problems of lowlevel computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or superresolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higherorder Bayesian decisionmaking prob ..."
Abstract

Cited by 20 (2 self)
 Add to MetaCart
(Show Context)
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images. Parts of this work have been presented at
Sequential analysis in highdimensional multiple testing and sparse recovery
 in Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on
"... Abstract—This paper studies the problem of highdimensional multiple testing and sparse recovery from the perspective of sequential analysis. In this setting, the probability of error is a function of the dimension of the problem. A simple sequential testing procedure for this problem is proposed. W ..."
Abstract

Cited by 18 (5 self)
 Add to MetaCart
This paper studies the problem of high-dimensional multiple testing and sparse recovery from the perspective of sequential analysis. In this setting, the probability of error is a function of the dimension of the problem. A simple sequential testing procedure for this problem is proposed. We derive necessary conditions for reliable recovery in the non-sequential setting and contrast them with sufficient conditions for reliable recovery using the proposed sequential testing procedure. Applications of the main results to several commonly encountered models show that sequential testing can be exponentially more sensitive to the difference between the null and alternative distributions (in terms of the dependence on dimension), implying that subtle cases can be much more reliably determined using sequential methods.
On the Limits of Sequential Testing in High Dimensions
, 1105
"... Abstract—This paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. Specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows slower then D(f0f1) −1 logs where s is the le ..."
Abstract

Cited by 15 (5 self)
 Add to MetaCart
(Show Context)
This paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. Specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows slower than D(f0‖f1)^{-1} log s, where s is the level of sparsity and D(f0‖f1) is the Kullback-Leibler divergence between the underlying distributions. For comparison, we show any non-sequential procedure fails provided the number of measurements grows at a rate less than D(f1‖f0)^{-1} log n, where n is the total dimension of the problem. Lastly, we show that a simple procedure termed sequential thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than D(f0‖f1)^{-1} (log s + log log n), a mere additive factor more than the lower bound.
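A minimal sketch of sequential thresholding in a Gaussian setting (the pass count, samples per pass, and zero threshold are illustrative choices, not the paper's tuned parameters): each pass re-measures only the surviving coordinates and keeps those whose fresh sample mean is positive, so null coordinates are halved per pass while signal coordinates survive with high probability.

```python
import numpy as np

def sequential_thresholding(sample, n, n_passes=8, per_pass=6, rng=None):
    """Keep a coordinate only while the mean of its fresh samples stays
    positive; each pass re-measures the survivors only."""
    rng = np.random.default_rng(0) if rng is None else rng
    active = list(range(n))
    for _ in range(n_passes):
        active = [i for i in active
                  if sample(i, per_pass, rng).mean() > 0.0]  # threshold at 0
    return active

# Toy model: the first s coordinates carry a positive mean, the rest are noise.
n, s, mu = 4096, 8, 2.0

def sample(i, size, rng):
    return rng.standard_normal(size) + (mu if i < s else 0.0)

found = sequential_thresholding(sample, n, rng=np.random.default_rng(7))
```

Since a zero-mean sample average is negative with probability 1/2, the expected number of surviving nulls after the passes is about n / 2^n_passes, which is the geometric whittling-down that gives the procedure its log log n-type savings.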
Random observations on random observations: Sparse signal acquisition and processing
 RICE UNIVERSITY
, 2010
"... ..."
Quickest search for a rare distribution
 in Information Sciences and Systems (CISS), 2012 46th Annual Conference on
, 2012
"... Abstract—We consider the problem of finding one sequence of independent random variables following a rare atypical distribution, P1, amongst a large number of sequences following some null distribution, P0. We quantify the number of samples needed to correctly identify one atypical sequence as the a ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
(Show Context)
We consider the problem of finding one sequence of independent random variables following a rare atypical distribution, P1, amongst a large number of sequences following some null distribution, P0. We quantify the number of samples needed to correctly identify one atypical sequence as the atypical sequences become increasingly rare. We show that the known optimal procedure, which consists of a series of sequential probability ratio tests, succeeds with high probability provided the number of samples grows at a rate equal to a constant times π^{-1} D(P1‖P0)^{-1}, where π is the prior probability of a sequence being atypical and D(P1‖P0) is the Kullback-Leibler divergence. Using techniques from sequential analysis, we show that if the number of samples grows at a rate slower than π^{-1} D(P1‖P0)^{-1}, any procedure fails. This is then compared to sequential thresholding [1], a simple procedure which can be implemented without exact knowledge of the distribution P1. We also show that the SPRT and sequential thresholding are fairly robust to our knowledge of π. Lastly, a lower bound for non-sequential procedures is derived for comparison. Index Terms—Rare events, SPRT, CUSUM procedure, sparse recovery, sequential analysis, sequential thresholding, spectrum sensing.
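The scanning procedure built from sequential probability ratio tests can be sketched as follows for Gaussian observations (the thresholds and mean shift θ are illustrative; by Wald's bounds, e^{-upper} and e^{lower} control the per-sequence false-alarm and miss probabilities):

```python
import numpy as np

def quickest_search(draw, theta, lower=-8.0, upper=12.0,
                    max_seqs=10_000, rng=None):
    """Scan sequences with an SPRT of N(theta, 1) vs N(0, 1): abandon a
    sequence when its log-likelihood ratio crosses `lower`, stop and declare
    it atypical when the ratio crosses `upper`."""
    rng = np.random.default_rng(0) if rng is None else rng
    for k in range(max_seqs):
        llr = 0.0
        while lower < llr < upper:
            z = draw(k, rng)
            llr += theta * z - theta ** 2 / 2.0  # Gaussian LLR increment
        if llr >= upper:
            return k                             # declared atypical
    return None

# Toy stream: sequence 17 is the rare atypical one; all others are null.
theta, atypical = 2.0, 17

def draw(k, rng):
    return rng.standard_normal() + (theta if k == atypical else 0.0)

found = quickest_search(draw, theta, rng=np.random.default_rng(3))
```

Null sequences have negative LLR drift (-θ²/2 per sample), so each is abandoned after a handful of samples, while the one atypical sequence drifts upward and triggers the stopping rule.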
Improved Iterative Curvelet Thresholding for Compressed Sensing
"... A new theory named compressed sensing for simultaneous sampling and compression of signals has been becoming popular in the communities of signal processing, imaging and applied mathematics. In this paper, we present improved/accelerated iterative curvelet thresholding methods for compressed sensing ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
A new theory named compressed sensing, for simultaneous sampling and compression of signals, has become popular in the signal processing, imaging, and applied mathematics communities. In this paper, we present improved/accelerated iterative curvelet thresholding methods for compressed sensing reconstruction in the field of remote sensing. Several recent strategies, including Bioucas-Dias and Figueiredo's two-step iteration, Beck and Teboulle's fast method, and Osher et al.'s linearized Bregman iteration, are applied to iterative curvelet thresholding in order to accelerate convergence. Advantages and disadvantages of the proposed methods are studied using the so-called pseudo-Pareto curve in numerical experiments on single-pixel remote sensing and Fourier-domain random imaging.
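For intuition, here is the basic (unaccelerated) iterative soft-thresholding loop of the kind the paper speeds up, using the canonical basis as a stand-in for the curvelet frame (dimensions, step size, and threshold are illustrative assumptions): the accelerated variants named above keep the same thresholded gradient step but change how successive iterates are combined.

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=1000):
    """Basic iterative soft-thresholding: a gradient step on ||y - Ax||^2 / 2
    followed by a soft threshold. Here the signal is sparse in the canonical
    basis, standing in for sparsity in a curvelet frame."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = x + step * (A.T @ (y - A @ x))        # gradient step
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 6
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k) + 3.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                                    # noiseless measurements
x_hat = ista(A, y)
```

Replacing the canonical-basis shrinkage with thresholding of curvelet coefficients yields iterative curvelet thresholding; the two-step, fast, and linearized Bregman variants all aim to reduce the large iteration counts this plain loop needs.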