Signal Processing with Compressive Measurements
, 2009
Abstract

Cited by 102 (25 self)
The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems—such as detection, classification, or estimation—and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.
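The detect-without-reconstructing idea can be sketched in a few lines (an illustrative toy, not the paper's algorithm; the Gaussian measurement matrix, template, noise levels, and threshold are all assumptions): a known template is detected directly from compressive measurements by correlating them with the compressed template, so the signal itself is never reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 100                                  # ambient dimension, measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian measurement matrix

s = rng.standard_normal(n)
s /= np.linalg.norm(s)                            # unit-norm target template

def detect(y, Phi, s, threshold):
    """Decide whether template s is present, using only compressive measurements y."""
    return y @ (Phi @ s) > threshold              # <Phi x, Phi s> approximates <x, s>

x_present = 5.0 * s + 0.1 * rng.standard_normal(n)   # template buried in noise
x_absent = 0.1 * rng.standard_normal(n)              # noise only

hit = detect(Phi @ x_present, Phi, s, threshold=2.0)
miss = detect(Phi @ x_absent, Phi, s, threshold=2.0)
```

Because random projections approximately preserve inner products, the compressive statistic behaves like the usual matched filter while sampling far below the Nyquist rate.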
Compressive Acquisition of Dynamic Scenes
Abstract

Cited by 37 (10 self)
Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments, including classification experiments, that highlight the effectiveness of the proposed approach.
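The accumulate-over-time idea — few measurements per frame, with the static part pinned down by stacking frames — can be sketched as follows (a hypothetical numpy toy; for brevity the true state sequence Z is used in place of the estimate the actual framework would compute first, and all dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, T, mt = 50, 2, 40, 10          # pixels, LDS order, frames, measurements/frame
C = rng.standard_normal((n, r))      # static observation matrix (high-dimensional)
Z = rng.standard_normal((r, T))      # dynamic state sequence (low-dimensional)

# Per-frame compressive measurements y_t = Phi_t @ (C @ z_t).
Phis = [rng.standard_normal((mt, n)) / np.sqrt(mt) for _ in range(T)]
ys = [Phis[t] @ (C @ Z[:, t]) for t in range(T)]

# Accumulate measurements over time to estimate the static part C, assuming the
# states are known: each frame contributes mt rows of a linear system in vec(C),
# since Phi_t @ C @ z_t = kron(z_t, Phi_t) @ vec(C) (columns of C stacked).
rows = np.vstack([np.kron(Z[:, t], Phis[t]) for t in range(T)])
rhs = np.concatenate(ys)
C_hat = np.linalg.lstsq(rows, rhs, rcond=None)[0].reshape(r, n).T
```

With mt·T ≥ n·r the stacked system is overdetermined, so the static observation matrix is recovered even though each individual frame is sampled far below n.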
LS-CS-residual (LS-CS): Compressive sensing on the least squares residual
 IEEE TSP
Abstract

Cited by 32 (16 self)
Abstract—We consider the problem of recursively and causally reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of noisy linear measurements. The sparsity pattern is assumed to change slowly with time. The key idea of our proposed solution, LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the observation by CS on the least squares (LS) residual computed using the previous estimate of the support. We bound the CS-residual error and show that when the number of available measurements is small, the bound is much smaller than that on the CS error if the sparsity pattern changes slowly enough. Most importantly, under fairly mild assumptions, we show “stability” of LS-CS over time for a signal model that allows support additions and removals, and that allows coefficients to gradually increase (decrease) until they reach a constant value (become zero). By “stability,” we mean that the number of misses and extras in the support estimate remains bounded by time-invariant values (in turn implying a time-invariant bound on the LS-CS error). Numerical experiments and a dynamic MRI example backing our claims are shown. Index Terms—Compressive sensing, least squares, recursive reconstruction, sparse reconstructions.
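The two-step structure can be sketched as follows (an illustrative toy, not the authors' code; ISTA stands in for the BPDN solver, and the dimensions, supports, and regularization weight are assumptions): least squares on the previous support estimate, then sparse recovery on the LS residual, which only has to explain the small change in support.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Sparse signal whose support grew by one index since the previous time instant.
x = np.zeros(n)
x[[3, 17, 42, 90]] = [2.0, -1.5, 1.0, 2.5]
y = A @ x + 0.01 * rng.standard_normal(m)

T_prev = [3, 17, 42]                  # support estimate carried over from t-1

# Step 1: least squares on the previous support, then form the LS residual.
coef, *_ = np.linalg.lstsq(A[:, T_prev], y, rcond=None)
x_ls = np.zeros(n)
x_ls[T_prev] = coef
y_res = y - A @ x_ls

# Step 2: sparse recovery on the residual (ISTA standing in for the BPDN solver).
def ista(A, y, lam, iters):
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the quadratic term
    b = np.zeros(A.shape[1])
    for _ in range(iters):
        g = b + A.T @ (y - A @ b) / L
        b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return b

beta = ista(A, y_res, lam=0.05, iters=1000)
x_hat = x_ls + beta                   # final estimate: LS part plus residual part
```

The residual step recovers the new support entry (and small corrections on the old support), which is why the method tolerates far fewer measurements than running CS on y directly.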
Tracking and smoothing of time-varying sparse signals via approximate belief propagation
 Asilomar Conf.
, 2010
Abstract

Cited by 31 (8 self)
Abstract—This paper considers the problem of recovering time-varying sparse signals from dramatically undersampled measurements. A probabilistic signal model is presented that describes two common traits of time-varying sparse signals: a support set that changes slowly over time, and amplitudes that evolve smoothly in time. An algorithm for recovering signals that exhibit these traits is then described. Built on the belief propagation framework, the algorithm leverages recently developed approximate message passing techniques to perform rapid and accurate estimation. The algorithm is capable of performing both causal tracking and non-causal smoothing to enable both online and offline processing of sparse time series, with a complexity that is linear in all problem dimensions. Simulation results illustrate the performance gains obtained through exploiting the temporal correlation of the time series relative to independent recoveries.
A Short Note on Compressed Sensing with Partially Known Signal Support
, 2010
Abstract

Cited by 27 (0 self)
This short note studies a variation of the Compressed Sensing paradigm introduced recently by Vaswani et al., i.e. the recovery of sparse signals from a certain number of linear measurements when the signal support is partially known. The reconstruction method is based on a convex minimization program coined innovative Basis Pursuit DeNoise (or iBPDN). Under the common ℓ2-fidelity constraint made on the available measurements, this optimization promotes the ℓ1 sparsity of the candidate signal over the complement of this known part. In particular, this paper extends the results of Vaswani et al. to the cases of compressible signals and noisy measurements. Our proof relies on a small adaptation of the 2008 results of Candès for characterizing the stability of the Basis Pursuit DeNoise (BPDN) program. We also emphasize an interesting link between our method and the recent work of Davenport et al. on δ-stable embeddings and the cancel-then-recover strategy applied to our problem. For both approaches, reconstructions are indeed stabilized when the sensing matrix respects the Restricted Isometry Property for the same sparsity order. We conclude by sketching an easy numerical method relying on monotone operator splitting and proximal methods that iteratively solves iBPDN.
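The key modification — penalizing the ℓ1 norm only on the complement of the known support — can be sketched with a weighted ISTA iteration (a hypothetical stand-in for the iBPDN program and its operator-splitting solver; dimensions, supports, and weights are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[5, 50, 120, 180]] = [1.5, -2.0, 1.0, 2.0]
y = A @ x + 0.01 * rng.standard_normal(m)

known = np.zeros(n, dtype=bool)
known[[5, 50, 120]] = True            # partially known support (index 180 unknown)

# Weighted ISTA: zero l1 weight on the known part, standard weight elsewhere.
lam = np.where(known, 0.0, 0.05)
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the quadratic term
b = np.zeros(n)
for _ in range(1000):
    g = b + A.T @ (y - A @ b) / L
    b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
```

Leaving the known coordinates unpenalized means only the innovation (here a single new index) has to be found sparsely, which is what allows fewer measurements than plain BPDN.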
Recovery of sparsely corrupted signals
 IEEE Trans. Inf. Theory
, 2012
Abstract

Cited by 25 (8 self)
Abstract—We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge about the signal and noise support sets. Index Terms—Uncertainty relations, signal restoration, signal separation, coherence-based recovery guarantees, ℓ1-norm minimization, greedy algorithms.
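The setup can be sketched by stacking the two dictionaries and running a single ℓ1 solver on the concatenation (an illustrative toy under assumed dictionaries — a random orthonormal basis for the signal and the identity for impulse-like corruption — with ISTA standing in for the ℓ1-norm minimization analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 256
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))   # orthonormal signal dictionary
x = np.zeros(m)
x[[2, 7, 30]] = [3.0, -2.0, 1.5]                   # sparse signal representation
e = np.zeros(m)
e[[10, 80]] = [5.0, -4.0]                          # sparse corruption (impulses)
y = Q @ x + e                                      # corrupted observation

# Stack the two dictionaries and solve one l1 problem for both components.
D = np.hstack([Q, np.eye(m)])
L = 2.0                                # ||D||^2 = 2 for orthonormal Q + identity
lam = 0.02
b = np.zeros(2 * m)
for _ in range(2000):
    g = b + D.T @ (y - D @ b) / L
    b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
x_hat, e_hat = b[:m], b[m:]
```

Separation succeeds here because the two dictionaries are mutually incoherent: a few impulses look nothing like a few columns of a random orthonormal basis, which is the intuition the paper's uncertainty relation makes precise.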
Real-time robust principal components’ pursuit
 in Allerton Conf. on Communications, Control and Computing
, 2010
Abstract

Cited by 23 (18 self)
Abstract—In the recent work of Candès et al., the problem of recovering a low-rank matrix corrupted by i.i.d. sparse outliers is studied and a very elegant solution, principal component pursuit, is proposed. It is motivated as a tool for video surveillance applications, with the background image sequence forming the low-rank part and the moving objects/persons/abnormalities forming the sparse part. Each image frame is treated as a column vector of the data matrix, which is made up of a low-rank matrix and a sparse corruption matrix. Principal component pursuit solves the problem under the assumptions that the singular vectors of the low-rank matrix are spread out and the sparsity pattern of the sparse matrix is uniformly random. However, in practice, the sparsity pattern and the signal values of the sparse part (moving persons/objects) usually change in a correlated fashion over time, e.g., the object moves slowly and/or with roughly constant velocity. This will often result in a low-rank sparse matrix. For video surveillance applications, it would also be much more useful to have a real-time solution. In this work, we study the online version of the above problem and propose a solution that automatically handles correlated sparse outliers. We also discuss how we can potentially use the correlation to our advantage in future work. The key idea of this work is as follows. Given an initial estimate of the principal directions of the low-rank part, we causally keep estimating the sparse part at each time by solving a noisy compressive sensing type problem. The principal directions of the low-rank part are updated every so often. In between two update times, if new principal components’ directions appear, the “noise” seen by the compressive sensing step may increase. This problem is solved, in part, by utilizing the time correlation model of the low-rank part. We call the proposed solution “Real-time Robust Principal Components’ Pursuit”. It still requires the singular vectors of the low-rank part to be spread out, but it does not require i.i.d.-ness of either the sparse part or the low-rank part.
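The key step — project each frame away from the estimated background subspace, then solve a compressive-sensing-type problem for the sparse foreground — can be sketched as follows (an illustrative toy, not the authors' algorithm; the subspace is assumed known exactly, the frame is noiseless, and ISTA stands in for the noisy-CS solver):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 100, 3
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # estimated background subspace

background = 10.0 * (U @ rng.standard_normal(r))   # low-rank (subspace) component
s = np.zeros(n)
s[[20, 21, 22]] = [4.0, 4.5, 3.8]                  # sparse foreground (moving object)
frame = background + s

# Project the frame onto the orthogonal complement of the background subspace;
# the background is annihilated and a compressed view of the foreground remains.
P = np.eye(n) - U @ U.T
y = P @ frame                                      # equals P @ s exactly here

# Recover s from y = P @ s via l1 minimization (ISTA; P has spectral norm 1,
# so the step size is 1 and the shrinkage threshold is lam itself).
lam = 0.05
b = np.zeros(n)
for _ in range(2000):
    g = b + P @ (y - P @ b)                        # P is symmetric
    b = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
```

The projection loses only an r-dimensional slice of information about s, so as long as the foreground is sparse, it can be recovered from the projected frame; when the subspace estimate is stale, the unmodeled directions show up as the extra "noise" the abstract describes.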
Compressive MUSIC: revisiting the link between compressive sensing and array signal processing
 IEEE Trans. on Information Theory
, 2012
Abstract

Cited by 23 (4 self)
Abstract—The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. Even though MMV problems have traditionally been addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery only in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid-1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies parts of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large-system MMV model, we show that compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal bound with a finite number of snapshots, even in cases where the signals are linearly dependent. Index Terms—Compressive sensing, multiple measurement vector problem, joint sparsity, MUSIC, SOMP, thresholding.
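The array-processing half of the story — the classical MUSIC criterion that compressive MUSIC generalizes — can be sketched for the MMV model (a noiseless toy with enough snapshots for the classical method to work; dimensions and the support are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k, snaps = 120, 40, 4, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)       # sensing matrix

support = [7, 33, 64, 101]                         # common sparse support
X = np.zeros((n, snaps))
X[support] = rng.standard_normal((k, snaps))       # full row rank: enough snapshots
Y = A @ X                                          # MMV observations

# Signal subspace: with full-row-rank X, span(Y) = span(A[:, support]), rank k.
U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]

# MUSIC criterion: a column of A belongs to the support iff it lies in span(U),
# i.e. its residual after projection onto the signal subspace is (near) zero.
resid = np.linalg.norm(A - U @ (U.T @ A), axis=0) / np.linalg.norm(A, axis=0)
est = np.sort(np.argsort(resid)[:k])
```

This classical step breaks down when X is rank-deficient (fewer independent snapshots than k); compressive MUSIC handles that regime by finding part of the support with CS and the rest with a generalized version of the criterion above.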
Recursive robust PCA or recursive sparse recovery in large but structured noise
 in IEEE Intl. Symp. on Information Theory (ISIT)
, 2013
Abstract

Cited by 22 (17 self)
Recursive sparse recovery in large but correlated noise
 in Proc. 49th Allerton Conf. Commun. Control Comput.
, 2011
Abstract

Cited by 20 (13 self)
Abstract—In this work, we focus on the problem of recursively recovering a time sequence of sparse signals, with time-varying sparsity patterns, from highly undersampled measurements corrupted by very large but correlated noise. It is assumed that the noise is correlated enough to have an approximately low-rank covariance matrix that is either constant, or changes slowly, with time. We show how our recently introduced Recursive Projected CS (ReProCS) and modified-ReProCS ideas can be used to solve this problem very effectively. To the best of our knowledge, except for the recent work on dense error correction via ℓ1 minimization, which can handle another kind of large but “structured” noise (the noise needs to be sparse), none of the other works in sparse recovery have studied the case of any other kind of large noise.