Results 1–10 of 40
ModifiedCS: Modifying compressive sensing for problems with partially known support
In Proc. IEEE Int. Symp. Inf. Theory (ISIT), 2009. Cited by 42 (14 self)
Abstract—We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The “known” part of the support may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the “known” part. The idea of our proposed solution (modifiedCS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of the known part. We obtain sufficient conditions for exact reconstruction using modifiedCS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of the errors in the known part are small compared to the support size. An important extension, called regularized modifiedCS (RegModCS), is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown. Index Terms—Compressive sensing, modifiedCS, partially known support, prior knowledge, sparse reconstruction.
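The convex relaxation described in this abstract (minimize the ℓ1 norm of the signal outside the known support part, subject to the data constraint) can be posed as a linear program. The following is a minimal sketch, not the authors' code: it assumes real-valued signals and noiseless measurements, and uses scipy's generic LP solver rather than a specialized solver.

```python
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    # Solve min ||x_{T^c}||_1  s.t.  A x = y, where T is the known support part.
    # LP formulation over variables [x; t] with |x_i| <= t_j for each i = Tc[j].
    m, n = A.shape
    Tc = np.setdiff1d(np.arange(n), T)  # complement of the known part
    k = len(Tc)
    c = np.concatenate([np.zeros(n), np.ones(k)])  # minimize sum of t
    # Inequalities x_i - t_j <= 0 and -x_i - t_j <= 0 encode |x_i| <= t_j.
    A_ub = np.zeros((2 * k, n + k))
    for j, i in enumerate(Tc):
        A_ub[2 * j, i] = 1.0
        A_ub[2 * j, n + j] = -1.0
        A_ub[2 * j + 1, i] = -1.0
        A_ub[2 * j + 1, n + j] = -1.0
    A_eq = np.hstack([A, np.zeros((m, k))])  # data constraint A x = y
    bounds = [(None, None)] * n + [(0, None)] * k
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * k),
                  A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n], res
```

By construction, the returned signal satisfies the data constraint and its ℓ1 norm outside T can be no larger than that of any other signal consistent with the measurements.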
LS-CS-residual (LS-CS): Compressive sensing on the least squares residual
IEEE TSP. Cited by 13 (9 self)
Abstract—We consider the problem of recursively and causally reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of noisy linear measurements. The sparsity pattern is assumed to change slowly with time. The key idea of our proposed solution, LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the observation by CS on the least squares (LS) residual computed using the previous estimate of the support. We bound the CS-residual error and show that when the number of available measurements is small, the bound is much smaller than that on the CS error if the sparsity pattern changes slowly enough. Most importantly, under fairly mild assumptions, we show “stability” of LS-CS over time for a signal model that allows support additions and removals, and that allows coefficients to gradually increase (decrease) until they reach a constant value (become zero). By “stability,” we mean that the number of misses and extras in the support estimate remains bounded by time-invariant values (in turn implying a time-invariant bound on the LS-CS error). Numerical experiments, and a dynamic MRI example, backing our claims are shown. Index Terms—Compressive sensing, least squares, recursive reconstruction, sparse reconstructions.
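A stripped-down sketch of one LS-CS time step as described above: least squares on the previous support estimate, a sparse-recovery step on the LS residual, then a final LS pass on the updated support. This is only an illustration, not the paper's algorithm: simple correlation thresholding stands in for the actual CS (ℓ1) step, and the threshold is an assumed tuning parameter.

```python
import numpy as np

def ls_cs_step(A, y, T_prev, thresh):
    # 1) Least-squares estimate restricted to the previous support estimate.
    x_ls = np.zeros(A.shape[1])
    x_ls[T_prev] = np.linalg.lstsq(A[:, T_prev], y, rcond=None)[0]
    # 2) Sparse-recovery step on the LS residual (correlation thresholding
    #    here stands in for the paper's CS step) to detect support additions.
    resid = y - A @ x_ls
    additions = np.where(np.abs(A.T @ resid) > thresh)[0]
    T_new = np.union1d(T_prev, additions)
    # 3) Final LS estimate on the updated support estimate.
    x = np.zeros(A.shape[1])
    x[T_new] = np.linalg.lstsq(A[:, T_new], y, rcond=None)[0]
    return x, T_new
```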
Tracking and smoothing of time-varying sparse signals via approximate belief propagation
In Asilomar Conf. on Signals, Systems and Computers, 2010. Cited by 12 (5 self)
Abstract—This paper considers the problem of recovering time-varying sparse signals from dramatically undersampled measurements. A probabilistic signal model is presented that describes two common traits of time-varying sparse signals: a support set that changes slowly over time, and amplitudes that evolve smoothly in time. An algorithm for recovering signals that exhibit these traits is then described. Built on the belief propagation framework, the algorithm leverages recently developed approximate message passing techniques to perform rapid and accurate estimation. The algorithm is capable of performing both causal tracking and non-causal smoothing to enable both online and offline processing of sparse time series, with a complexity that is linear in all problem dimensions. Simulation results illustrate the performance gains obtained through exploiting the temporal correlation of the time series relative to independent recoveries.
Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning
IEEE J. Sel. Topics Signal Process., 2011. Cited by 12 (3 self)
Abstract—We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation, and thus their performance degrades significantly with the correlation. In this work, we propose a block sparse Bayesian learning framework which models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity of the solution matrix. We also provide an analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
Compressive acquisition of dynamic scenes
In Euro. Conf. Comp. Vision, 2010. Cited by 12 (3 self)
Abstract. Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments involving video recovery, sensing of hyperspectral data, and classification of dynamic scenes from compressive data. Together, these applications demonstrate the effectiveness of the approach.
COMPRESSED SENSING OF TIME-VARYING SIGNALS
Cited by 10 (0 self)
Compressed sensing (CS) lowers the number of measurements required for the reconstruction and estimation of signals that are sparse when expanded over a proper basis. Traditional CS approaches deal with time-invariant sparse signals, meaning that, during the measurement process, the signal of interest does not exhibit variations. However, many signals encountered in practice vary with time as the observation window increases (e.g., video imaging, where the signal is sparse and varies between different frames). The present paper develops CS algorithms for time-varying signals, based on the least-absolute shrinkage and selection operator (Lasso) that has been popular for sparse regression problems. The Lasso here is tailored for smoothing time-varying signals, which are modeled as vector-valued discrete time series. Two algorithms are proposed: the Group-Fused Lasso, when the unknown signal support is time-invariant but signal samples are allowed to vary with time; and the Dynamic Lasso, for the general class of signals with time-varying amplitudes and support. The performance of these algorithms is compared with a sparsity-unaware Kalman smoother, a support-aware Kalman smoother, and the standard Lasso, which does not account for time variations. The numerical results amply demonstrate the practical merits of the novel CS algorithms.
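The Lasso building block underlying both proposed algorithms can be illustrated with a short iterative soft-thresholding (ISTA) solver. This is a generic sketch of the static Lasso only; the paper's Group-Fused and Dynamic Lasso variants add coupling terms across time that are not shown here.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    # Proximal-gradient (ISTA) solver for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

With an orthonormal measurement matrix the solution reduces to soft-thresholding of the data, which gives a quick sanity check.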
Distributed compressed video sensing
In Proc. of IEEE International Conference on Image Processing, Nov. Cited by 9 (2 self)
This paper proposes a novel framework called Distributed Compressed Video Sensing (DISCOS), a solution for Distributed Video Coding (DVC) based on the recently emerging Compressed Sensing theory. The DISCOS framework compressively samples each video frame independently at the encoder. However, it recovers video frames jointly at the decoder by exploiting an inter-frame sparsity model and by performing sparse recovery with side information. In particular, along with global frame-based measurements, the DISCOS encoder also acquires local block-based measurements for block prediction at the decoder. Our inter-frame sparsity model mimics state-of-the-art video codecs: the sparsest representation of a block is a linear combination of a few temporally neighboring blocks that are in previously reconstructed frames or in nearby key frames. This model enables a block to be optimally predicted from its local measurements by ℓ1-minimization. The DISCOS decoder also employs sparse recovery with side information to jointly reconstruct a frame from its global measurements and its local block-based prediction. Simulation results show that the proposed framework outperforms the baseline compressed-sensing-based scheme of intra-frame coding and intra-frame decoding by 8–10 dB. Finally, unlike conventional DVC schemes, our DISCOS framework can perform most encoding operations in the analog domain with very low complexity, making it a promising candidate for real-time, practical applications where analog-to-digital conversion is expensive, e.g., in Terahertz imaging. Index Terms—distributed video coding, Wyner-Ziv coding, compressed sensing, compressive sensing, sparse recovery with decoder side information, structurally random matrices.
Analyzing least squares and Kalman filtered compressed sensing
Long version available at http://www.ece.iastate.edu/∼namrata/AnalyzeKFCS.pdf, 2008. Cited by 9 (7 self)
In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slowly time-varying sparsity patterns, from a limited number of linear “incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced-order KF only for the current signal’s estimated set of nonzero coefficients, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. The KF may be replaced by Least Squares (LS) estimation, and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error, and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e., the KF (or LS) which knows the true nonzero sets.
Lasso-Kalman smoother for tracking sparse signals
In Asilomar Conf. on Signals, Systems and Computers, 2009. Cited by 6 (0 self)
Abstract—Fixed-interval smoothing of time-varying vector processes is an estimation approach with well-documented merits for tracking applications. The optimal performance in the linear Gauss-Markov model is achieved by the Kalman smoother (KS), which also admits an efficient recursive implementation. The present paper deals with vector processes for which it is known a priori that many of their entries are equal to zero. In this context, the process to be tracked is sparse, and the performance of sparsity-agnostic KS schemes degrades considerably. On the other hand, it is shown here that a sparsity-aware KS exhibits complexity which grows exponentially in the vector dimension. To obtain a tractable alternative, the KS cost is regularized with the sparsity-promoting ℓ1 norm of the vector process, a relaxation also used in linear regression problems to obtain the least-absolute shrinkage and selection operator (Lasso). The Lasso (L)KS derived in this work is not only capable of tracking sparse time-varying vector processes, but can also afford an efficient recursive implementation based on the alternating direction method of multipliers (ADMoM). Finally, a weighted (W)LKS is also introduced to cope with the bias of the LKS, and simulations are provided to validate the performance of the novel algorithms.
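The ℓ1-regularized smoother cost described above can be sketched as a batch solver. Note the paper derives an efficient recursive ADMoM implementation; the version below is only an illustrative stand-in that uses plain proximal gradient on an assumed random-walk-style quadratic cost, with the noise weighting collapsed to a single scalar q.

```python
import numpy as np

def lasso_ks(Y, C, F, q, lam, n_iter=2000):
    # Batch (fixed-interval) sketch of an l1-regularized Kalman-smoother cost:
    #   min_X 0.5*sum_t ||y_t - C x_t||^2
    #         + (0.5/q)*sum_{t>=1} ||x_t - F x_{t-1}||^2
    #         + lam*sum_t ||x_t||_1
    # solved by proximal gradient over the stacked state sequence X (T x n).
    T, n = Y.shape[0], C.shape[1]
    X = np.zeros((T, n))
    # generous Lipschitz bound for the quadratic part of the cost
    L = np.linalg.norm(C, 2) ** 2 + (1.0 + np.linalg.norm(F, 2)) ** 2 / q
    for _ in range(n_iter):
        G = (C.T @ (C @ X.T - Y.T)).T                    # measurement gradient
        for t in range(T):
            if t > 0:
                G[t] += (X[t] - F @ X[t - 1]) / q        # dynamics: x_t side
            if t < T - 1:
                G[t] -= F.T @ (X[t + 1] - F @ X[t]) / q  # dynamics: x_{t+1} side
        Z = X - G / L                                    # gradient step
        X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft-threshold
    return X
```

For a large regularization weight the all-zero trajectory is optimal, while a small weight lets the smoother track the measurements, which provides a basic check of the prox logic.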
Sparsity Penalties in Dynamical System Estimation
Cited by 5 (1 self)
Abstract—In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional Kalman filter as a one-step update optimization procedure, which leads us to a more unified framework useful for incorporating sparsity constraints. We introduce three combinations of two sparsity conditions (sparsity in the state and sparsity in the innovations) and write recursive optimization programs to estimate the state for each model. This paper is meant as an overview of different methods for incorporating sparsity into the dynamic model, a presentation of algorithms that unify the support and coefficient estimation, and a demonstration that these suboptimal schemes can actually show some performance improvements (either in estimation error or convergence time) over standard optimal methods that use an impoverished model.