Results 1–10 of 26
The restricted isometry property for random block diagonal matrices, Applied and Computational Harmonic Analysis, 2014
Abstract

Cited by 8 (3 self)
In Compressive Sensing, the Restricted Isometry Property (RIP) ensures that robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. It is by now well-known that Gaussian (or, more generally, sub-Gaussian) random matrices satisfy the RIP under certain conditions on the number of measurements. Their use can be limited in practice, however, due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. These issues have recently motivated considerable effort towards studying the RIP for structured random matrices. In this paper, we study the RIP for block diagonal measurement matrices where each block on the main diagonal is itself a sub-Gaussian random matrix. Our main result states that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on certain properties of the basis in which the signals are sparse. In the best case, these matrices perform nearly as well as dense Gaussian random matrices, despite having many fewer nonzero entries.
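The measurement model this abstract studies can be sketched in a few lines of numpy. All dimensions and the 1/sqrt(m) column scaling below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def block_diag_gaussian(num_blocks, m, n, seed=None):
    """Assemble a block diagonal measurement matrix whose blocks are
    independent m x n Gaussian matrices (one example of a sub-Gaussian
    distribution), scaled by 1/sqrt(m) so each column has unit expected
    squared norm."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((num_blocks * m, num_blocks * n))
    for j in range(num_blocks):
        Phi[j*m:(j+1)*m, j*n:(j+1)*n] = rng.standard_normal((m, n)) / np.sqrt(m)
    return Phi

# Illustrative dimensions: 4 blocks, each taking 10 measurements of a
# 32-sample segment, applied to a 3-sparse signal of length 128.
Phi = block_diag_gaussian(num_blocks=4, m=10, n=32, seed=0)
x = np.zeros(128)
x[[3, 50, 100]] = 1.0        # a 3-sparse signal
y = Phi @ x                  # 40 measurements instead of 128 samples
```

Note the storage contrast the abstract mentions: this 40 x 128 matrix has only 4·10·32 nonzero entries, versus 40·128 for a dense Gaussian matrix of the same size.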
Sparse Recovery of Streaming Signals Using ℓ1-Homotopy
Abstract

Cited by 4 (3 self)
Most of the existing methods for sparse signal recovery assume a static system: the unknown signal is a finite-length vector for which a fixed set of linear measurements and a sparse representation basis are available and an ℓ1-norm minimization program is solved for the reconstruction. However, the same representation and reconstruction framework is not readily applicable in a streaming system: the unknown signal changes over time, and it is measured and reconstructed sequentially over small time intervals. A streaming framework for the reconstruction is particularly desired when dividing a streaming signal into disjoint blocks and processing each block independently is either infeasible or inefficient. In this paper, we discuss two such streaming systems and a homotopy-based algorithm for quickly solving the associated weighted ℓ1-norm minimization programs: 1) Recovery of a smooth, time-varying signal for which, instead of using block transforms, we use lapped orthogonal transforms for sparse representation. 2) Recovery of a sparse, time-varying signal that follows a linear dynamic model. For both systems, we iteratively process measurements over a sliding interval and solve a weighted ℓ1-norm minimization problem for estimating sparse coefficients. Since we estimate overlapping portions of the streaming signal while adding and removing measurements, instead of solving a new ℓ1 program …
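The core subproblem here is a weighted ℓ1-norm minimization. The paper solves it with a fast homotopy method that warm-starts from the previous interval; as a self-contained stand-in (not the authors' algorithm), one such program can be solved with plain iterative soft-thresholding (ISTA). Sizes, weights, and iteration count below are illustrative:

```python
import numpy as np

def weighted_l1(A, y, w, n_iter=500):
    """Minimize 0.5*||A x - y||_2^2 + sum_i w_i |x_i| by iterative
    soft-thresholding (ISTA) -- a simple illustration of the weighted
    l1 programs the streaming framework solves, not the homotopy solver
    itself."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - w / L, 0.0)  # weighted shrinkage
    return x

# Toy recovery: a 3-sparse signal from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x0 = np.zeros(80)
x0[[5, 20, 60]] = [1.0, -1.0, 1.0]
x_hat = weighted_l1(A, A @ x0, w=np.full(80, 0.01))
```

In a streaming setting, the weights w would vary per coefficient (e.g. smaller on coefficients already believed active), which is exactly where the weighted formulation earns its keep.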
Tracking Dynamic Sparse Signals Using Hierarchical Bayesian Kalman Filters
In Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013
Abstract

Cited by 3 (1 self)
In this work we are interested in the problem of reconstructing time-varying signals for which the support is assumed to be sparse. For a single time instance it is possible to reconstruct the original signal efficiently by employing a suitable algorithm for sparse signal recovery, given the sparsity level of the signal. In the case of time-varying sparse signals the sparsity level is not necessarily known a priori. Furthermore, conventional tracking by Kalman filtering fails to promote sparsity. Instead, a hierarchical Bayesian model is used in the tracking process which succeeds in modelling sparsity. One theorem is provided that extends previous work by providing some more general results. A second theorem gives the conditions under which all sparse signals are recovered exactly. It is demonstrated that the proposed method succeeds in recovering time-varying sparse signals with greater accuracy than the classic Kalman filter approach. Index Terms — Hierarchical Bayesian network, Kalman filter, time-varying sparse signals
Time-Invariant Error Bounds for Modified-CS Based Sparse Signal Sequence Recovery
Abstract

Cited by 2 (1 self)
In this work, we obtain performance guarantees for modified-CS and for its improved version, modified-CS-Add-LS-Del, for recursive reconstruction of sparse signal sequences from noisy measurements. Under mild assumptions, and for a realistic signal change model, we show that the support recovery error of both algorithms is bounded by a time-invariant and small value at all times. The same is also true for the reconstruction error. Under a slow support change assumption, our results hold under weaker assumptions on the number of measurements than what simple compressive sensing (basis pursuit denoising) needs. Also, the result for modified-CS-Add-LS-Del holds under weaker assumptions on the signal magnitude increase rate than the result for modified-CS. Similar results were obtained in an earlier work; however, the signal change model assumed there was very simple and not practically valid.
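Modified-CS exploits a support estimate T carried over from the previous time instant: it penalizes the ℓ1 norm only outside T. The paper analyzes the constrained formulation; the penalized form below, solved by proximal gradient descent, is an assumption made for a self-contained sketch, and all sizes are illustrative:

```python
import numpy as np

def modified_cs(A, y, T, lam=0.01, n_iter=800):
    """Sketch of modified-CS in penalized form:
        minimize 0.5*||A x - y||_2^2 + lam * ||x_{T^c}||_1,
    i.e. the l1 penalty is applied only OFF the known support estimate T.
    Solved here by ISTA with per-coefficient weights."""
    n = A.shape[1]
    w = np.full(n, lam)
    w[list(T)] = 0.0                         # no penalty on the known support
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - w / L, 0.0)
    return x

# Slow support change: T = {4, 11} is last frame's support; the current
# signal has additionally activated index 33.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x0 = np.zeros(80)
x0[[4, 11, 33]] = [1.0, 1.0, -0.5]
x_hat = modified_cs(A, A @ x0, T={4, 11})
```

Because only the single new index must be "paid for" in the ℓ1 penalty, fewer measurements suffice than plain basis pursuit would need, which is the intuition behind the weaker measurement assumptions in the abstract.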
Tracking Sparse Signal Sequences from Nonlinear/Non-Gaussian Measurements and Applications in Illumination-Motion Tracking
Abstract

Cited by 1 (1 self)
In this work, we develop algorithms for tracking time sequences of sparse spatial signals with slowly changing sparsity patterns, and other unknown states, from a sequence of nonlinear observations corrupted by (possibly) non-Gaussian noise. A key example of the above problem occurs in tracking moving objects across spatially varying illumination changes, where motion is the small-dimensional state while the illumination image is the sparse spatial signal satisfying the slow-sparsity-pattern-change property. Index Terms — particle filtering, compressed sensing, tracking
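For nonlinear, non-Gaussian observations of the kind this abstract targets, the workhorse is the particle filter. The following is a generic textbook bootstrap-filter step for a scalar state, not the paper's illumination-motion model; the random-walk dynamics and Gaussian likelihood are assumptions made to keep the sketch minimal:

```python
import numpy as np

def bootstrap_pf_step(particles, y, motion_std, obs_std, rng):
    """One propagate / weight / resample step of a bootstrap particle
    filter: random-walk motion model, Gaussian observation likelihood.
    A generic sketch only -- any nonlinear, non-Gaussian likelihood
    could replace the weighting line."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # propagate
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)                   # likelihood
    w /= w.sum()                                                          # normalize
    idx = rng.choice(len(particles), size=len(particles), p=w)            # resample
    return particles[idx]

# Particles drawn from a broad prior concentrate around repeated
# observations of y = 2.0.
rng = np.random.default_rng(4)
particles = rng.normal(0.0, 2.0, 2000)
for _ in range(10):
    particles = bootstrap_pf_step(particles, y=2.0, motion_std=0.1,
                                  obs_std=0.5, rng=rng)
```

In the paper's setting, the particle state would carry the low-dimensional motion variables while the high-dimensional sparse illumination image is handled by a compressed-sensing step inside each particle update.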
Dynamic Compressive Sensing: Sparse Recovery Algorithms for Streaming Signals and Video, 2013
Dynamic sparse state estimation using ℓ1-ℓ1 minimization: Adaptive-rate measurement bounds, algorithms, and applications
In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2015
Abstract

Cited by 1 (1 self)
We propose a recursive algorithm for estimating time-varying signals from a few linear measurements. The signals are assumed sparse, with unknown support, and are described by a dynamical model. In each iteration, the algorithm solves an ℓ1-ℓ1 minimization problem and estimates the number of measurements that it has to take at the next iteration. These estimates are computed based on recent theoretical results for ℓ1-ℓ1 minimization. We also provide sufficient conditions for perfect signal reconstruction at each time instant as a function of an algorithm parameter. The algorithm exhibits high performance in compressive tracking on a real video sequence, as shown in our experimental results. Index Terms — State estimation, sparsity, background subtraction, motion estimation, online algorithms
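The ℓ1-ℓ1 objective combines sparsity of x with fidelity to a prediction w supplied by the dynamical model: ||x||_1 + ||x − w||_1. Its elementwise proximal operator has a closed form (values between 0 and w_i are untouched; values outside are shrunk toward that interval), so the penalized version can be solved by proximal gradient. The paper analyzes the constrained form and its measurement bounds; the Lagrangian solver and all sizes below are assumptions made for a runnable sketch:

```python
import numpy as np

def prox_l1l1(u, w, t):
    """Elementwise prox of t*(|x| + |x - w|): clip into the interval
    between 0 and w_i, except that points more than 2t outside it are
    soft-shrunk by 2t instead."""
    lo, hi = np.minimum(0.0, w), np.maximum(0.0, w)
    x = np.clip(u, lo, hi)
    x = np.where(u > hi + 2 * t, u - 2 * t, x)
    x = np.where(u < lo - 2 * t, u + 2 * t, x)
    return x

def l1l1_recover(A, y, w, lam=0.01, n_iter=500):
    """Proximal-gradient sketch of the penalized l1-l1 problem
        0.5*||A x - y||_2^2 + lam*(||x||_1 + ||x - w||_1),
    where w is the prediction (side information) from the dynamic model."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_l1l1(x - A.T @ (A @ x - y) / L, w, lam / L)
    return x

# Good side information lets comparatively few measurements suffice.
rng = np.random.default_rng(5)
A = rng.standard_normal((25, 80)) / np.sqrt(25)
x0 = np.zeros(80)
x0[[7, 40, 66]] = [1.0, -1.0, 0.5]
w = x0.copy()
w[40] = -0.9                      # slightly stale prediction
x_hat = l1l1_recover(A, A @ x0, w)
```

The adaptive-rate idea in the abstract corresponds to choosing the number of rows of A at each instant from how good the prediction w was at the previous one.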
Compressed Sensing With Side Information: Geometrical Interpretation and Performance Bounds
Sparse Linear Dynamical System with Its Application in Multivariate Clinical Time Series
Abstract

Cited by 1 (0 self)
Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning multivariate time series. However, in general, it is difficult to set the dimension of its hidden state space. A small number of hidden states may not be able to model the complexities of a time series, while a large number of hidden states can lead to overfitting. In this paper, we study methods that impose an ℓ1 regularization on the transition matrix of an LDS model to alleviate the problem of choosing the optimal number of hidden states. We incorporate a generalized gradient descent method into the Maximum a Posteriori (MAP) framework and use Expectation Maximization (EM) to iteratively achieve sparsity on the transition matrix of an LDS model. We show that our Sparse Linear Dynamical System (SLDS) improves the predictive performance when compared to ordinary LDS on a multivariate clinical time series dataset.
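The "generalized gradient descent" step here amounts to a proximal-gradient update of the transition matrix: a gradient step on the squared prediction error, then elementwise soft-thresholding, which drives unneeded entries exactly to zero. The sketch below shows that one update in isolation, with fully observed states for simplicity; in the actual SLDS it would be the M-step inside EM over hidden states, and all dimensions are illustrative:

```python
import numpy as np

def sparse_transition_step(A, Z, Z_next, lam, step):
    """One proximal (generalized) gradient update of the LDS transition
    matrix under an l1 penalty: gradient step on
    0.5*||A Z - Z_next||_F^2, then elementwise soft-thresholding.
    Z holds states z_1..z_T as columns, Z_next holds z_2..z_{T+1}."""
    G = (A @ Z - Z_next) @ Z.T                        # gradient of the data term
    B = A - step * G
    return np.sign(B) * np.maximum(np.abs(B) - step * lam, 0.0)

# Recover a sparse transition matrix from (here, fully observed) states.
rng = np.random.default_rng(6)
A_true = np.diag(np.full(5, 0.5))
A_true[0, 3] = 0.3                                    # sparse dynamics: 6 nonzeros
Z = rng.standard_normal((5, 200))
Z_next = A_true @ Z
A = np.zeros((5, 5))
step = 1.0 / np.linalg.norm(Z @ Z.T, 2)               # 1 / Lipschitz constant
for _ in range(300):
    A = sparse_transition_step(A, Z, Z_next, lam=0.1, step=step)
```

The thresholding is what makes the hidden-state dimension self-selecting: rows and columns of A that the data do not support are zeroed out rather than overfit.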
Deep predictive coding networks
In Workshop at International Conference on Learning Representations (ICLR), 2013