Results 1–10 of 14
Compressive sensing
 IEEE Signal Processing Mag., 2007
Abstract

Cited by 696 (62 self)
The Shannon/Nyquist sampling theorem tells us that in order to not lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems (medical scanners, radars) and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state-of-the-art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a rate significantly below Nyquist.
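The sub-Nyquist acquisition this abstract describes can be illustrated with a minimal sketch: take random Gaussian measurements of an exactly sparse signal, then recover it by ℓ1 minimization (basis pursuit), recast here as a linear program via `scipy.optimize.linprog`. The dimensions, seed, and the LP split `x = u - v` are illustrative choices, not taken from the lecture itself.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 64, 32, 3                     # signal length, measurements, sparsity

# Ground-truth s-sparse signal
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Random Gaussian measurement matrix with m << n (i.e. below the Nyquist count)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Basis pursuit: min ||x||_1 s.t. Phi x = y, as an LP over x = u - v, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x))
```

With these dimensions, exact recovery (up to solver tolerance) is the typical outcome, consistent with the sub-Nyquist rates the lecture discusses.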
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process., 2011
Abstract

Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas into the perspective of practical applications.
Kronecker Compressive Sensing
Abstract

Cited by 38 (2 self)
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1D signals and 2D images, many important applications involve signals that are multidimensional; in this case, CS works best with representations that encapsulate the structure of such signals in every dimension. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed settings can be easily expressed as Kronecker product matrices. The Kronecker product formulation in these two settings enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance as well as a means to evaluate novel distributed measurement schemes.
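The Kronecker formulation this abstract proposes rests on the standard vec identity (Φ1 ⊗ Φ2) vec(X) = vec(Φ2 X Φ1ᵀ): measuring each dimension of a 2-D signal separately is equivalent to one global Kronecker-product measurement matrix acting on the vectorized signal. A minimal numerical check (dimensions and matrices are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-dimension measurement operators: Phi1 acts along columns, Phi2 along rows
n1, n2, m1, m2 = 6, 5, 4, 3
Phi1 = rng.standard_normal((m1, n1))
Phi2 = rng.standard_normal((m2, n2))
X = rng.standard_normal((n2, n1))       # 2-D signal

# Global Kronecker measurement operator on the (column-major) vectorized signal
Phi = np.kron(Phi1, Phi2)               # shape (m1*m2, n1*n2)
y_vec = Phi @ X.flatten(order="F")

# Equivalent separable measurement, applied dimension by dimension
Y = Phi2 @ X @ Phi1.T                   # shape (m2, m1)

print(np.allclose(y_vec, Y.flatten(order="F")))  # prints True
```

This equivalence is what lets distributed, per-dimension measurement schemes be analyzed as a single Kronecker-structured CS problem.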
Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms
Universal measurement bounds for structured sparse signal recovery
 In Proceedings of AISTATS
Abstract

Cited by 22 (5 self)
Standard compressive sensing results state that to exactly recover an s-sparse signal in R^p, one requires O(s · log p) measurements. While this bound is extremely useful in practice, often real world signals are not only sparse, but also exhibit structure in the sparsity pattern. We focus on group-structured patterns in this paper. Under this model, groups of signal coefficients are active (or inactive) together. The groups are predefined, but the particular set of groups that are active (i.e., in the signal support) must be learned from measurements. We show that exploiting knowledge of groups can further reduce the number of measurements required for exact signal recovery, and derive universal bounds for the number of measurements needed. The bound is universal in the sense that it only depends on the number of groups under consideration, and not the particulars of the groups (e.g., compositions, sizes, extents, overlaps, etc.). Experiments show that our result holds for a variety of overlapping group configurations.
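The group-structured model in this abstract is commonly estimated with the group lasso. A minimal sketch, assuming disjoint groups and using proximal gradient descent (ISTA) with block soft-thresholding; the problem sizes, regularization weight, and iteration count are illustrative, not from the paper:

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: prox of t * ||v||_2 applied to one group."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1 - t / nrm) * v

def group_lasso_ista(A, y, groups, lam, n_iter=500):
    """Proximal gradient for min 0.5||Ax - y||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)            # gradient step
        for g in groups:                            # prox step, group by group
            x[g] = group_soft_threshold(z[g], step * lam)
    return x

rng = np.random.default_rng(2)
p, m = 40, 25
groups = [np.arange(i, i + 4) for i in range(0, p, 4)]   # 10 disjoint groups
x_true = np.zeros(p)
x_true[groups[1]] = rng.standard_normal(4)               # 2 active groups
x_true[groups[7]] = rng.standard_normal(4)
A = rng.standard_normal((m, p)) / np.sqrt(m)
y = A @ x_true

x_hat = group_lasso_ista(A, y, groups, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With 2 of 10 groups active, m = 25 measurements suffice here even though the 8 active coefficients would strain a plain sparsity prior at this p, which is the kind of saving the paper's group-aware bounds quantify.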
Convex approaches to model wavelet sparsity patterns
 In International Conference on Image Processing (ICIP), 2011
Abstract

Cited by 19 (6 self)
Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are nonconvex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as lasso. Index Terms — wavelet modeling, deconvolution, compressed sensing
Residual reconstruction for block-based compressed sensing of video
 In Proceedings of the Data Compression Conference, 2011
Abstract

Cited by 14 (3 self)
A simple block-based compressed-sensing reconstruction for still images is adapted to video. Incorporating reconstruction from a residual arising from motion estimation and compensation, the proposed technique alternately reconstructs frames of the video sequence and their corresponding motion fields in an iterative fashion. Experimental results reveal that the proposed technique achieves significantly higher quality than a straightforward reconstruction that applies a still-image reconstruction independently frame by frame; a 3D reconstruction that exploits temporal correlation between frames merely in the form of a motion-agnostic 3D transform; and a similar, yet non-iterative, motion-compensated residual reconstruction.
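The core idea of residual reconstruction can be sketched on a toy 1-D example: measure the current frame, predict it from the previous frame, and recover only the (sparse) prediction residual from the measurement mismatch. This sketch uses an identity "motion compensation" and plain ℓ1 ISTA as stand-ins; the paper's actual block-based, motion-estimated, iterative scheme is considerably more involved.

```python
import numpy as np

def ista_l1(A, y, lam, n_iter=400):
    """ISTA for min 0.5||Ax - y||^2 + lam||x||_1 (soft-thresholding)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(3)
n, m = 64, 24
frame1 = rng.standard_normal(n)                    # previous (known) frame
residual = np.zeros(n)
residual[rng.choice(n, 3, replace=False)] = 0.5    # small, sparse frame change
frame2 = frame1 + residual                         # current frame

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y2 = Phi @ frame2                                  # CS measurements of frame 2

# Predict frame 2 from frame 1 (identity "motion compensation" here),
# then reconstruct only the sparse residual from the measurement mismatch.
prediction = frame1
r_hat = ista_l1(Phi, y2 - Phi @ prediction, lam=0.01)
frame2_hat = prediction + r_hat
print("relative error:", np.linalg.norm(frame2_hat - frame2) / np.linalg.norm(frame2))
```

The point of the residual domain is visible even in this toy: `frame2` itself is dense and unrecoverable from m = 24 measurements, but its residual against the prediction is sparse and recovers well.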
Tight Measurement Bounds for Exact Recovery of Structured Sparse Signals
Abstract

Cited by 11 (1 self)
Standard compressive sensing results state that to exactly recover an s-sparse signal in R^p, one requires O(s · log p) measurements. While this bound is extremely useful in practice, often real world signals are not only sparse, but also exhibit structure in the sparsity pattern. We focus on group-structured patterns in this paper. Under this model, groups of signal coefficients are active (or inactive) together. The groups are predefined, but the particular set of groups that are active (i.e., in the signal support) must be learned from measurements. We show that exploiting knowledge of groups can further reduce the number of measurements required for exact signal recovery, and derive universal bounds for the number of measurements needed. The bound is universal in the sense that it only depends on the number of groups under consideration, and not the particulars of the groups (e.g., compositions, sizes, extents, overlaps, etc.). Experiments show that our result holds for a variety of overlapping group configurations.
Transient acoustic signal classification using joint sparse representation
 In International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011
Abstract

Cited by 1 (1 self)
In this paper, we present a novel method, based on joint sparse representation, for acoustic signal classification with multiple measurements. The proposed method exploits the correlations among the multiple measurements through the notion of joint sparsity to improve classification accuracy. Extensive experiments are carried out on real acoustic data sets and the results are compared with conventional discriminative classifiers in order to verify the effectiveness of the proposed method. Index Terms — joint sparsity classification, sparse representation, joint sparse recovery
unknown title, 2010
Abstract
Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms