Results 1–10 of 109
Compressive sensing
IEEE Signal Processing Magazine, 2007
Abstract

Cited by 305 (40 self)
The Shannon/Nyquist sampling theorem tells us that in order to not lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems (medical scanners, radars) and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state of the art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a rate significantly below Nyquist.
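The measurement-plus-optimization pipeline this abstract describes can be sketched in a few lines of NumPy. This is a toy illustration, not the lecture's code: all dimensions are arbitrary, and greedy orthogonal matching pursuit stands in for the convex optimization usually used for reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 100, 5              # ambient dimension, measurements, sparsity

# A k-sparse signal and a random Gaussian measurement matrix
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                          # m << n linear measurements, no uniform sampling

# Orthogonal matching pursuit: repeatedly pick the column most
# correlated with the residual, then re-fit on the chosen support.
S, r = [], y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
```

With Gaussian measurements and k small relative to m, recovery is exact with overwhelming probability, even though far fewer than n samples were taken.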
Block-sparse signals: Uncertainty relations and efficient recovery
IEEE Transactions on Signal Processing, 2010
Abstract

Cited by 51 (13 self)
We consider efficient methods for the recovery of block-sparse signals—i.e., sparse signals that have nonzero entries occurring in clusters—from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem. Index Terms—Basis pursuit, block-sparsity, compressed sensing, matching pursuit.
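A minimal sketch of the block version of orthogonal matching pursuit the abstract refers to, under the simplifying assumption of equal-size blocks and with illustrative dimensions: at each step the block whose columns carry the most residual correlation energy is selected, then all selected blocks are re-fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d, k = 80, 160, 4, 3         # measurements, dimension, block size, active blocks
nblocks = n // d

# Block k-sparse signal: nonzeros cluster in k of the blocks
x = np.zeros(n)
active = rng.choice(nblocks, k, replace=False)
for b in active:
    x[b * d:(b + 1) * d] = rng.standard_normal(d)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Block-OMP: select the block with the largest residual correlation
# energy, then least-squares re-fit on all selected blocks.
sel, r = [], y.copy()
for _ in range(k):
    scores = [np.linalg.norm(A[:, b * d:(b + 1) * d].T @ r) for b in range(nblocks)]
    sel.append(int(np.argmax(scores)))
    cols = np.concatenate([np.arange(b * d, (b + 1) * d) for b in sel])
    coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    r = y - A[:, cols] @ coef

x_hat = np.zeros(n)
x_hat[cols] = coef
```

Treating the same signal as generically 12-sparse would require a stronger condition on A; selecting whole blocks is exactly the "additional structure" the abstract exploits.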
Compressed Sensing of Analog Signals in Shift-Invariant Spaces
2009
Abstract

Cited by 50 (33 self)
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
Abstract

Cited by 37 (24 self)
Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
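The sense in which a beta-process prior "naturally infers an appropriate dictionary size" can be seen from a forward draw of its finite (truncated) beta-Bernoulli approximation. This sketch only samples the prior over atom-usage indicators; it does not implement the paper's inference, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
K, N, a, b = 100, 50, 2.0, 2.0     # truncation level, image patches, beta-process params

# pi[k]: probability that candidate atom k is used at all;
# the Beta(a/K, b*(K-1)/K) prior pushes most pi[k] toward zero.
pi = rng.beta(a / K, b * (K - 1) / K, size=K)

# Z[i, k] = True if patch i uses atom k
Z = rng.random((N, K)) < pi
used = int(Z.any(axis=0).sum())    # atoms used by at least one patch
```

Even with K = 100 candidate atoms, typically only a fraction end up used by any patch; the effective dictionary size is inferred from the data rather than fixed in advance.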
Compressed Sensing of Block-Sparse Signals: Uncertainty Relations and Efficient Recovery
2009
Abstract

Cited by 29 (10 self)
We consider compressed sensing of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Democracy in Action: Quantization, Saturation, and Compressive Sensing
Abstract

Cited by 23 (15 self)
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. A key hallmark of CS is that it enables sub-Nyquist sampling for signals, images, and other data. In this paper, we explore and exploit another heretofore relatively unexplored hallmark, the fact that certain CS measurement systems are democratic, which means that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we rethink how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from Shannon-Nyquist uniform sampling, then we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer's dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when it operates at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
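The first recovery method the abstract mentions, discarding saturated measurements, is easy to simulate. This is a hypothetical sketch: it clips measurements at a gain level chosen to force a nonzero saturation rate, drops the clipped ones, and runs plain orthogonal matching pursuit on the survivors as a simple stand-in for the paper's own solvers; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 128, 96, 4
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Saturating quantizer: values beyond +/-G clip to the rail.
G = 0.6 * np.max(np.abs(y))        # chosen so some measurements must saturate
y_sat = np.clip(y, -G, G)

# Democracy in action: each measurement carries roughly equal
# information, so the clipped ones can simply be discarded.
keep = np.abs(y_sat) < G
Ak, yk = A[keep], y_sat[keep]

# Plain OMP on the surviving measurements
S, r = [], yk.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(Ak.T @ r))))
    coef, *_ = np.linalg.lstsq(Ak[:, S], yk, rcond=None)
    r = yk - Ak[:, S] @ coef
x_hat = np.zeros(n)
x_hat[S] = coef
```

Recovery still succeeds from the unsaturated subset because the remaining rows of A are themselves a valid (smaller) CS measurement matrix.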
Time Delay Estimation from Low Rate Samples: A Union of Subspaces Approach
2010
Abstract

Cited by 23 (21 self)
Time delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Previous methods for time delay recovery either operate on the analog received signal, or require sampling at the Nyquist rate of the transmitted pulse. In this paper, we develop a unified approach to time delay estimation from low-rate samples. This problem can be formulated in the broader context of sampling over an infinite union of subspaces. Although sampling over unions of subspaces has been receiving growing interest, previous results either focus on unions of finite-dimensional subspaces, or finite unions. The framework we develop here leads to perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses, and allows for a variety of different sampling methods. The sampling rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. This result can be viewed as a sampling theorem over an infinite union of infinite-dimensional subspaces. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation, we develop sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.
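The core ESPRIT step the recovery relies on can be sketched in isolation: unknown delays appear (in the frequency domain) as complex exponentials, and ESPRIT estimates their phases from the shift invariance of the signal subspace. The sketch below estimates the phases of a noise-free sum of exponentials; the mapping of phases back to physical delays and the paper's low-rate sampling front end are omitted, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 64, 3
true_w = np.array([0.5, 1.3, 2.4])          # phases per sample (stand-ins for delays)
amps = np.array([1.0, 0.8, 1.2])
t = np.arange(N)
x = (amps * np.exp(1j * np.outer(t, true_w))).sum(axis=1)

# Hankel data matrix; its column space is the k-dimensional signal subspace
L = N // 2
H = np.array([x[i:i + L] for i in range(N - L + 1)]).T
U, _, _ = np.linalg.svd(H)
Us = U[:, :k]

# ESPRIT: dropping the last row of the signal subspace and rotating by Phi
# gives the subspace with its first row dropped; the eigenvalues of Phi
# are exp(1j * w), so the phases fall out of eig(Phi).
Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
w_hat = np.sort(np.angle(np.linalg.eigvals(Phi)))
```

Because the subspace rotation is exact in the noise-free case, the recovered phases match the true ones to numerical precision.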
Multichannel sampling of pulse streams at the rate of innovation
IEEE Transactions on Signal Processing, 2011
Abstract

Cited by 18 (4 self)
We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches.
CoherenceBased Performance Guarantees for Estimating a Sparse Vector Under Random Noise
Abstract

Cited by 17 (10 self)
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements Ax0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising (BPDN), orthogonal matching pursuit (OMP), and thresholding. These algorithms are shown to achieve near-oracle performance with high probability, assuming that x0 is sufficiently sparse. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. Differences in the precise conditions required for the performance guarantees of each algorithm are manifested in the observed performance at high and low signal-to-noise ratios. This provides insight on the advantages and drawbacks of ℓ1 relaxation techniques such as BPDN as opposed to greedy approaches such as OMP and thresholding.
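Of the three algorithms analyzed, thresholding is the simplest to state in code: keep the k atoms most correlated with the observation, then least-squares on that support. The sketch below is illustrative only (a unit-norm Gaussian dictionary and well-separated coefficients, chosen so thresholding succeeds), not the paper's experimental setup; BPDN and OMP are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, sigma = 300, 150, 3, 0.01
x0 = np.zeros(n)
support = rng.choice(n, k, replace=False)
x0[support] = 2.0 * rng.choice([-1.0, 1.0], k)      # well-separated coefficients
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                      # unit-norm dictionary atoms
w = sigma * rng.standard_normal(m)                  # white Gaussian noise
y = A @ x0 + w

# Thresholding: keep the k atoms most correlated with y,
# then least-squares on that support.
S = np.argsort(np.abs(A.T @ y))[-k:]
coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
x_hat = np.zeros(n)
x_hat[S] = coef
```

When the smallest nonzero coefficient is large relative to the dictionary coherence and the noise, the selected support is correct and the residual error is at the noise level, which is the "near-oracle" behavior the guarantees describe.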
Structured compressed sensing: From theory to applications
IEEE Transactions on Signal Processing, 2011
Abstract

Cited by 17 (6 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference that attempts to put some of the existing ideas in the perspective of practical applications. Index Terms—Approximation algorithms, compressed sensing, compression algorithms, data acquisition, data compression, sampling methods.