Results 1 - 10 of 263
Compressive Sensing and Structured Random Matrices
- RADON SERIES COMP. APPL. MATH, 1–95, DE GRUYTER
, 2011
"... These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to ..."
Abstract
-
Cited by 157 (18 self)
- Add to MetaCart
(Show Context)
These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
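For readers who want to see the ℓ1-minimization step these notes build on in concrete form, here is a minimal sketch (not taken from the notes themselves) of basis pursuit written as its standard linear-programming reformulation; the sensing matrix, the test signal, and all sizes below are illustrative assumptions.

# Minimal basis-pursuit sketch: min ||x||_1 subject to A x = y,
# rewritten as a linear program over the split x = u - v with u, v >= 0.
# A, y, and the dimensions are illustrative, not from the notes.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 128, 5                               # measurements, ambient dim, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)       # generic random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

c = np.ones(2 * n)                                 # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                          # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))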
Sensing by Random Convolution
- IEEE Int. Work. on Comp. Adv. Multi-Sensor Adaptive Proc. (CAMSAP)
, 2007
"... Abstract. This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy in that an n-dimensional signal which is S sparse in a ..."
Abstract
-
Cited by 112 (7 self)
- Add to MetaCart
(Show Context)
This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy in that an n-dimensional signal which is S sparse in any fixed representation can be recovered from m ≳ S log n measurements. We discuss two imaging scenarios — radar and Fourier optics — where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, allowing us to recover high-resolution signals from low-resolution measurements. 1. Introduction. The new field of compressive sensing (CS) has given us a fresh look at data acquisition, one of the fundamental tasks in signal processing. The message of this theory can be summarized succinctly [7, 8, 10, 15, 32]: the number of measurements we need to reconstruct a signal depends on its sparsity rather than its bandwidth. These measurements, however, are different than the samples that …
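A rough sketch of the measurement process described above, under the assumption that the random waveform acts by circular convolution (implemented with the FFT) and that the subsampler keeps m random time-domain entries; the pulse model and all parameters are illustrative, not taken from the paper.

# Sensing by random convolution, sketched: circularly convolve the signal with a
# random pulse (via the FFT), then keep m randomly chosen time-domain samples.
# Pulse model and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 80

x = np.zeros(n)                                   # an S-sparse test signal
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)

h = rng.standard_normal(n)                        # random waveform (pulse)
conv = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real   # circular convolution

keep = rng.choice(n, m, replace=False)            # random time-domain subsampling
y = conv[keep]                                    # the m compressive measurements
# y can now be handed to any sparse solver, e.g. the basis-pursuit sketch above.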
Structured compressed sensing: From theory to applications
- IEEE TRANS. SIGNAL PROCESS
, 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract
-
Cited by 104 (16 self)
- Add to MetaCart
(Show Context)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice, that is, pinpointing the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers, putting some of the existing ideas into the perspective of practical applications.
Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation
"... In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relax ..."
Abstract
-
Cited by 102 (22 self)
- Add to MetaCart
(Show Context)
In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. In this paper, our main focus is on analysis of convex relaxation techniques. In particular, we focus on the mixed ℓ2,1 approach to multichannel recovery. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. Our probability bounds are valid and meaningful even for a small number of signals. Using the tools we develop to analyze the convex relaxation method, we also tighten the previous bounds for thresholding and SOMP.
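To make the mixed ℓ2,1 relaxation concrete, here is a small proximal-gradient (ISTA) sketch for the program min_X 0.5‖AX − Y‖_F² + λ Σ_k ‖X[k,:]‖₂, one common way to solve it numerically; the step size, λ, and problem sizes are illustrative assumptions, and the paper's analysis is not tied to this particular solver.

# Proximal-gradient (ISTA) sketch for the mixed l2,1 multichannel program
#     min_X  0.5 * ||A X - Y||_F^2 + lam * sum_k ||X[k, :]||_2 ,
# whose prox step shrinks whole rows of X toward zero (joint sparsity).
# lam, the step size, and the sizes are illustrative assumptions.
import numpy as np

def l21_recover(A, Y, lam=0.1, n_iter=500):
    n = A.shape[1]
    L = Y.shape[1]
    X = np.zeros((n, L))
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        G = X - step * A.T @ (A @ X - Y)           # gradient step on the data-fit term
        norms = np.linalg.norm(G, axis=1, keepdims=True)
        shrink = np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)
        X = shrink * G                             # row-wise soft thresholding
    return X

Note that with a single channel (L = 1) this reduces to ordinary ℓ1-regularized recovery, which is exactly the single-channel baseline the average-case analysis compares against.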
A Probabilistic and RIPless Theory of Compressed Sensing
, 2010
"... This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, ..."
Abstract
-
Cited by 95 (3 self)
- Add to MetaCart
(Show Context)
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) — they make use of a much weaker notion — or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
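For reference, recovery statements of this kind concern a noise-aware ℓ1 program; one standard form is written below. The paper itself analyses closely related estimators (LASSO- and Dantzig-selector-type variants), so treat this as a generic sketch rather than the exact program used there.
\[
\hat{x} \;=\; \arg\min_{x \in \mathbb{C}^n} \ \|x\|_1
\quad \text{subject to} \quad \|Ax - y\|_2 \le \varepsilon ,
\]
where the rows of $A$ are drawn independently from the distribution $F$, $\varepsilon$ bounds the measurement noise, and the abstract's claim is that roughly $m \gtrsim s \log n$ such measurements suffice to faithfully recover an $s$-sparse signal.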
Compressed Channel Sensing: A New Approach to Estimating Sparse Multipath Channels
"... High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most co ..."
Abstract
-
Cited by 87 (9 self)
- Add to MetaCart
(Show Context)
High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically based on linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that becomes more pronounced as the signal space dimension grows (e.g., due to large bandwidth or a large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, termed compressed channel sensing, can potentially achieve a target reconstruction error using far less energy and, in many instances, less latency and bandwidth than dictated by traditional least-squares-based training methods.
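A sketch of the estimation setup this abstract refers to: the received training samples are the convolution of a known probe sequence with the (sparse) channel impulse response, so the measurement matrix is Toeplitz and any sparse solver can be applied to it. The probe model, noise level, and sizes below are illustrative assumptions, not the paper's specific design.

# Sparse channel estimation setup, sketched: y = T h + noise, with T the Toeplitz
# convolution matrix built from a known training sequence and h a sparse channel.
# Probe model, noise level, and sizes are illustrative assumptions.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
n_train, n_taps, n_paths = 64, 40, 4

probe = rng.choice([-1.0, 1.0], size=n_train)            # known +/-1 training sequence

h = np.zeros(n_taps)                                     # sparse multipath channel
h[rng.choice(n_taps, n_paths, replace=False)] = rng.standard_normal(n_paths)

# Toeplitz convolution matrix: column k is the probe delayed by k samples.
col = np.concatenate([probe, np.zeros(n_taps - 1)])
T = toeplitz(col, np.zeros(n_taps))

y = T @ h + 0.01 * rng.standard_normal(len(col))         # noisy received training samples
# Applying any sparse recovery method (l1 minimization, OMP, ...) to (T, y) estimates h.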
Non-asymptotic theory of random matrices: extreme singular values
- PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2010
"... ..."
Random sampling of sparse trigonometric polynomials
- Appl. Comput. Harm. Anal
, 2006
"... We investigate the problem of reconstructing sparse multivariate trigonometric polynomials from few randomly taken samples by Basis Pursuit and greedy algorithms such as Orthogonal Matching Pursuit (OMP) and Thresholding. While recovery by Basis Pursuit has recently been studied by several authors, ..."
Abstract
-
Cited by 75 (21 self)
- Add to MetaCart
We investigate the problem of reconstructing sparse multivariate trigonometric polynomials from few randomly taken samples by Basis Pursuit and greedy algorithms such as Orthogonal Matching Pursuit (OMP) and Thresholding. While recovery by Basis Pursuit has recently been studied by several authors, we provide theoretical results on the success probability of reconstruction via Thresholding and OMP for both a continuous and a discrete probability model for the sampling points. We present numerical experiments, which indicate that usually Basis Pursuit is significantly slower than greedy algorithms, while the recovery rates are very similar.
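The thresholding scheme analysed here can be stated in a few lines: correlate the random samples with each Fourier atom, keep the s largest correlations, and solve a small least-squares problem on that support. The sketch below does this for a 1-D discrete model with illustrative sizes; it is a simplified reading of the algorithm, not the authors' code.

# Thresholding sketch for sparse trigonometric recovery from random samples:
# correlate with every Fourier atom, keep the s strongest, least-squares refit.
# The 1-D discrete model and the sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 256, 60, 5

coeffs = np.zeros(n, dtype=complex)                      # sparse Fourier coefficients
support = rng.choice(n, s, replace=False)
coeffs[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

t = rng.choice(n, m, replace=False)                      # random sampling points
F = np.exp(2j * np.pi * np.outer(t, np.arange(n)) / n) / np.sqrt(n)   # sampled atoms
y = F @ coeffs                                           # the m observed samples

corr = np.abs(F.conj().T @ y)                            # correlation with each atom
est_support = np.argsort(corr)[-s:]                      # thresholding: keep s largest
c_hat, *_ = np.linalg.lstsq(F[:, est_support], y, rcond=None)
print("support recovered:", set(est_support) == set(support))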
Image Signature: Highlighting sparse salient regions
- IEEE Transactions on Pattern Analysis and Machine Intelligence
"... Abstract—We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with ..."
Abstract
-
Cited by 68 (1 self)
- Add to MetaCart
(Show Context)
We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods. Index Terms—Saliency, visual attention, change blindness, sign function, sparse signal analysis.
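The descriptor itself is compact enough to sketch: take the sign of the 2-D DCT of the image, transform back, then square and smooth the result to obtain a saliency map. The scipy-based sketch below follows that recipe for a single grayscale channel; the blur width and the input handling are illustrative choices, not the paper's exact settings.

# Image-signature saliency sketch: sign of the 2-D DCT, inverse transform,
# pointwise square, Gaussian smoothing. Blur width and preprocessing are
# illustrative choices for a single grayscale image.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, blur_sigma=3.0):
    """img: 2-D float array (grayscale). Returns a saliency map of the same shape."""
    signature = np.sign(dctn(img, norm="ortho"))           # the image signature
    recon = idctn(signature, norm="ortho")                 # back to the spatial domain
    saliency = gaussian_filter(recon * recon, blur_sigma)  # squared and smoothed
    return saliency / (saliency.max() + 1e-12)             # normalize to [0, 1]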