Results 1 - 10 of 98
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract - Cited by 879 (14 self)
In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ p, and the zᵢ's are i.i.d. N(0, σ²). Is it possible to estimate x reliably based on the noisy data y? To estimate x, we introduce a new estimator, called the Dantzig selector, which is the solution to the ℓ1-regularization problem min_{x̃ ∈ R^p} ‖x̃‖_ℓ1 subject to ‖Aᵀr‖_ℓ∞ ≤ (1 + t⁻¹) √(2 log p) · σ, where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability ‖x̂ − x‖²_ℓ2 ≤ C² · 2 log p · (σ² + Σᵢ min(xᵢ², σ²)). Our results are nonasymptotic and we give values for the constant C. In short, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program, which in fact can easily be recast as a convenient linear program (LP).
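The abstract notes that the Dantzig selector is a convex program that can be recast as a linear program. Below is a minimal sketch of that LP recasting in Python (not the authors' code); the dimensions n and p, the sparsity k, the noise level sigma and the choice t = 1 are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, k, sigma = 80, 200, 5, 0.05

A = rng.standard_normal((n, p)) / np.sqrt(n)   # columns have unit norm in expectation
x = np.zeros(p)
x[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x + sigma * rng.standard_normal(n)

t = 1.0
lam = (1 + 1 / t) * np.sqrt(2 * np.log(p)) * sigma   # threshold from the abstract

# LP recasting: write x = u - v with u, v >= 0 and minimise 1'(u + v)
# subject to -lam <= A'(y - A(u - v)) <= lam.
G = A.T @ A
Aty = A.T @ y
c = np.ones(2 * p)
A_ub = np.block([[G, -G], [-G, G]])
b_ub = np.concatenate([Aty + lam, lam - Aty])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x_hat = res.x[:p] - res.x[p:]
print("Dantzig selector error:", np.linalg.norm(x_hat - x))
```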
Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
"... Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for ..."
Abstract - Cited by 253 (6 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition
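A minimal sketch of the two-stage scheme this abstract describes (random sampling to capture the range of the matrix, then a deterministic factorization of the compressed matrix); the matrix size, target rank k and oversampling parameter below are illustrative choices, not values from the paper.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    """Approximate the k dominant singular triplets of A by random range sampling."""
    rng = np.random.default_rng(rng)
    # Stage 1: sample the range of A with a Gaussian test matrix and orthonormalise.
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    # Stage 2: compress A to the subspace and factor the small matrix deterministically.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 50)) @ rng.standard_normal((50, 600))  # low-rank test matrix
U, s, Vt = randomized_svd(A, k=20)
print("approximation error:", np.linalg.norm(A - (U * s) @ Vt))
```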
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals
, 2009
"... Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, alt ..."
Abstract - Cited by 158 (18 self)
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
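A toy numerical sketch of the acquisition model described above: the signal is multiplied by a random ±1 chipping sequence and then integrated and dumped at the lower rate R. Recovery here uses a simple orthogonal matching pursuit loop as a greedy stand-in for the convex program mentioned in the abstract; W, R and K are illustrative values, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
W, R, K = 256, 64, 5                         # bandlimit (Hz), sampling rate, active tones

F = np.fft.ifft(np.eye(W)) * np.sqrt(W)      # unitary inverse DFT: tone coefficients -> time samples
d = rng.choice([-1.0, 1.0], size=W)          # random chipping sequence
H = np.kron(np.eye(R), np.ones(W // R))      # integrate-and-dump over blocks of W/R chips
Phi = H @ np.diag(d) @ F                     # overall R x W measurement matrix

alpha = np.zeros(W, dtype=complex)
alpha[rng.choice(W, K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
y = Phi @ alpha                              # R measurements instead of W Nyquist samples

# Orthogonal matching pursuit: repeatedly pick the column most correlated with the residual.
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef
alpha_hat = np.zeros(W, dtype=complex)
alpha_hat[support] = coef
print("relative error:", np.linalg.norm(alpha_hat - alpha) / np.linalg.norm(alpha))
```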
High-Resolution Radar via Compressed Sensing
, 2008
"... A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N ×N grid. Assuming the number of targets K is small (i.e., K ≪ N 2), then we can transmit a sufficiently “incoherent ” pulse and employ the techniques of compressed sensing to reconstruct the ta ..."
Abstract - Cited by 153 (9 self)
A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), we can transmit a sufficiently “incoherent” pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.
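As a rough illustration of the model in this abstract, the following sketch builds the dictionary of time shifts and frequency modulations of a probing pulse over the N × N grid and checks its coherence; the random-phase pulse and the grid size are illustrative assumptions (not the paper's construction), and recovery from the echo could reuse any standard sparse solver such as the OMP sketch above.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 31, 4                                   # grid is N x N, K targets

pulse = np.exp(2j * np.pi * rng.random(N))     # random-phase probing pulse (one possible
                                               # "incoherent" pulse; an assumption here)
cols = []
for tau in range(N):                           # delay (time shift)
    for f in range(N):                         # Doppler (frequency modulation)
        c = np.roll(pulse, tau) * np.exp(2j * np.pi * f * np.arange(N) / N)
        cols.append(c / np.linalg.norm(c))
Phi = np.stack(cols, axis=1)                   # N x N^2 dictionary over the grid

G = np.abs(Phi.conj().T @ Phi)                 # pairwise column correlations
np.fill_diagonal(G, 0)
print("worst-case coherence:", G.max())        # small value indicates an incoherent pulse

scene = np.zeros(N * N, dtype=complex)
scene[rng.choice(N * N, K, replace=False)] = 1.0
echo = Phi @ scene                             # received signal for a K-sparse target scene
```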
Structured compressed sensing: From theory to applications
- IEEE TRANS. SIGNAL PROCESS
, 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract - Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice, that is, pinpointing the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers, one that attempts to put some of the existing ideas in the perspective of practical applications.
Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation
"... In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relax ..."
Abstract - Cited by 102 (22 self)
In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. In this paper, our main focus is on analysis of convex relaxation techniques. In particular, we focus on the mixed ℓ2,1 approach to multichannel recovery. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. Our probability bounds are valid and meaningful even for a small number of signals. Using the tools we develop to analyze the convex relaxation method, we also tighten the previous bounds for thresholding and SOMP.
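As a rough illustration of the mixed ℓ2,1 relaxation analyzed in this paper, the sketch below solves its regularized (Lagrangian) form with ISTA and row-wise block soft-thresholding; the problem sizes, regularization weight and iteration count are arbitrary illustrative choices and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, L_ch, k = 40, 120, 8, 5            # measurements, dimension, channels, row sparsity

A = rng.standard_normal((n, p)) / np.sqrt(n)
X_true = np.zeros((p, L_ch))
rows = rng.choice(p, k, replace=False)
X_true[rows] = rng.standard_normal((k, L_ch))          # shared support across channels
Y = A @ X_true + 0.01 * rng.standard_normal((n, L_ch))

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # 1 / ||A||_2^2
X = np.zeros((p, L_ch))
for _ in range(500):                                   # ISTA iterations
    Z = X - step * A.T @ (A @ X - Y)
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    X = np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0) * Z   # block shrinkage

print("recovered support:", np.flatnonzero(np.linalg.norm(X, axis=1) > 1e-3))
print("true support:     ", np.sort(rows))
```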
Toeplitz compressed sensing matrices with applications to sparse channel estimation
, 2010
"... Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entri ..."
Abstract - Cited by 93 (12 self)
Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then with high probability the unknown signals can be recovered by solving a tractable convex optimization. This work extends CS theory to settings where the entries of the test vectors exhibit structured statistical dependencies. It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general statistical dependencies.
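A toy sketch of the channel-estimation setting described above: a Toeplitz sensing matrix is built from a random ±1 probing sequence, and a simple ℓ1-regularized (ISTA) estimate is compared with plain least squares; the probe length, channel length, sparsity and regularization weight are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(5)
n, p, k = 60, 100, 4                                      # observations, channel taps, nonzero taps

probe = rng.choice([-1.0, 1.0], size=n + p - 1)           # random +-1 probing sequence
X = toeplitz(probe[p - 1:], probe[p - 1::-1])             # n x p Toeplitz convolution matrix
h = np.zeros(p)
h[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = X @ h + 0.05 * rng.standard_normal(n)

h_ls = np.linalg.lstsq(X, y, rcond=None)[0]               # least-squares baseline

lam, step = 1.0, 1.0 / np.linalg.norm(X, 2) ** 2          # l1-regularised estimate via ISTA
h_l1 = np.zeros(p)
for _ in range(1000):
    z = h_l1 - step * X.T @ (X @ h_l1 - y)
    h_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)

print("LS error :", np.linalg.norm(h_ls - h))
print("l1 error :", np.linalg.norm(h_l1 - h))
```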
Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions
, 2009
"... Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing l ..."
Abstract - Cited by 62 (4 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider
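Complementing the randomized-SVD sketch after the companion entry above, this framework also employs a few power (subspace) iterations when the singular values decay slowly; the following minimal illustration uses made-up sizes and q = 2 iterations, so it is a sketch under those assumptions rather than the paper's reference implementation.

```python
import numpy as np

def randomized_range_finder(A, k, q=2, oversample=10, rng=None):
    """Orthonormal basis for an approximate dominant subspace of A, with q power iterations."""
    rng = np.random.default_rng(rng)
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(Y)
    for _ in range(q):                       # applying (A A^T)^q sharpens the spectrum;
        Q, _ = np.linalg.qr(A.T @ Q)         # re-orthonormalise each pass for stability
        Q, _ = np.linalg.qr(A @ Q)
    return Q

rng = np.random.default_rng(6)
A = rng.standard_normal((2000, 400)) * (np.arange(1, 401) ** -0.5)   # slowly decaying spectrum
Q = randomized_range_finder(A, k=20)
print("residual ||A - QQ'A||:", np.linalg.norm(A - Q @ (Q.T @ A)))
```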
Compressed sensing: how sharp is the restricted isometry property?
, 2009
"... Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sens ..."
Abstract - Cited by 51 (7 self)
Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sensing, a large fraction of which have focused on the design and analysis of algorithms to recover a signal from its compressed measurements. The Restricted Isometry Property (RIP) has become a ubiquitous property assumed in their analysis. We present the best known bounds on the RIP, and in the process illustrate the way in which the combinatorial nature of compressed sensing is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners.
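To make the RIP concrete, the sketch below samples random k-column submatrices of a Gaussian matrix and records how far their squared singular values stray from 1. Checking all subsets is infeasible, so this Monte Carlo check only yields an empirical lower bound on the restricted isometry constant δ_k; the sizes and trial count are illustrative and this is not the authors' method for computing bounds.

```python
import numpy as np

rng = np.random.default_rng(7)
n, N, k, trials = 100, 400, 10, 2000

A = rng.standard_normal((n, N)) / np.sqrt(n)      # columns have unit norm in expectation
worst = 0.0
for _ in range(trials):
    S = rng.choice(N, k, replace=False)           # random k-column submatrix
    s = np.linalg.svd(A[:, S], compute_uv=False)
    worst = max(worst, abs(s.max() ** 2 - 1), abs(s.min() ** 2 - 1))

print(f"empirical lower bound on delta_{k}: {worst:.3f}")
```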