Results 1–10 of 50
Restricted isometries for partial random circulant matrices
 Appl. Comput. Harmon. Anal.
, 2010
Abstract

Cited by 47 (8 self)
In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the sth-order restricted isometry constant is small when the number m of samples satisfies m ≳ (s log n)^{3/2}, where n is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
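The measurement model this abstract describes — circular convolution with a random pulse followed by nonrandom subsampling — can be sketched in a few lines. The sizes, the ±1 pulse distribution, and the choice of retained samples below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def partial_circulant_measure(x, pulse, sample_idx):
    """Circularly convolve x with the pulse via the FFT, then keep
    only the entries indexed by sample_idx (the subsampling step)."""
    full = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(x)))
    return full[sample_idx]

rng = np.random.default_rng(0)
n, m, s = 256, 64, 5                      # pulse length, samples, sparsity

pulse = rng.choice([-1.0, 1.0], size=n)   # random ±1 pulse (illustrative)
sample_idx = np.arange(m)                 # nonrandom subsampling: first m entries

x = np.zeros(n)                           # s-sparse test signal
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

y = partial_circulant_measure(x, pulse, sample_idx)   # m measurements
```

The m retained samples are exactly m rows of the circulant matrix generated by the pulse, which is why the restricted isometry analysis of this map reduces to a question about partial random circulant matrices.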
Generalized sampling and infinite-dimensional compressed sensing
Abstract

Cited by 33 (20 self)
We introduce and analyze an abstract framework, and corresponding method, for compressed sensing in infinite dimensions. This extends the existing theory from signals in finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary, and demonstrate that existing finite-dimensional techniques are ill-suited for solving a number of important problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist-rate) sampling that allow for reconstructions in arbitrary bases. The main conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. The key to these developments is the introduction of two new concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize the fundamentally infinite-dimensional reconstruction problem.
Suprema of chaos processes and the restricted isometry property
 Comm. Pure Appl. Math.
Abstract

Cited by 33 (6 self)
We present a new bound for suprema of a special type of chaos processes indexed by a set of matrices, which is based on a chaining method. As applications we show significantly improved estimates for the restricted isometry constants of partial random circulant matrices and time-frequency structured random matrices. In both cases the required condition on the number m of rows in terms of the sparsity s and the vector length n is m ≳ s log² s · log² n. Key words: compressive sensing, restricted isometry property, structured random matrices, chaos processes, γ₂-functionals, generic chaining, partial random circulant matrices, random Gabor synthesis matrices.
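As a concrete instance of the time-frequency structured matrices this abstract mentions, a random Gabor synthesis matrix collects all n² time-frequency shifts of a random window as its columns. The construction below is a minimal sketch; the window distribution and normalization are assumptions for illustration.

```python
import numpy as np

def gabor_synthesis(g):
    """Return the n x n^2 Gabor synthesis matrix whose columns are all
    cyclic time shifts combined with all modulations of the window g."""
    n = len(g)
    t = np.arange(n)
    cols = []
    for k in range(n):                    # cyclic time shift by k
        shifted = np.roll(g, k)
        for l in range(n):                # modulation (frequency shift) by l
            cols.append(np.exp(2j * np.pi * l * t / n) * shifted)
    return np.column_stack(cols)

rng = np.random.default_rng(1)
n = 8
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g /= np.linalg.norm(g)                    # unit-norm random window
Psi = gabor_synthesis(g)                  # shape (n, n**2)
```

Since cyclic shifts and unimodular modulations preserve the Euclidean norm, every column inherits the unit norm of the window, so the restricted isometry question is entirely about correlations between distinct time-frequency shifts.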
LOW-RANK MATRIX RECOVERY VIA ITERATIVELY REWEIGHTED LEAST SQUARES MINIMIZATION
Abstract

Cited by 18 (4 self)
Abstract. We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed to simultaneously promote a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the null space property known in the context of compressed sensing, the algorithm is guaranteed to iteratively recover any matrix with an error of the order of the best rank-k approximation. In certain relevant cases, for instance the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which expedites the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. AMS subject classification: 65J22, 65K10, 52A41, 49M30. Key words: low-rank matrix recovery, iteratively reweighted least squares, matrix completion.
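A simplified sketch of the IRLS iteration for matrix completion: re-weight with W = (XXᵀ + ε²I)^(−1/2), then, column by column, minimize the weighted quadratic xᵀWx with the observed entries held fixed. The ε-schedule, its floor, the sizes, and the iteration count below are illustrative assumptions, not the paper's tuned implementation.

```python
import numpy as np

def irls_complete(M_obs, mask, n_iter=60):
    """Iteratively reweighted least squares for matrix completion
    (simplified sketch). mask[i, j] is True where M_obs[i, j] is known."""
    d1, d2 = M_obs.shape
    X = np.where(mask, M_obs, 0.0)
    eps = 1.0
    for _ in range(n_iter):
        sv = np.linalg.svd(X, compute_uv=False)
        eps = max(min(eps, sv[1]), 1e-6)   # shrink eps toward sigma_2 (target rank 1), floored
        # weight matrix W = (X X^T + eps^2 I)^{-1/2}
        evals, evecs = np.linalg.eigh(X @ X.T + eps**2 * np.eye(d1))
        W = (evecs / np.sqrt(evals)) @ evecs.T
        # minimize trace(X^T W X) with observed entries fixed, column by column:
        # free part solves x_free = -W_ff^{-1} W_fo x_obs
        for j in range(d2):
            obs, free = mask[:, j], ~mask[:, j]
            if free.any():
                X[free, j] = -np.linalg.solve(W[np.ix_(free, free)],
                                              W[np.ix_(free, obs)] @ M_obs[obs, j])
    return X

rng = np.random.default_rng(2)
u, v = rng.standard_normal(8), rng.standard_normal(8)
M = np.outer(u, v)                         # rank-1 ground truth
mask = rng.uniform(size=M.shape) < 0.8     # observe roughly 80% of the entries
X_hat = irls_complete(np.where(mask, M, 0.0), mask)
```

Because the constraint fixes the observed entries exactly, each iteration only rewrites the unknown entries; the reweighting drives the iterates toward a small (smoothed) nuclear norm.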
Sparse Legendre expansions via ℓ1-minimization
 Journal of Approximation Theory
Abstract

Cited by 16 (4 self)
We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from m ≃ s log⁴(N) random samples that are chosen independently according to the Chebyshev probability measure dν(x) = π⁻¹(1 − x²)⁻¹/² dx. As an efficient recovery method, ℓ1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces.
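The two ingredients of the argument — sampling from the Chebyshev measure and preconditioning the Legendre matrix so that its entries become uniformly bounded — can be sketched as follows. The sizes are illustrative assumptions; the weight w(x) = (π/2)^{1/2}(1 − x²)^{1/4} is the standard choice for this preconditioning.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(3)
m, N = 80, 40                 # number of samples and maximal degree (illustrative)

# draw sample points from the Chebyshev measure dν(x) = π⁻¹(1 − x²)^(−1/2) dx:
# if u is uniform on [0, 1), then cos(πu) has exactly this distribution
x = np.cos(np.pi * rng.uniform(size=m))

# orthonormal Legendre system L_k = sqrt(2k + 1) P_k on [-1, 1] w.r.t. dx/2
V = np.column_stack([np.sqrt(2 * k + 1) * Legendre.basis(k)(x) for k in range(N)])

# preconditioning weight: the entries of diag(w) @ V are uniformly bounded,
# which is the key to the restricted isometry argument
w = np.sqrt(np.pi / 2) * (1 - x**2) ** 0.25
A = (w[:, None] * V) / np.sqrt(m)     # normalized sampling matrix
```

By Bernstein's inequality for Legendre polynomials, |w(x) L_k(x)| ≤ √3 for every degree k and every x in [−1, 1], so the weighted rows form a bounded orthonormal system even though the unweighted Legendre polynomials grow like √(2k+1) at the endpoints.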
Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing
, 2013
Abstract

Cited by 13 (4 self)
In this paper we bridge the substantial gap between existing compressed sensing theory and its current use in real-world applications. We do so by introducing a new mathematical framework for overcoming the so-called coherence ...
Optimal quantization for compressive sensing under message passing reconstruction
 in Proc. IEEE Int. Symp. Inf. Theory
, 2011
Abstract

Cited by 13 (2 self)
Abstract—We consider the optimal quantization of compressive sensing measurements along with estimation from quantized samples using generalized approximate message passing (GAMP). GAMP is an iterative reconstruction scheme, inspired by the belief propagation algorithm on bipartite graphs, which generalizes approximate message passing (AMP) to arbitrary measurement channels. Its asymptotic error performance can be accurately predicted and tracked through the state evolution formalism. We utilize these results to design mean-square optimal scalar quantizers for GAMP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers.
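The paper designs quantizers matched to GAMP's state evolution; as a generic baseline for the "mean-square optimal scalar quantizer" idea, Lloyd's algorithm for a Gaussian source is sketched below. The source distribution, level count, and quantile initialization are assumptions chosen for illustration, not the paper's GAMP-matched design.

```python
import numpy as np

def lloyd_max(samples, levels, n_iter=50):
    """Lloyd's algorithm: alternate nearest-codeword assignment and
    centroid updates, monotonically reducing mean-square error."""
    c = np.quantile(samples, (np.arange(levels) + 0.5) / levels)  # quantile init
    for _ in range(n_iter):
        idx = np.abs(samples[:, None] - c[None, :]).argmin(axis=1)
        for k in range(levels):
            if np.any(idx == k):          # leave empty cells at their old centroid
                c[k] = samples[idx == k].mean()
    return np.sort(c)

rng = np.random.default_rng(4)
z = rng.standard_normal(200_000)          # stand-in for the scalar to be quantized
codebook = lloyd_max(z, levels=4)
idx = np.abs(z[:, None] - codebook[None, :]).argmin(axis=1)
mse = np.mean((z - codebook[idx]) ** 2)
```

The point of the paper is that quantizing to minimize this per-sample MSE is not the same as minimizing the final reconstruction error of GAMP; the state evolution formalism is what lets the quantizer be optimized for the end-to-end error instead.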