Results 1–10 of 57
Compressed sensing
 IEEE Trans. Inform. Theory
, 2006
"... Abstract—Suppose is an unknown vector in (a digital image or signal); we plan to measure general linear functionals of and then reconstruct. If is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measureme ..."
Abstract

Cited by 1702 (18 self)
Abstract—Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ2 error O(N^{1/2−1/p}).
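To make the compressibility claim concrete, here is a minimal numpy sketch (our illustration, not code from the paper): coefficients decaying like n^{−1/p} satisfy the power-decay condition, and the ℓ2 error of keeping only the N largest terms should track the quoted O(N^{1/2−1/p}) rate. The signal length m and exponent p below are arbitrary choices.

```python
import numpy as np

# Sorted coefficient magnitudes decaying at the rate n^(-1/p), 0 < p <= 1.
m, p = 100_000, 0.5
theta = np.arange(1, m + 1) ** (-1.0 / p)

for N in (10, 100, 1000):
    tail_error = np.sqrt(np.sum(theta[N:] ** 2))  # l2 error of best-N-term approximation
    rate = N ** (0.5 - 1.0 / p)                   # predicted O(N^(1/2 - 1/p)) scaling
    print(N, tail_error / rate)                   # ratio stays roughly constant
```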
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 824 (16 self)
Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ ... ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^{−1/p}. We take measurements 〈f, Xk〉, k = 1, ..., K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries.
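The quantities in this abstract are easy to compute directly. The following sketch (an illustration under assumed parameters, not the authors' code) finds the smallest constant C for which a signal obeys the power decay law |f|(n) ≤ C · n^{−1/p}, then forms the Gaussian measurements 〈f, Xk〉.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, p = 4096, 300, 0.8

f = rng.standard_normal(N) * (rng.uniform(size=N) < 0.01)  # a sparse test signal
decreasing = np.sort(np.abs(f))[::-1]                      # |f|(1) >= ... >= |f|(N)
n = np.arange(1, N + 1)
C = np.max(decreasing * n ** (1.0 / p))  # smallest C with |f|(n) <= C * n^(-1/p)

X = rng.standard_normal((K, N))          # K rows: N-dimensional Gaussian vectors X_k
y = X @ f                                # measurements <f, X_k>, k = 1, ..., K
print("weak-lp constant C:", C, " measurements:", y.shape)
```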
A Simple Proof of the Restricted Isometry Property for Random Matrices
 Constr. Approx.
, 2008
"... We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for random matrices that underlies Compressed Sensing. Our approach has two main ingredients: (i) concentration inequalities for random inner products that have recently provided algorithmical ..."
Abstract

Cited by 296 (56 self)
We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for the random matrices that underlie Compressed Sensing. Our approach has two main ingredients: (i) concentration inequalities for random inner products that have recently provided algorithmically simple proofs of the Johnson–Lindenstrauss lemma; and (ii) covering numbers for finite-dimensional balls in Euclidean space. This leads to an elementary proof of the Restricted Isometry Property and brings out connections between Compressed Sensing and the Johnson–Lindenstrauss lemma. As a result, we obtain simple and direct proofs of Kashin’s theorems on widths of finite balls in Euclidean space (and their improvements due to Gluskin) and proofs of the existence of optimal Compressed Sensing measurement matrices. In the process, we also prove that these measurements have a certain universality with respect to the sparsity-inducing basis.
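A crude empirical companion to the abstract (our sketch, not the paper's argument): sample random k-sparse vectors and record how far ‖Ax‖²/‖x‖² strays from 1 for a scaled Gaussian matrix. Random sampling only lower-bounds the restricted isometry constant δ_k, but it displays the concentration the proof exploits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, k, trials = 128, 512, 8, 2000
A = rng.standard_normal((n, N)) / np.sqrt(n)  # scaled so E||Ax||^2 = ||x||^2

worst = 0.0
for _ in range(trials):
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))

print("empirical lower bound on delta_k:", worst)  # small when n >> k log(N/k)
```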
Compressed sensing and best k-term approximation
 J. Amer. Math. Soc.
, 2009
"... Compressed sensing is a new concept in signal processing where one seeks to minimize the number of measurements to be taken from signals while still retaining the information necessary to approximate them well. The ideas have their origins in certain abstract results from functional analysis and app ..."
Abstract

Cited by 146 (11 self)
Compressed sensing is a new concept in signal processing where one seeks to minimize the number of measurements to be taken from signals while still retaining the information necessary to approximate them well. The ideas have their origins in certain abstract results from functional analysis and approximation theory by Kashin [23] but were recently brought into the forefront by the work of Candès, Romberg and Tao [7, 5, 6] and Donoho [9], who constructed concrete algorithms and showed their promise in application. There remain several fundamental questions on both the theoretical and practical sides of compressed sensing. This paper is primarily concerned with one of these theoretical issues, revolving around just how well compressed sensing can approximate a given signal from a given budget of fixed linear measurements, as compared to adaptive linear measurements. More precisely, we consider discrete signals x ∈ R^N, allocate n < N linear measurements of x, and describe the range of k for which these measurements encode enough information to recover x in the sense of ℓp to the accuracy of best k-term approximation. We also consider the problem of having such accuracy only with high probability.
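For reference, the benchmark the paper measures against, the error of best k-term approximation, is simple to compute: keep the k largest-magnitude entries and take the ℓp norm of the rest. A small helper in our own notation (the function name is ours, writing σ_k(x)_p as best_k_term_error):

```python
import numpy as np

def best_k_term_error(x, k, p=2.0):
    """lp norm of the N - k smallest-magnitude entries of x, i.e. sigma_k(x)_p."""
    tail = np.sort(np.abs(x))[:-k] if k > 0 else np.abs(x)
    return np.sum(tail ** p) ** (1.0 / p)

x = np.array([5.0, -0.1, 3.0, 0.02, -2.0, 0.3])
print(best_k_term_error(x, k=3))  # l2 norm of the three smallest entries
```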
Iterative hard thresholding for compressed sensing
 Appl. Comput. Harmon. Anal.
, 2009
"... Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery probl ..."
Abstract

Cited by 136 (13 self)
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near-optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper):
• It gives near-optimal error guarantees.
• It is robust to observation noise.
• It succeeds with a minimum number of observations.
• It can be used with any sampling operator for which the operator and its adjoint can be computed.
• The memory requirement is linear in the problem size.
• Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
• It requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal.
• Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
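The iteration being analysed is x ← H_s(x + A^T(y − Ax)), where H_s keeps the s largest-magnitude entries. A minimal numpy sketch follows; the unit step size assumes the operator is rescaled so that ‖A‖2 < 1, as in the paper's analysis, and the problem sizes are our own choices.

```python
import numpy as np

def iht(A, y, s, iters=300):
    """Iterative hard thresholding: x <- H_s(x + A.T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x)          # gradient step on ||y - Ax||^2 / 2
        keep = np.argsort(np.abs(g))[-s:]  # support of the s largest magnitudes
        x = np.zeros_like(g)
        x[keep] = g[keep]                  # hard threshold H_s
    return x

rng = np.random.default_rng(3)
n, N, s = 80, 256, 5
A = rng.standard_normal((n, N))
A /= np.linalg.norm(A, 2) * 1.01           # rescale to enforce ||A||_2 < 1
x0 = np.zeros(N)
x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
x_hat = iht(A, A @ x0, s)
print("recovery error:", np.linalg.norm(x_hat - x0))
```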
An Elementary Introduction to Modern Convex Geometry
 in Flavors of Geometry
, 1997
"... Introduction to Modern Convex Geometry KEITH BALL Contents Preface 1 Lecture 1. Basic Notions 2 Lecture 2. Spherical Sections of the Cube 8 Lecture 3. Fritz John's Theorem 13 Lecture 4. Volume Ratios and Spherical Sections of the Octahedron 19 Lecture 5. The BrunnMinkowski Inequality and Its Ext ..."
Abstract

Cited by 98 (2 self)
Introduction to Modern Convex Geometry, by Keith Ball. Contents: Preface; Lecture 1, Basic Notions; Lecture 2, Spherical Sections of the Cube; Lecture 3, Fritz John's Theorem; Lecture 4, Volume Ratios and Spherical Sections of the Octahedron; Lecture 5, The Brunn–Minkowski Inequality and Its Extensions; Lecture 6, Convolutions and Volume Ratios: The Reverse Isoperimetric Problem; Lecture 7, The Central Limit Theorem and Large Deviation Inequalities; Lecture 8, Concentration of Measure in Geometry; Lecture 9, Dvoretzky's Theorem; Acknowledgements; References; Index. Preface: These notes are based, somewhat loosely, on three series of lectures given by myself, J. Lindenstrauss and G. Schechtman, during the Introductory Workshop in Convex Geometry held at the Mathematical Sciences Research Institute in Berkeley, early in 1996. A fourth series was given by B. Bollobás, on rapid mixing and random volume algorithms; they are found elsewhere.
Geometric approach to error-correcting codes and reconstruction of signals
 Int. Math. Res. Not.
, 2005
"... ..."
Fast solution of ℓ1-norm minimization problems when the solution may be sparse
, 2006
"... The minimum ℓ1norm solution to an underdetermined system of linear equations y = Ax, is often, remarkably, also the sparsest solution to that system. This sparsityseeking property is of interest in signal processing and information transmission. However, generalpurpose optimizers are much too slo ..."
Abstract

Cited by 46 (1 self)
The minimum ℓ1-norm solution to an underdetermined system of linear equations y = Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for ℓ1 minimization in many large-scale applications. The Homotopy method was originally proposed by Osborne et al. for solving noisy overdetermined ℓ1-penalized least squares problems. We here apply it to solve the noiseless underdetermined ℓ1-minimization problem min ‖x‖1 subject to y = Ax. We show that Homotopy runs much more rapidly than general-purpose LP solvers when sufficient sparsity is present. Indeed, the method often has the following k-step solution property: if the underlying solution has only k nonzeros, the Homotopy method reaches that solution in only k iterative steps. When this property holds and k is small compared to the problem size, ℓ1-minimization problems with k-sparse solutions can be solved in a fraction of the cost of solving one full-sized linear system. We demonstrate this k-step solution property for two kinds of problem suites. First,
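For contrast, here is the general-purpose route the abstract says is too slow: min ‖x‖1 subject to y = Ax becomes a linear program after splitting x = u − v with u, v ≥ 0. This sketch uses scipy's stock LP solver and is emphatically not the Homotopy method; it is the baseline Homotopy outruns.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP in the split variables (u, v)."""
    n = A.shape[1]
    c = np.ones(2 * n)                   # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])            # constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y)  # default bounds give u, v >= 0
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(4)
n, N, k = 40, 120, 4
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
print("max error:", np.max(np.abs(l1_min(A, A @ x0) - x0)))
```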
Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
"... Abstract. Lowrank matrix approximations, such as the truncated singular value decomposition and the rankrevealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful t ..."
Abstract

Cited by 40 (0 self)
Abstract. Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix.
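The two-stage framework the abstract describes reduces to a few lines for this model problem. Below is a compact sketch of a randomized SVD in that spirit (the function name and oversampling amount are our conventional choices, not values fixed by the paper): random sampling captures the range, then the small compressed matrix is factored deterministically.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate the k dominant SVD components via random range sampling."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the sampled range
    B = Q.T @ A                          # compress A to that subspace
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_hat)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(6)
A = rng.standard_normal((500, 15)) @ rng.standard_normal((15, 200))  # rank-15 matrix
U, s, Vt = randomized_svd(A, k=15)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```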
Non-asymptotic theory of random matrices: extreme singular values
 Proceedings of the International Congress of Mathematicians
, 2010
"... ..."