Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
Cited by 832 (16 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal f ∈ F decay like a power law (or the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements 〈f, Xk〉, k = 1, …, K, where the Xk are N-dimensional Gaussian …
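The recovery phenomenon the abstract describes can be illustrated numerically. The sketch below is not the paper's algorithm: it stands in for ℓ1 recovery with iterative soft-thresholding (ISTA) on an ℓ1-regularized least-squares proxy, followed by a least-squares debiasing step on the detected support. All names (`ista_recover`) and the dimensions N, K, s are illustrative choices, assuming NumPy is available.

```python
import numpy as np

def ista_recover(X, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for min 0.5*||X f - y||^2 + lam*||f||_1."""
    t = 1.0 / np.linalg.norm(X, 2) ** 2        # step size 1/L, L = ||X||_2^2
    f = np.zeros(X.shape[1])
    for _ in range(iters):
        g = f - t * X.T @ (X @ f - y)          # gradient step
        f = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # shrinkage
    return f

rng = np.random.default_rng(0)
N, K, s = 128, 60, 4                           # ambient dim, measurements, sparsity
f_true = np.zeros(N)
support = rng.choice(N, s, replace=False)
f_true[support] = rng.choice([-1, 1], s) * rng.uniform(1, 3, s)
X = rng.standard_normal((K, N)) / np.sqrt(K)   # random Gaussian measurements
y = X @ f_true                                 # K = 60 << N = 128 measurements

f_hat = ista_recover(X, y)
est_support = np.sort(np.argsort(np.abs(f_hat))[-s:])
# debias: exact least squares restricted to the detected support
f_db = np.zeros(N)
f_db[est_support], *_ = np.linalg.lstsq(X[:, est_support], y, rcond=None)
```

With K = 60 Gaussian measurements of a 4-sparse vector in dimension 128, the support is typically identified exactly and the debiased estimate matches f to machine precision, consistent with the paper's claim that far fewer than N measurements suffice.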
Decoding by Linear Programming
, 2004
Cited by 662 (15 self)
This paper considers the classical error-correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖ℓ1 := Σi |xi|) min_(g∈R^n) ‖y − Ag‖ℓ1, provided that the support of the vector of errors is not too large, ‖e‖ℓ0 := |{i : ei ≠ 0}| ≤ ρ · m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work [5]. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle, which we shall describe in detail.
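The paper recasts the decoder min_g ‖y − Ag‖ℓ1 as a linear program; as a lightweight stand-in for an LP solver, the sketch below approximates the same minimizer with iteratively reweighted least squares (a standard smoothed-ℓ1 technique, not the authors' implementation). The helper name `l1_decode` and all dimensions are illustrative; NumPy is assumed.

```python
import numpy as np

def l1_decode(A, y, iters=60):
    """Approximate argmin_g ||y - A g||_1 via iteratively reweighted
    least squares with a decreasing smoothing parameter eps."""
    g = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary ℓ2 warm start
    eps = 1.0
    for _ in range(iters):
        r = y - A @ g
        w = 1.0 / np.sqrt(r ** 2 + eps)        # smoothed-ℓ1 weights
        Aw = A * w[:, None]                    # row-weighted design
        g = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal equations
        eps = max(eps * 0.5, 1e-12)
    return g

rng = np.random.default_rng(1)
m, n, k = 100, 30, 10                          # code length, message dim, #errors
A = rng.standard_normal((m, n))                # random Gaussian coding matrix
f = rng.standard_normal(n)
e = np.zeros(m)
bad = rng.choice(m, k, replace=False)
e[bad] = rng.standard_normal(k) * 10           # gross corruptions on 10% of outputs
y = A @ f + e

f_hat = l1_decode(A, y)
```

With 10 of 100 outputs grossly corrupted, the ℓ1 decoder still recovers f essentially exactly, matching the abstract's claim that recovery survives a significant fraction of corrupted entries.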
Concentration of the Spectral Measure for Large Matrices
, 2000
Cited by 65 (11 self)
We derive concentration inequalities for functions of the empirical measure of eigenvalues of large, random, self-adjoint matrices with not necessarily Gaussian entries. The results presented apply in particular to non-Gaussian Wigner and Wishart matrices. We also provide concentration bounds for noncommutative functionals of random matrices.

1 Introduction and statement of results

Consider a random N × N Hermitian matrix X with i.i.d. complex entries (except for the symmetry constraint) satisfying a moment condition. It has been well known since Wigner [28] that the spectral measure of N^(−1/2)X converges to the semicircle law. This observation has been generalized to a large class of matrices, e.g. sample covariance matrices of the form XRX∗ where R is a deterministic diagonal matrix ([19]), band matrices (see [5, 16, 20]), etc. For the Wigner case, this convergence has been supplemented by central limit theorems; see [15] for the case of Gaussian entries and [17], [22] for the gen…
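Both phenomena the introduction cites, convergence of the spectral measure of N^(−1/2)X to the semicircle law and concentration of spectral functionals across draws, can be checked numerically. A minimal sketch assuming NumPy (the helper name `wigner_moments` and the size N = 400 are illustrative); the semicircle law has even moments given by the Catalan numbers, so m2 → 1 and m4 → 2.

```python
import numpy as np

def wigner_moments(N, rng):
    """Second and fourth moments of the empirical spectral measure
    of a normalized real Wigner matrix."""
    A = rng.standard_normal((N, N))
    X = (A + A.T) / np.sqrt(2)                 # symmetric, off-diag variance 1
    lam = np.linalg.eigvalsh(X / np.sqrt(N))   # spectrum of N^(-1/2) X
    return np.mean(lam ** 2), np.mean(lam ** 4)

rng = np.random.default_rng(2)
m2a, m4a = wigner_moments(400, rng)            # one draw
m2b, m4b = wigner_moments(400, rng)            # an independent draw
# semicircle moments: m2 = 1 (Catalan C_1), m4 = 2 (Catalan C_2);
# concentration: m2a and m2b should nearly coincide
```

That two independent draws give nearly identical moments at N = 400 is the concentration-of-the-spectral-measure effect the paper quantifies with explicit inequalities.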
Smallest singular value of random matrices and geometry of random polytopes
 Adv. Math
, 2005
Nonasymptotic theory of random matrices: extreme singular values
 PROCEEDINGS OF THE INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2010
Random matrices: The distribution of the smallest singular values
, 2009
Cited by 20 (1 self)
Let ξ be a real-valued random variable of mean zero and variance 1. Let Mn(ξ) denote the n × n random matrix whose entries are iid copies of ξ and σn(Mn(ξ)) denote the least singular value of Mn(ξ). The quantity σn(Mn(ξ))² is thus the least eigenvalue of the Wishart matrix MnMn∗. We show that (under a finite moment assumption) the probability distribution of nσn(Mn(ξ))² is universal in the sense that it does not depend on the distribution of ξ. In particular, it converges to the same limiting distribution as in the special case when ξ is real Gaussian. (The limiting distribution was computed explicitly in this case by Edelman.) We also prove a similar result for complex-valued random variables of mean zero, with real and imaginary parts having variance 1/2 and covariance zero. Similar results are also obtained for the joint distribution of the bottom k singular values of Mn(ξ) for any fixed k (or even for k growing as a small power of n) and for rectangular matrices. Our approach is motivated by the general idea of “property testing” from combinatorics and theoretical computer science. This seems to be a new approach to the study of spectra of random matrices and combines tools from various areas of mathematics.
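The universality statement is easy to probe empirically: sample nσn(Mn(ξ))² for two different entry distributions and compare summary statistics. A rough sketch assuming NumPy, with n = 50 and 200 trials chosen purely for illustration (far from the asymptotic regime, so only a coarse agreement is expected):

```python
import numpy as np

def scaled_least_sv(n, trials, sampler):
    """Samples of n * sigma_n(M)^2, the scaled least singular value squared."""
    out = np.empty(trials)
    for t in range(trials):
        M = sampler((n, n))
        out[t] = n * np.linalg.svd(M, compute_uv=False)[-1] ** 2
    return out

rng = np.random.default_rng(3)
n, trials = 50, 200
gauss = scaled_least_sv(n, trials, rng.standard_normal)          # ξ Gaussian
rademacher = scaled_least_sv(n, trials,
                             lambda s: rng.choice([-1.0, 1.0], s))  # ξ = ±1
med_g, med_r = np.median(gauss), np.median(rademacher)
# universality: the two medians should be close, near the median of
# Edelman's limiting distribution for the Gaussian case
```

The medians of the two ensembles land close together, as the theorem predicts: the limiting law of nσn(Mn(ξ))² does not see the distribution of ξ.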
Eigenvalue density of the Wishart matrix and large deviations
, 1998
Cited by 19 (1 self)
A large deviation theorem is obtained for a certain sequence of random measures which includes the empirical eigenvalue distribution of Wishart matrices, as the matrix size tends to infinity. The rate function is convex and one of its ingredients is the logarithmic energy. In the case of the singular Wishart matrix, the limit distribution has an atom and the rate function is infinite on absolutely continuous measures.
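The limit objects the abstract refers to can be seen in simulation: in the non-singular regime the empirical eigenvalue distribution of a Wishart matrix settles on the Marchenko–Pastur law, while in the singular regime (p > n) a deterministic fraction of eigenvalues sits exactly at zero, the atom mentioned above. A sketch assuming NumPy; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Non-singular case: p < n, aspect ratio gamma = p/n = 1/2
p, n = 200, 400
gamma = p / n
X = rng.standard_normal((p, n))
lam = np.linalg.eigvalsh(X @ X.T / n)          # Wishart eigenvalues
# Marchenko-Pastur support edges (1 ± sqrt(gamma))^2 ≈ [0.086, 2.914]
lo, hi = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2

# Singular case: p > n, so X X^T / n has rank at most n and
# exactly p - n zero eigenvalues (the atom at 0 in the limit law)
Y = rng.standard_normal((300, 150))
mu = np.linalg.eigvalsh(Y @ Y.T / 150)
```

The mean eigenvalue concentrates near 1, the extremes stay near the Marchenko–Pastur edges, and in the singular case 300 − 150 = 150 eigenvalues vanish (up to roundoff), which is the atom of the limit distribution.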
Asymptotic freeness almost everywhere for random matrices
 Acta Sci. Math. (Szeged
, 2000
Cited by 18 (0 self)
Voiculescu’s asymptotic freeness result for random matrices is improved to the sense of almost-everywhere convergence. Asymptotic freeness almost everywhere is first shown for standard unitary matrices, based on the computation of multiple moments of their entries, and then it is shown for rather general unitarily invariant self-adjoint random matrices (in particular, standard self-adjoint Gaussian matrices) by applying the first result to the unitary parts of their diagonalization. Bi-unitarily invariant non-self-adjoint random matrices are also treated via polar decomposition.
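A basic numerical consequence of asymptotic freeness for standard (Haar) unitaries is that, for centered deterministic A and B, the normalized trace τ(A U B U∗) is close to τ(A)τ(B) = 0 for a single large Haar unitary U. The sketch below assumes NumPy; the helper `haar_unitary` (QR of a complex Ginibre matrix with the standard phase correction) and the size N = 200 are illustrative, and this checks only one mixed moment, not freeness in full.

```python
import numpy as np

def haar_unitary(N, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    # multiply columns by the phases of diag(R) to get exact Haar measure
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(5)
N = 200
A = np.diag(rng.choice([-1.0, 1.0], N))        # deterministic diagonal matrices
B = np.diag(rng.choice([-1.0, 1.0], N))
A -= np.trace(A) / N * np.eye(N)               # center so tau(A) = 0
B -= np.trace(B) / N * np.eye(N)
U = haar_unitary(N, rng)
# freeness predicts tau(A U B U*) ≈ tau(A) tau(B) = 0 for large N
mixed = np.trace(A @ U @ B @ U.conj().T).real / N
```

The mixed moment comes out of order 1/N for a single draw, the almost-everywhere (rather than in-expectation) flavor of the convergence the paper establishes.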
Free product formulae for quantum permutation groups
 J. Math. Inst. Jussieu
Cited by 15 (14 self)
Associated to a finite graph X is its quantum automorphism group G(X). We prove a formula of the type G(X ∗ Y) = G(X) ∗w G(Y), where ∗w is a free wreath product. We then discuss the representation theory of free wreath products, with the conjectural formula µ(G ∗w H) = µ(G) ⊠ µ(H), where µ is the associated spectral measure. This is verified in two situations: one using free probability techniques, the other using planar algebras.