Compressed sensing: how sharp is the restricted isometry property?
2009
Cited by 51 (7 self)
Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004, there have been hundreds of manuscripts on compressed sensing, a large fraction of which have focused on the design and analysis of algorithms to recover a signal from its compressed measurements. The Restricted Isometry Property (RIP) has become a ubiquitous assumption in their analysis. We present the best known bounds on the RIP, and in the process illustrate the way in which the combinatorial nature of compressed sensing is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners.
High dimensional statistical inference and random matrices
In: Proceedings of the International Congress of Mathematicians, 2006
Cited by 49 (1 self)
Multivariate statistical analysis is concerned with observations on several variables which are thought to possess some degree of interdependence. Driven by problems in genetics and the social sciences, it first flowered in the earlier half of the last century. Subsequently, random matrix theory (RMT) developed, initially within physics, and more recently widely in mathematics. While some of the central objects of study in RMT are identical to those of multivariate statistics, statistical theory was slow to exploit the connection. However, with vast data collection ever more common, data sets now often have as many or more variables than individuals observed. In such contexts, the techniques and results of RMT have much to offer multivariate statistics. The paper reviews some of the progress to date.
On the Numerical Evaluation of Distributions in Random Matrix Theory: A Review
2010
Cited by 36 (4 self)
In this paper we review and compare the numerical evaluation of those probability distributions in random matrix theory that are analytically represented in terms of Painlevé transcendents or Fredholm determinants. Concrete examples for the Gaussian and Laguerre (Wishart) β-ensembles and their various scaling limits are discussed. We argue that the numerical approximation of Fredholm determinants is conceptually the simpler and more efficient of the two approaches, and is easily generalized to the computation of joint probabilities and correlations. Having the means for extensive numerical explorations at hand, we discovered new and surprising determinantal formulae for the k-th largest (or smallest) level in the edge scaling limits of the orthogonal and symplectic ensembles; formulae that in turn led to improved numerical evaluations. The paper comes with a toolbox of Matlab functions that facilitates further mathematical experiments by the reader.
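The quadrature-based evaluation of Fredholm determinants that the abstract advocates can be sketched in a few lines. The snippet below is a hedged Python illustration (not the paper's Matlab toolbox): it approximates the Tracy–Widom GUE distribution F₂(s) = det(I − K_Airy) on (s, ∞) by a Nyström-type Gauss–Legendre discretization; the function names, the truncation length L, and the node count n are choices of this sketch.

```python
import numpy as np
from scipy.special import airy  # airy(x) returns (Ai, Ai', Bi, Bi')

def airy_kernel(x, y):
    # Airy kernel K(x,y) = (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y),
    # with the diagonal limit K(x,x) = Ai'(x)^2 - x * Ai(x)^2.
    ax, axp, _, _ = airy(x)
    ay, ayp, _, _ = airy(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        k = (ax * ayp - axp * ay) / (x - y)
    diag = axp**2 - x * ax**2
    return np.where(np.isclose(x, y), diag, k)

def tracy_widom_F2(s, n=60, L=12.0):
    # Nystrom discretization of det(I - K) on (s, s+L]: the semi-infinite
    # interval is truncated at length L (the kernel decays super-fast),
    # and the symmetrized matrix sqrt(w_i) K(x_i, x_j) sqrt(w_j) stands
    # in for the integral operator.
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = s + (nodes + 1.0) * (L / 2.0)   # map [-1,1] -> [s, s+L]
    w = weights * (L / 2.0)
    sw = np.sqrt(w)
    K = airy_kernel(x[:, None], x[None, :])
    M = sw[:, None] * K * sw[None, :]
    return np.linalg.det(np.eye(n) - M)
```

As a sanity check, F₂ is a distribution function, so it should increase in s and tend to 1 for large s.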
Limited Feedback-based Block Diagonalization for the MIMO Broadcast Channel
Cited by 35 (1 self)
Block diagonalization is a linear precoding technique for the multiple-antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multiuser interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but requires very accurate channel knowledge at the transmitter. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve in proportion to the SNR in order to prevent interference limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. Finally, we compare our quantization strategy to an analog feedback scheme and show the superiority of quantized feedback.
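The random quantization argument the abstract invokes is easy to simulate. The sketch below is an illustrative toy, not the paper's scheme: it quantizes a unit-norm channel direction against a random codebook of 2^B complex codewords and records the chordal-distance error, which for M transmit antennas is known to decay roughly like 2^(−B/(M−1)); all names and parameter values are this sketch's own.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_codebook(bits, dim):
    # Random vector quantization (RVQ): 2^B i.i.d. isotropic unit vectors.
    c = rng.standard_normal((2**bits, dim)) + 1j * rng.standard_normal((2**bits, dim))
    return c / np.linalg.norm(c, axis=1, keepdims=True)

def quantize(h, codebook):
    # Feed back the index of the codeword best aligned with h; the
    # quantization error is sin^2 of the angle between h and that codeword.
    h = h / np.linalg.norm(h)
    corr = np.abs(codebook.conj() @ h)
    i = int(np.argmax(corr))
    return i, 1.0 - corr[i] ** 2

# Average error vs. feedback level for an M-antenna channel direction.
M = 4
avg_err = {}
for B in (4, 8, 12):
    errs = [quantize(rng.standard_normal(M) + 1j * rng.standard_normal(M),
                     random_codebook(B, M))[1] for _ in range(200)]
    avg_err[B] = float(np.mean(errs))
```

The shrinking error with growing B is the mechanism behind the paper's conclusion: since the interference caused by quantization error scales with the SNR, B must grow linearly in the SNR (in dB) to keep the rate loss bounded.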
IMPROVED BOUNDS ON RESTRICTED ISOMETRY CONSTANTS FOR GAUSSIAN MATRICES
Cited by 31 (4 self)
Abstract. The Restricted Isometry Constants (RIC) of a matrix A measure how close the action of A on vectors with few nonzero entries is to an isometry, in the ℓ2 norm. Specifically, the upper and lower RIC of a matrix A of size n × N are, respectively, the maximum deviation above unity of the largest, and the maximum deviation below unity of the smallest, squared singular value over all (N choose k) matrices formed by taking k columns from A. Calculation of the RIC is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC in some range of problem sizes (k, n, N). We provide the best known bound on the RIC for Gaussian matrices, which is also the smallest known bound on the RIC for any large rectangular matrix. Improvements over prior bounds are achieved by exploiting the similarity of singular values for matrices which share a substantial number of columns. Key words. Wishart matrices, compressed sensing, sparse approximation, restricted isometry constants, phase transitions, Gaussian matrices, singular values of random matrices.
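The combinatorial definition above can be evaluated directly for small problem sizes. The brute-force sketch below is illustrative only (the paper's probabilistic bounds exist precisely because this enumeration is intractable at scale): it computes the upper and lower RIC of a small Gaussian matrix by sweeping all k-column submatrices.

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constants(A, k):
    # Upper RIC: max over k-subsets of (largest squared singular value - 1).
    # Lower RIC: max over k-subsets of (1 - smallest squared singular value).
    n, N = A.shape
    upper, lower = 0.0, 0.0
    for cols in combinations(range(N), k):
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        upper = max(upper, s[0] ** 2 - 1.0)
        lower = max(lower, 1.0 - s[-1] ** 2)
    return upper, lower

# Gaussian matrix with variance 1/n entries, so k-column submatrices
# are near-isometries when k << n; sizes here keep (N choose k) tiny.
rng = np.random.default_rng(1)
n, N, k = 20, 8, 3
A = rng.standard_normal((n, N)) / np.sqrt(n)
U, L = restricted_isometry_constants(A, k)
```

A matrix with orthonormal columns has both constants equal to zero, which gives a convenient correctness check.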
Asymptotic behavior of random determinants in the Laguerre, Gram and Jacobi ensembles
arXiv math.PR/0607767, 2007
Cited by 8 (4 self)
Abstract. We consider properties of determinants of some random symmetric matrices issued from multivariate statistics: the Wishart/Laguerre ensemble (sample covariance matrices), the uniform Gram ensemble (sample correlation matrices) and the Jacobi ensemble (MANOVA). If n is the size of the sample, r ≤ n the number of variates and X_{n,r} such a matrix, a generalization of the Bartlett-type theorems gives a decomposition of det X_{n,r} into a product of r independent gamma or beta random variables. For n fixed, we study the evolution as r grows, and then take the limit of large r and n with r/n = t ≤ 1. We derive limit theorems for the sequence of processes with independent increments {n⁻¹ log det X_{n,⌊nt⌋}, t ∈ [0, T]} for T ≤ 1: convergence in probability, invariance principle, large deviations. Since the logarithm of the determinant is a linear statistic of the empirical spectral distribution, we connect the results for marginals (fixed t) with those obtained by the spectral method. Actually, all the results hold true for log-gases or β-models, if we define the determinant as the product of charges. The classical matrix models (real, complex, and quaternionic) correspond to the particular values β = 1, 2, 4 of the Dyson parameter.
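The Bartlett-type decomposition quoted above is easy to check numerically in the Wishart/Laguerre (β = 1) case. The sketch below is a Monte Carlo illustration with names of its own choosing: it compares the empirical mean of log det(XᵀX) for an n × r Gaussian X against the mean of a sum of independent log chi-squared variables with n, n−1, …, n−r+1 degrees of freedom, which the decomposition says have the same law.

```python
import numpy as np

rng = np.random.default_rng(2)

def wishart_logdet(n, r):
    # log det of X^T X for an n x r standard Gaussian X (Wishart/Laguerre).
    X = rng.standard_normal((n, r))
    _, logdet = np.linalg.slogdet(X.T @ X)
    return logdet

def bartlett_logdet(n, r):
    # Bartlett decomposition: det(X^T X) is equal in law to a product of
    # independent chi-squared variables with n, n-1, ..., n-r+1 dof.
    dofs = n - np.arange(r)
    return float(np.sum(np.log(rng.chisquare(dofs))))

n, r, trials = 30, 5, 4000
a = np.mean([wishart_logdet(n, r) for _ in range(trials)])
b = np.mean([bartlett_logdet(n, r) for _ in range(trials)])
```

With 4000 trials the two sample means agree to well within Monte Carlo error, consistent with the equality in distribution.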
Application of random matrix theory to multivariate statistics
Cited by 8 (3 self)
This is an expository account of the edge eigenvalue distributions in random matrix theory and their application in multivariate statistics. The emphasis is on the Painlevé representations of these distribution functions.