Results 1–10 of 221
On the distribution of the largest eigenvalue in principal components analysis
 Ann. Statist.
, 2001
Abstract

Cited by 262 (3 self)
Let x_(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x_(1) is the largest principal component variance of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribution on n degrees of freedom with identity covariance. Consider the limit of large p and n with n/p = γ ≥ 1. When centered by µ_p = (√(n−1) + √p)² and scaled by σ_p = (√(n−1) + √p)(1/√(n−1) + 1/√p)^(1/3), the distribution of x_(1) approaches the Tracy–Widom law of order 1, which is defined in terms of the Painlevé II differential equation and can be numerically evaluated and tabulated in software. Simulations show the approximation to be informative for n and p as small as 5. The limit is derived via a corresponding result for complex Wishart matrices using methods from random matrix theory. The result suggests that some aspects of large-p multivariate distribution theory may be easier to apply in practice than their fixed-p counterparts.
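The centering and scaling in this abstract are easy to check numerically. Below is a minimal Monte Carlo sketch, assuming NumPy; the function name, dimensions, and replication count are illustrative choices, not from the paper:

```python
import numpy as np

def scaled_largest_eigenvalue(n, p, rng):
    """Largest eigenvalue of X'X for an n-by-p standard Gaussian X,
    centered by mu_p and scaled by sigma_p as in the abstract."""
    X = rng.standard_normal((n, p))
    lam_max = np.linalg.eigvalsh(X.T @ X)[-1]
    mu_p = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma_p = (np.sqrt(n - 1) + np.sqrt(p)) * (
        1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    return (lam_max - mu_p) / sigma_p

rng = np.random.default_rng(0)
samples = [scaled_largest_eigenvalue(100, 50, rng) for _ in range(200)]
# The Tracy-Widom law of order 1 has mean roughly -1.21 and standard
# deviation roughly 1.27; the sample statistics should be in that vicinity.
print(np.mean(samples), np.std(samples))
```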
The Power of Convex Relaxation: Near-Optimal Matrix Completion
, 2009
Abstract

Cited by 235 (6 self)
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
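The nuclear-norm program described here can be approximated in a few lines. The sketch below uses singular-value thresholding, a standard proximal method for nuclear-norm problems, assuming NumPy; it illustrates the idea only, not the paper's algorithm or guarantees, and all names and parameters are illustrative:

```python
import numpy as np

def complete_by_svt(M_obs, mask, tau=0.1, step=1.0, iters=500):
    """Proximal gradient on 0.5*||mask*(X - M_obs)||_F^2 + tau*||X||_*.
    Each iteration takes a gradient step on the observed entries, then
    soft-thresholds the singular values (the nuclear-norm prox)."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        Y = X - step * mask * (X - M_obs)                 # gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - step * tau, 0.0)) @ Vt    # prox step
    return X

# Try to recover a random rank-1 matrix from 60% of its entries.
rng = np.random.default_rng(0)
n, r = 20, 1
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = (rng.random((n, n)) < 0.6).astype(float)
X = complete_by_svt(mask * M, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)
```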
The analogues of entropy and of Fisher’s information measure in free probability theory, I
 Comm. Math. Phys.
, 1993
Abstract

Cited by 188 (11 self)
Dedicated to Huzihiro Araki. Analogues of the entropy and Fisher information measure for random variables in the context of free probability theory are introduced. Monotonicity properties and an analogue of the Cramér–Rao inequality are proved.
A proof of Alon’s second eigenvalue conjecture
, 2003
Abstract

Cited by 123 (1 self)
A d-regular graph has largest or first (adjacency matrix) eigenvalue λ₁ = d. Consider, for an even d ≥ 4, a random d-regular graph model formed from d/2 uniform, independent permutations on {1, ..., n}. We shall show that for any ε > 0, all eigenvalues aside from λ₁ = d are bounded by 2√(d−1) + ε with probability 1 − O(n^(−τ)), where τ = ⌈(√(d−1) + 1)/2⌉ − 1. We also show that this probability is at most 1 − c/n^(τ′), for a constant c and a τ′ that is either τ or τ + 1 (“more often” τ than τ + 1). We prove related theorems for other models of random graphs, including models with d odd. These theorems resolve the conjecture of Alon, which says that for any ε > 0 and d, the second largest eigenvalue of “most” random d-regular graphs is at most 2√(d−1) + ε (Alon did not specify precisely what “most” should mean or what model of random graph one should take).
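The permutation model in this abstract is simple to simulate. A sketch assuming NumPy (n, d, and the seed are illustrative), which builds the adjacency matrix from d/2 permutations and compares the second eigenvalue with 2√(d−1):

```python
import numpy as np

def permutation_model_adjacency(n, d, rng):
    """Adjacency matrix of a random d-regular (multi)graph formed from
    d/2 uniform, independent permutations on {1, ..., n}; note this
    model can produce self-loops and repeated edges."""
    assert d % 2 == 0 and d >= 4
    A = np.zeros((n, n))
    for _ in range(d // 2):
        P = np.eye(n)[rng.permutation(n)]   # permutation matrix
        A += P + P.T                        # adds 2 to every row sum
    return A

rng = np.random.default_rng(1)
n, d = 200, 4
eigs = np.sort(np.linalg.eigvalsh(permutation_model_adjacency(n, d, rng)))
# Row sums are all d, so the top eigenvalue is exactly d; the remaining
# eigenvalues are typically below 2*sqrt(d-1) + epsilon.
print(eigs[-1], eigs[-2], 2 * np.sqrt(d - 1))
```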
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
Universality at the edge of the spectrum in Wigner random matrices
, 1999
Abstract

Cited by 115 (8 self)
We prove universality at the edge for rescaled correlation functions of Wigner random matrices in the limit n → +∞. As a corollary, we show that, after proper rescaling, the 1st, 2nd, 3rd, etc. eigenvalues of a Wigner random Hermitian (resp. real symmetric) matrix weakly converge to the distributions established by Tracy and Widom in the G.U.E. (resp. G.O.E.) case.
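For the real symmetric case, the edge rescaling can be illustrated directly, assuming NumPy (the matrix size and replication count are illustrative). With off-diagonal variance 1, the spectrum edge sits near 2√n and fluctuates on the scale n^(−1/6):

```python
import numpy as np

def rescaled_top_eigenvalue(n, rng):
    """Top eigenvalue of a GOE-style Wigner matrix, rescaled at the
    edge: n**(1/6) * (lambda_1 - 2*sqrt(n))."""
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2)   # symmetric; off-diagonal variance 1
    lam1 = np.linalg.eigvalsh(W)[-1]
    return n ** (1 / 6) * (lam1 - 2 * np.sqrt(n))

rng = np.random.default_rng(2)
samples = [rescaled_top_eigenvalue(100, rng) for _ in range(100)]
# These should resemble draws from the Tracy-Widom G.O.E. distribution.
print(np.mean(samples), np.std(samples))
```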
On the Second Eigenvalue and Random Walks in Random d-Regular Graphs
, 1993
Abstract

Cited by 66 (10 self)
The main goal of this paper is to estimate the magnitude of the second largest eigenvalue in absolute value, λ₂, of (the adjacency matrix of) a random d-regular graph, G. In order to do so, we study the probability that a random walk on a random graph returns to its originating vertex at the k-th step, for various values of k. Our main theorem about eigenvalues is that E{|λ₂(G)|^m} ≤ (2√(2d−1)(1 + (log d)/√(2d) + O(1/√d)) + O(d^(3/2) log log n / log n))^m for any m ≤ ⌈(log n)⌊√(2d−1)/2⌋ / log d⌉, where E{·} denotes the expected value over a certain probability space of 2d-regular graphs. It follows, for example, that for fixed d the second eigenvalue's magnitude is no more than 2√(2d−1) + 2 log d + C′ with probability 1 − n^(−C) for constants C and C′ for sufficiently large n.
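The trace/return-probability argument behind this bound can be seen in a small computation, assuming NumPy (sizes and seed are illustrative): tr(A^k) counts closed k-step walks, and since λ₁ = d for a d-regular graph, tr(A^k) − d^k equals the sum of the remaining eigenvalues' k-th powers, which for even k sandwiches the second-largest eigenvalue magnitude.

```python
import numpy as np

# Build a 4-regular multigraph from 2 uniform permutations, as in the
# paper's model.
rng = np.random.default_rng(3)
n, k = 100, 8
A = np.zeros((n, n))
for _ in range(2):
    P = np.eye(n)[rng.permutation(n)]
    A += P + P.T
d = int(A.sum(axis=1)[0])                       # = 4

# For even k: second^k <= tr(A^k) - d^k <= (n-1) * second^k, where
# second = max_{i >= 2} |lambda_i|, giving a two-sided estimate.
S = np.trace(np.linalg.matrix_power(A, k)) - d ** k
eigs = np.sort(np.linalg.eigvalsh(A))
second = max(abs(eigs[0]), abs(eigs[-2]))
lower, upper = (S / (n - 1)) ** (1 / k), S ** (1 / k)
print(lower, second, upper)
```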
On the concentration of eigenvalues of random symmetric matrices
 Israel J. Math.
, 2000
Abstract

Cited by 65 (8 self)
It is shown that for every 1 ≤ s ≤ n, the probability that the s-th largest eigenvalue of a random symmetric n-by-n matrix with independent random entries of absolute value at most 1 deviates from its median by more than t is at most 4e^(−t²/32s²). The main ingredient in the proof is Talagrand’s inequality for concentration of measure in product spaces.
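The tail bound can be compared against simulation, assuming NumPy; the entry distribution, sizes, and choice of t below are illustrative choices within the theorem's hypotheses:

```python
import numpy as np

def sth_largest_eigenvalue(n, s, rng):
    """s-th largest eigenvalue of a random symmetric n-by-n matrix
    with independent entries uniform in [-1, 1] (|entries| <= 1)."""
    M = rng.uniform(-1.0, 1.0, size=(n, n))
    A = np.triu(M) + np.triu(M, 1).T     # symmetrize the upper triangle
    return np.sort(np.linalg.eigvalsh(A))[::-1][s - 1]

rng = np.random.default_rng(4)
n, s, t = 60, 1, 8.0
vals = np.array([sth_largest_eigenvalue(n, s, rng) for _ in range(200)])
emp_tail = np.mean(np.abs(vals - np.median(vals)) > t)
bound = 4 * np.exp(-t**2 / (32 * s**2))
# The empirical tail probability should sit below the theorem's bound.
print(emp_tail, bound)
```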
Log-gases and random matrices
, 2010
Abstract

Cited by 61 (3 self)
method to calculate correlation functions for β = 1 random
On the Distribution of the Largest Principal Component
 Ann. Statist.
, 2000
Abstract

Cited by 57 (1 self)
Let x_(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x_(1) is the largest principal component of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribution on n degrees of freedom with identity covariance. Consider the limit of large p and n with n/p = γ ≥ 1. When centered by µ_p = (√(n−1) + √p)² and scaled by σ_p = (√(n−1) + √p)(1/√(n−1) + 1/√p)^(1/3), the distribution of x_(1) approaches the Tracy–Widom law of order 1, which is defined in terms of the Painlevé II differential equation, and can be numerically evaluated and tabulated in software. Simulations show the approximation to be informative for n and p as small as 5. The limit is derived via a corresponding result for complex Wishart matrices using methods from random matrix theory. The result suggests that some aspects of large-p multivariate distribution theory may be easier to ...