Results 1–10 of 153
On the distribution of the largest eigenvalue in principal components analysis
 Ann. Statist
, 2001
Abstract

Cited by 197 (2 self)
Let x_(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x_(1) is the largest principal component variance of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribution on n degrees of freedom with identity covariance. Consider the limit of large p and n with n/p = γ ≥ 1. When centered by μ_p = (√(n−1) + √p)² and scaled by σ_p = (√(n−1) + √p)(1/√(n−1) + 1/√p)^(1/3), the distribution of x_(1) approaches the Tracy–Widom law of order 1, which is defined in terms of the Painlevé II differential equation and can be numerically evaluated and tabulated in software. Simulations show the approximation to be informative for n and p as small as 5. The limit is derived via a corresponding result for complex Wishart matrices using methods from random matrix theory. The result suggests that some aspects of large-p multivariate distribution theory may be easier to apply in practice than their fixed-p counterparts.
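The centering and scaling in this abstract are easy to check by simulation. The sketch below (NumPy assumed; function names are ours) draws largest eigenvalues of Wishart matrices, normalizes them by μ_p and σ_p, and compares the sample mean and spread against the known Tracy–Widom(1) values of roughly −1.21 and 1.27.

```python
import numpy as np

def centered_largest_eig(n, p, rng):
    """Largest eigenvalue of X'X for an n x p standard Gaussian X,
    centered by mu_p and scaled by sigma_p as in the abstract."""
    X = rng.standard_normal((n, p))
    lam1 = np.linalg.eigvalsh(X.T @ X)[-1]        # largest eigenvalue of X'X
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (
        1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    return (lam1 - mu) / sigma

rng = np.random.default_rng(0)
samples = [centered_largest_eig(100, 100, rng) for _ in range(500)]
# Tracy-Widom(1) has mean ~ -1.21 and standard deviation ~ 1.27.
print(np.mean(samples), np.std(samples))
```

With n = p = 100 the agreement is already close, consistent with the abstract's claim that the approximation is informative even for very small n and p.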
The Power of Convex Relaxation: Near-Optimal Matrix Completion
, 2009
Abstract

Cited by 131 (5 self)
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
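To illustrate the completion task concretely, here is a minimal sketch. Note that it uses a simple non-convex iteration (alternating a rank-r truncation with re-imposing the observed entries) rather than the nuclear-norm program the paper actually analyzes, since that program needs a convex solver; all names and parameters here are illustrative.

```python
import numpy as np

def complete_low_rank(M_obs, mask, r, iters=500):
    """Fill in a partially observed matrix by alternating a best
    rank-r approximation with re-imposing the observed entries.
    A non-convex surrogate for nuclear-norm minimization."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]    # project onto rank-r matrices
        X = np.where(mask, M_obs, X)       # keep observed entries fixed
    return X

rng = np.random.default_rng(1)
n, r = 20, 1
M = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 target
mask = rng.random((n, n)) < 0.5                               # observe ~50%
X = complete_low_rank(M, mask, r)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)
```

For this easy rank-1 instance, with far more observations than the nr polylog(n) threshold, the iteration recovers the matrix to small relative error.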
A proof of Alon’s second eigenvalue conjecture
, 2003
Abstract

Cited by 92 (1 self)
A d-regular graph has largest or first (adjacency matrix) eigenvalue λ1 = d. Consider, for an even d ≥ 4, a random d-regular graph model formed from d/2 uniform, independent permutations on {1, ..., n}. We shall show that for any ε > 0, all eigenvalues aside from λ1 = d are bounded by 2√(d−1) + ε with probability 1 − O(n^(−τ)), where τ = ⌈(√(d−1) + 1)/2⌉ − 1. We also show that this probability is at most 1 − c/n^(τ′), for a constant c and a τ′ that is either τ or τ + 1 ("more often" τ than τ + 1). We prove related theorems for other models of random graphs, including models with d odd. These theorems resolve the conjecture of Alon, which says that for any ε > 0 and d, the second largest eigenvalue of "most" random d-regular graphs is at most 2√(d−1) + ε (Alon did not specify precisely what "most" should mean or what model of random graph one should take).
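The permutation model described in the abstract is easy to simulate: for even d, sum d/2 random permutation matrices together with their transposes. A small sketch (NumPy assumed; the helper name is ours):

```python
import numpy as np

def random_regular_adjacency(n, d, rng):
    """Permutation model: d/2 uniform permutations, each contributing
    edges i -> pi(i) and their reverses; d must be even.  Self-loops
    and multi-edges are allowed, as in the model."""
    assert d % 2 == 0
    A = np.zeros((n, n))
    for _ in range(d // 2):
        pi = rng.permutation(n)
        P = np.eye(n)[pi]      # permutation matrix: P[i, pi[i]] = 1
        A += P + P.T
    return A

rng = np.random.default_rng(2)
n, d = 200, 4
A = random_regular_adjacency(n, d, rng)
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
# lambda_1 = d always (all-ones eigenvector); the theorem says lambda_2
# is typically close to 2*sqrt(d-1) ~ 3.46 for d = 4.
print(eigs[0], eigs[1])
```

Every row sum equals d by construction, so λ1 = d deterministically; λ2 is the random quantity the theorem controls.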
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
On the second eigenvalue and random walks in random d-regular graphs
 Combinatorica
, 1991
Abstract

Cited by 62 (9 self)
The main goal of this paper is to estimate the magnitude of the second largest eigenvalue in absolute value, λ2, of (the adjacency matrix of) a random d-regular graph, G. In order to do so, we study the probability that a random walk on a random graph returns to its originating vertex at the kth step, for various values of k. Our main theorem about eigenvalues is that E{λ2(G)} ≤ ...
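The link between return probabilities and eigenvalues that the abstract alludes to can be seen in miniature: for the transition matrix P = A/d of a d-regular graph, the average k-step return probability is tr(P^k)/n = (1/n) Σᵢ (λᵢ/d)^k. A sanity check on the 4-cycle (a 2-regular graph), with our own helper name:

```python
import numpy as np

def avg_return_prob(A, d, k):
    """Average probability that a simple random walk on a d-regular
    graph with adjacency A is back at its start after k steps."""
    P = A / d
    return np.trace(np.linalg.matrix_power(P, k)) / len(A)

# 4-cycle 0-1-2-3-0: after 2 steps the walk returns with probability 1/2
# (it must reverse one of its two first moves).
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(avg_return_prob(C4, 2, 2))  # -> 0.5
```

Estimating such return probabilities for random graphs, and converting trace estimates back into eigenvalue bounds, is the paper's route to controlling λ2.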
On the concentration of eigenvalues of random symmetric matrices
 Israel J. Math
, 2000
Abstract

Cited by 62 (7 self)
It is shown that for every 1 ≤ s ≤ n, the probability that the sth largest eigenvalue of a random symmetric n-by-n matrix with independent random entries of absolute value at most 1 deviates from its median by more than t is at most 4e^(−t²/(32s²)). The main ingredient in the proof is Talagrand's inequality for concentration of measure in product spaces.
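The stated tail bound is easy to probe numerically. The sketch below (s = 1; matrix size, trial count, and t are our own choices) estimates the empirical tail of λ1's deviation from its median and compares it with 4·exp(−t²/(32s²)):

```python
import numpy as np

def rand_sym(n, rng):
    """Symmetric matrix with independent entries uniform in [-1, 1]."""
    B = rng.uniform(-1, 1, (n, n))
    return np.triu(B) + np.triu(B, 1).T

rng = np.random.default_rng(3)
n, trials, s, t = 50, 200, 1, 10.0
lam1 = np.array([np.linalg.eigvalsh(rand_sym(n, rng))[-1]
                 for _ in range(trials)])
med = np.median(lam1)
emp_tail = np.mean(np.abs(lam1 - med) > t)      # observed tail frequency
bound = 4 * np.exp(-t**2 / (32 * s**2))          # the paper's bound
print(emp_tail, bound)
```

In practice λ1 concentrates far more tightly than the bound requires, so the empirical tail here is zero while the bound is about 0.176.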
On the Distribution of the Largest Principal Component
 Ann. Statist
, 2000
Abstract

Cited by 48 (0 self)
Let x_(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x_(1) is the largest principal component of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribution on n degrees of freedom with identity covariance. Consider the limit of large p and n with n/p = γ ≥ 1. When centered by μ_p = (√(n−1) + √p)² and scaled by σ_p = (√(n−1) + √p)(1/√(n−1) + 1/√p)^(1/3), the distribution of x_(1) approaches the Tracy–Widom law of order 1, which is defined in terms of the Painlevé II differential equation, and can be numerically evaluated and tabulated in software. Simulations show the approximation to be informative for n and p as small as 5. The limit is derived via a corresponding result for complex Wishart matrices using methods from random matrix theory. The result suggests that some aspects of large-p multivariate distribution theory may be easier to ...
Eigenvalues in combinatorial optimization
, 1993
Abstract

Cited by 42 (0 self)
In the last decade many important applications of eigenvalues and eigenvectors of graphs in combinatorial optimization were discovered. The number and importance of these results is so fascinating that it makes sense to present this survey.
Log-gases and random matrices
, 2010
Abstract

Cited by 42 (2 self)
method to calculate correlation functions for β = 1 random