Results 1–10 of 101
A note on universality of the distribution of the largest eigenvalues in certain sample covariance matrices
 J. Statist. Phys
, 2002
"... Recently Johansson (21) and Johnstone (16) proved that the distribution of the (properly rescaled) largest principal component of the complex (real) Wishart matrix X g X(X t X) converges to the Tracy–Widom law as n, p (the dimensions of X) tend to. in some ratio n/p Q c>0.We extend these results in ..."
Cited by 60 (3 self)
Recently Johansson (21) and Johnstone (16) proved that the distribution of the (properly rescaled) largest principal component of the complex (real) Wishart matrix X*X (X^t X) converges to the Tracy–Widom law as n, p (the dimensions of X) tend to infinity in some ratio n/p → c > 0. We extend these results in two directions. First of all, we prove that the joint distribution of the first, second, third, etc. eigenvalues of a Wishart matrix converges (after a proper rescaling) to the Tracy–Widom distribution. Second of all, we explain how the combinatorial machinery developed for Wigner random matrices in refs. 27, 38, and 39 allows one to extend the results by Johansson and Johnstone to the case of X with non-Gaussian entries, provided n − p = O(p^(1/3)). We also prove that λ_max ≤ (n^(1/2) + p^(1/2))^2 + O(p^(1/2) log(p)) (a.e.) for general c > 0. KEY WORDS: Sample covariance matrices; principal component; Tracy–Widom distribution.
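The rescaling in this abstract can be illustrated numerically. Below is a minimal Monte Carlo sketch (assuming NumPy, and using Johnstone's centering and scaling constants for the real Wishart case, which are not spelled out in the snippet): sample X with i.i.d. standard normal entries, take the largest eigenvalue of X^t X, and rescale; the samples should cluster around the Tracy–Widom (beta = 1) mean of roughly −1.21.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 200, 200, 50

# Johnstone's centering and scaling constants for the real Wishart case
mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)

samples = []
for _ in range(trials):
    X = rng.standard_normal((n, p))
    lam_max = np.linalg.eigvalsh(X.T @ X)[-1]   # largest eigenvalue of X^t X
    samples.append((lam_max - mu) / sigma)

print(np.mean(samples))   # should hover near the Tracy-Widom beta=1 mean, about -1.21
```

The choice n = p here corresponds to c = 1; the theorem covers any limiting ratio c > 0.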
Universality of the Local Spacing Distribution in Certain Ensembles of Hermitian Wigner Matrices
 Commun. Math. Phys
, 2001
"... Abstract. Consider an N × N hermitian random matrix with independent entries, not necessarily Gaussian, a so called Wigner matrix. It has been conjectured that the local spacing distribution, i.e. the distribution of the distance between nearest neighbour eigenvalues in some part of the spectrum is, ..."
Cited by 50 (3 self)
Abstract. Consider an N × N Hermitian random matrix with independent entries, not necessarily Gaussian, a so-called Wigner matrix. It has been conjectured that the local spacing distribution, i.e. the distribution of the distance between nearest-neighbour eigenvalues in some part of the spectrum, is, in the limit as N → ∞, the same as that of Hermitian random matrices from the GUE. We prove this conjecture for a certain subclass of Hermitian Wigner matrices.
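The conjectured GUE spacing statistics can be checked in simulation. A rough sketch (assuming NumPy; the "unfolding" here is the crude one of dividing bulk spacings by their sample mean): the normalized spacings exhibit the level repulsion of the GUE, with almost no spacings near zero, whereas for independent (Poisson) points about 18% of normalized spacings would fall below 0.2.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 300, 20

spacings = []
for _ in range(trials):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2                         # GUE matrix
    ev = np.linalg.eigvalsh(H)
    bulk = ev[N // 2 - N // 10 : N // 2 + N // 10]   # central bulk: density nearly constant
    s = np.diff(bulk)
    spacings.extend(s / s.mean())                    # crude local unfolding

spacings = np.array(spacings)
print(spacings.mean(), (spacings < 0.2).mean())      # mean ~ 1; tiny fraction below 0.2
```

The near-absence of small spacings is the signature of the repulsion built into the GUE spacing density, in contrast to the exponential spacing law of independent points.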
Large n limit of Gaussian random matrices with external source, part III: double scaling limit in the critical case, in preparation
"... Abstract. We continue the study of the Hermitian random matrix ensemble with external source ..."
Cited by 47 (14 self)
Abstract. We continue the study of the Hermitian random matrix ensemble with external source
The Riemann–Hilbert approach to strong asymptotics for orthogonal polynomials on [−1, 1]
"... We consider polynomials that are orthogonal on [1, 1] with respect to a modified Jacobi weight (1  x) # (1 + x) # h(x), with #, # > 1 and h real analytic and stricly positive on [1, 1]. We obtain full asymptotic expansions for the monic and orthonormal polynomials outside the interval [ ..."
Cited by 44 (23 self)
We consider polynomials that are orthogonal on [−1, 1] with respect to a modified Jacobi weight (1 − x)^α (1 + x)^β h(x), with α, β > −1 and h real analytic and strictly positive on [−1, 1]. We obtain full asymptotic expansions for the monic and orthonormal polynomials outside the interval [−1, 1], for the recurrence coefficients, and for the leading coefficients of the orthonormal polynomials. We also deduce asymptotic behavior for the Hankel determinants. For the asymptotic analysis we use the steepest descent technique for Riemann–Hilbert problems developed by Deift and Zhou, and applied to orthogonal polynomials on the real line by Deift, Kriecherbauer, McLaughlin, Venakides, and Zhou. In the steepest descent method we use the Szegő function associated with the weight, and for the local analysis around the endpoints ±1 we use Bessel functions of appropriate order, whereas Deift et al. use Airy functions.
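The recurrence-coefficient asymptotics mentioned above can be observed numerically via the Stieltjes procedure (a sketch assuming NumPy; the exponents 0.5 and 1.5 and the analytic factor h(x) = e^x are arbitrary choices satisfying the hypotheses): for such a weight on [−1, 1], the monic recurrence coefficients approach 0 and 1/4.

```python
import numpy as np

# Modified Jacobi weight on [-1, 1]: (1-x)^a (1+x)^b h(x), a, b > -1, h analytic, > 0
a, b = 0.5, 1.5            # hypothetical exponents
h = np.exp                 # hypothetical real-analytic, strictly positive factor

nodes, wts = np.polynomial.legendre.leggauss(2000)
W = wts * (1 - nodes) ** a * (1 + nodes) ** b * h(nodes)

def ip(f, g):
    """Discretized inner product with respect to the weight."""
    return np.sum(W * f * g)

# Stieltjes procedure: pi_{k+1}(x) = (x - alpha_k) pi_k(x) - beta_k pi_{k-1}(x)
nmax = 20
alpha, beta = [], []
pkm1 = np.zeros_like(nodes)
pk = np.ones_like(nodes)
for k in range(nmax):
    ak = ip(nodes * pk, pk) / ip(pk, pk)
    bk = ip(pk, pk) / ip(pkm1, pkm1) if k > 0 else 0.0
    alpha.append(ak)
    beta.append(bk)
    pkm1, pk = pk, (nodes - ak) * pk - bk * pkm1

print(alpha[-1], beta[-1])   # approach 0 and 1/4, the universal limits on [-1, 1]
```

The limits 0 and 1/4 hold for any weight in this class; the paper's expansions describe the fine corrections to these limits.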
Double scaling limit in the random matrix model: the Riemann–Hilbert approach
"... Abstract. We derive the double scaling limit of eigenvalue correlations in the random matrix model at critical points and we relate it to a nonlinear hierarchy of ordinary differential equations. 1. ..."
Cited by 40 (7 self)
Abstract. We derive the double scaling limit of eigenvalue correlations in the random matrix model at critical points, and we relate it to a nonlinear hierarchy of ordinary differential equations.
Generic Behavior of the Density of States in Random Matrix Theory and Equilibrium Problems in the Presence of Real Analytic External Fields
, 2000
"... The equilibrium measure in the presence of an external field plays a role in a number of areas in analysis, for example in random matrix theory: the limiting mean density of eigenvalues is precisely the density of the equilibrium measure. Typical behavior for the equilibrium measure is: 1. it is pos ..."
Cited by 36 (14 self)
The equilibrium measure in the presence of an external field plays a role in a number of areas in analysis, for example in random matrix theory: the limiting mean density of eigenvalues is precisely the density of the equilibrium measure. Typical behavior for the equilibrium measure is: 1. it is positive on the interior of a finite number of intervals, 2. it vanishes like a square root at endpoints, and 3. outside the support, there is strict inequality in the Euler–Lagrange variational conditions. If these conditions hold, then the limiting local eigenvalue statistics is loosely described by a "bulk" in which there is universal behavior involving the sine kernel, and "edge effects" in which there is universal behavior involving the Airy kernel. Through techniques from potential theory and integrable systems, we show that this "regular" behavior is generic for equilibrium measures associated with real analytic external fields. In particular, we show that for any one-parameter family of external fields V_c the equilibrium measure exhibits this regular behavior, except for an at most countable number of values of c. We discuss applications of our results to random matrices, orthogonal polynomials, and integrable systems.
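The "regular" one-interval behavior can be illustrated for the simplest external field V(x) = x²: the equilibrium measure is then the semicircle density (1/π)√(2 − x²) on [−√2, √2], positive inside the interval and vanishing like a square root at ±√2. A quick sanity check (assuming NumPy; the matrix normalization below is chosen so that the ensemble weight is exp(−N Tr M²)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 400, 10

ev = []
for _ in range(trials):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    M = (A + A.conj().T) / (2 * np.sqrt(2 * N))   # matches the weight exp(-N Tr M^2)
    ev.extend(np.linalg.eigvalsh(M))

# Equilibrium-measure density for V(x) = x^2: (1/pi) sqrt(2 - x^2) on [-sqrt(2), sqrt(2)]
hist, edges = np.histogram(ev, bins=40, range=(-1.5, 1.5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
rho = np.sqrt(np.clip(2 - centers ** 2, 0, None)) / np.pi

print(np.max(np.abs(hist - rho)))   # empirical density tracks the equilibrium measure
```

The square-root vanishing at the endpoints is precisely condition 2 above; it is what produces the Airy-kernel edge behavior.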
Uniform Asymptotics for Polynomials Orthogonal With Respect to a General Class of Discrete Weights and Universality Results for Associated Ensembles: Announcement of Results
 Int. Math. Res. Not.
, 2003
"... ..."
Asymptotics of the partition function for random matrices via Riemann–Hilbert techniques, and applications to graphical enumeration
 Internat. Math. Research Notices
, 2003
"... Abstract. We study the partition function from random matrix theory using a well known connection to orthogonal polynomials, and a recently developed RiemannHilbert approach to the computation of detailed asymptotics for these orthogonal polynomials. We obtain the first proof of a complete large N ..."
Cited by 33 (6 self)
Abstract. We study the partition function from random matrix theory using a well-known connection to orthogonal polynomials, and a recently developed Riemann–Hilbert approach to the computation of detailed asymptotics for these orthogonal polynomials. We obtain the first proof of a complete large N expansion for the partition function, for a general class of probability measures on matrices, originally conjectured by Bessis, Itzykson, and Zuber. We prove that the coefficients in the asymptotic expansion are analytic functions of parameters in the original probability measure, and that they are generating functions for the enumeration of labelled maps according to genus and valence. Central to the analysis is a large N expansion for the mean density of eigenvalues, uniformly valid on the entire real axis.
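The map-enumeration statement admits a simple numerical check at genus zero (a sketch assuming NumPy; the moment order k = 3 is an arbitrary choice): for the Gaussian ensemble, (1/N) E Tr (H/√N)^{2k} tends to the Catalan number C_k, which counts the planar (genus-0) maps with a single 2k-valent vertex, with higher-genus corrections suppressed by powers of 1/N².

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)
N, trials, k = 300, 20, 3

vals = []
for _ in range(trials):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2                       # GUE matrix
    W = H / np.sqrt(N)
    vals.append(np.trace(np.linalg.matrix_power(W, 2 * k)).real / N)

catalan = comb(2 * k, k) // (k + 1)                # C_3 = 5
print(np.mean(vals), catalan)                      # moment approaches the Catalan number
```

The full asymptotic expansion proved in the paper organizes exactly such corrections: the coefficient of N^{-2g} generates the count of labelled maps of genus g.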