Results 1–10 of 109
Nonlinear dimensionality reduction by locally linear embedding
 SCIENCE
, 2000
"... Many areas of science ..."
Geometric bounds for eigenvalues of Markov chains
, 1991
Cited by 281 (13 self)
On the distribution of the largest eigenvalue in principal components analysis
 Ann. Statist
, 2001
"... Let x(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x(1) is the largest principal component variance of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribu ..."
Abstract

Cited by 197 (2 self)
Let x(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x(1) is the largest principal component variance of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribution on n degrees of freedom with identity covariance. Consider the limit of large p and n with n/p = γ ≥ 1. When centered by µ_p = (√(n−1) + √p)² and scaled by σ_p = (√(n−1) + √p)(1/√(n−1) + 1/√p)^(1/3), the distribution of x(1) approaches the Tracy–Widom law of order 1, which is defined in terms of the Painlevé II differential equation and can be numerically evaluated and tabulated in software. Simulations show the approximation to be informative for n and p as small as 5. The limit is derived via a corresponding result for complex Wishart matrices using methods from random matrix theory. The result suggests that some aspects of large-p multivariate distribution theory may be easier to apply in practice than their fixed-p counterparts.
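The centering and scaling constants in this abstract are explicit enough to try numerically. A minimal simulation sketch, assuming NumPy (the sample sizes and the loose bound in the final comment are illustrative choices, not quantiles of the Tracy–Widom law):

```python
import numpy as np

def tw1_center_scale(n, p):
    # Centering mu_p = (sqrt(n-1) + sqrt(p))^2 and scaling
    # sigma_p = (sqrt(n-1) + sqrt(p)) * (1/sqrt(n-1) + 1/sqrt(p))^(1/3),
    # as stated in the abstract above.
    a = np.sqrt(n - 1) + np.sqrt(p)
    mu = a ** 2
    sigma = a * (1.0 / np.sqrt(n - 1) + 1.0 / np.sqrt(p)) ** (1.0 / 3.0)
    return mu, sigma

def largest_eig_wishart(n, p, rng):
    # x_(1): largest eigenvalue of X'X for an n x p standard Gaussian X,
    # i.e. the square of the largest singular value of X.
    X = rng.standard_normal((n, p))
    return np.linalg.svd(X, compute_uv=False)[0] ** 2

rng = np.random.default_rng(0)
n, p = 200, 100
mu, sigma = tw1_center_scale(n, p)
z = (largest_eig_wishart(n, p, rng) - mu) / sigma
# z is the recentered and rescaled statistic; it should be an O(1)
# quantity approximately following the Tracy-Widom law of order 1.
```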
Logarithmic Sobolev inequalities for finite Markov chains
, 1996
"... This is an expository paper on the use of logarithmic Sobolev inequalities for bounding rates of convergence of Markov chains on finite state spaces to their stationary distributions. Logarithmic Sobolev inequalities complement eigenvalue techniques and work for nonreversible chains in continuous ti ..."
Abstract

Cited by 113 (11 self)
This is an expository paper on the use of logarithmic Sobolev inequalities for bounding rates of convergence of Markov chains on finite state spaces to their stationary distributions. Logarithmic Sobolev inequalities complement eigenvalue techniques and work for nonreversible chains in continuous time. Some aspects of the theory simplify considerably with finite state spaces and we are able to give a self-contained development. Examples of applications include the study of a Metropolis chain for the binomial distribution, sharp results for natural chains on the box of side n in d dimensions and improved rates for exclusion processes. We also show that for most r-regular graphs the log-Sobolev constant is of smaller order than the spectral gap. The log-Sobolev constant of the asymmetric two-point space is computed exactly as well as the log-Sobolev constant of the complete graph on n points.
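The eigenvalue techniques that log-Sobolev inequalities complement can be illustrated on the complete graph mentioned at the end of the abstract. A minimal sketch, assuming NumPy; the lazy walk and the closed-form value of its gap are standard textbook facts, not results taken from the paper:

```python
import numpy as np

def spectral_gap(P, pi):
    # 1 minus the second-largest eigenvalue of a reversible transition
    # matrix P with stationary distribution pi, computed by symmetrizing
    # P as D^{1/2} P D^{-1/2} so that eigvalsh applies.
    d = np.sqrt(pi)
    S = (d[:, None] * P) / d[None, :]
    return 1.0 - np.sort(np.linalg.eigvalsh(S))[-2]

# Lazy random walk on the complete graph K_n: stay put with probability
# 1/2, otherwise move to a uniformly chosen other vertex.
n = 10
P = np.full((n, n), 1.0 / (2 * (n - 1)))
np.fill_diagonal(P, 0.5)
pi = np.full(n, 1.0 / n)
gap = spectral_gap(P, pi)  # equals 1/2 + 1/(2*(n-1)) for this walk
```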
No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices
 ANNALS OF PROBABILITY 26
, 1998
"... We consider a class of matrices of the form Cn = (1/N)(Rn + σXn)(Rn + σXn)∗, where Xn is an n × N matrix consisting of independent standardized complex entries, Rn is an n × N nonrandom matrix, and σ > 0. Among several applications, Cn can be viewed as a sample correlation matrix, where information is co ..."
Abstract

Cited by 107 (18 self)
We consider a class of matrices of the form Cn = (1/N)(Rn + σXn)(Rn + σXn)∗, where Xn is an n × N matrix consisting of independent standardized complex entries, Rn is an n × N nonrandom matrix, and σ > 0. Among several applications, Cn can be viewed as a sample correlation matrix, where information is contained in (1/N)RnRn∗, but each column of Rn is contaminated by noise. As n → ∞, if n/N → c > 0, and the empirical distribution of the eigenvalues of (1/N)RnRn∗ converges to a proper probability distribution, then the empirical distribution of the eigenvalues of Cn converges a.s. to a nonrandom limit. In this paper we show that, under certain conditions on Rn, for any closed interval in R+ outside the support of the limiting distribution, almost surely, no eigenvalues of Cn will appear in this interval for all large n.
Tracy–Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices
 ANN. PROBAB
, 2007
"... We consider the asymptotic fluctuation behavior of the largest eigenvalue of certain sample covariance matrices in the asymptotic regime where both dimensions of the corresponding data matrix go to infinity. More precisely, let X be an n × p matrix, and let its rows be i.i.d. complex normal vectors ..."
Abstract

Cited by 45 (6 self)
We consider the asymptotic fluctuation behavior of the largest eigenvalue of certain sample covariance matrices in the asymptotic regime where both dimensions of the corresponding data matrix go to infinity. More precisely, let X be an n × p matrix, and let its rows be i.i.d. complex normal vectors with mean 0 and covariance Σp. We show that for a large class of covariance matrices Σp, the largest eigenvalue of X∗X is asymptotically distributed (after recentering and rescaling) as the Tracy–Widom distribution that appears in the study of the Gaussian unitary ensemble. We give explicit formulas for the centering and scaling sequences that are easy to implement and involve only the spectral distribution of the population covariance, n and p. The main theorem applies to a number of covariance models found in applications. For example, well-behaved Toeplitz matrices as well as covariance matrices whose spectral distribution is a sum of atoms (under some conditions on the mass of the atoms) are among the models the theorem can handle. Generalizations of the theorem to certain spiked versions of our models and a.s. results about the largest eigenvalue are given. We also discuss a simple corollary that does not require normality of the entries of the data matrix and some consequences for applications in multivariate statistics.
On the empirical distribution of eigenvalues of large dimensional information-plus-noise type matrices
 J. Multivariate Anal
, 2007
"... Let Xn be n × N containing i.i.d. complex entries and unit variance (sum of variances of real and imaginary parts equals 1), σ > 0 constant, and Rn an n × N random matrix independent of Xn. Assume, almost surely, as n → ∞, the empirical distribution function (e.d.f.) of the eigenvalues of (1/N)RnRn∗ con ..."
Abstract

Cited by 43 (5 self)
Let Xn be n × N containing i.i.d. complex entries and unit variance (sum of variances of real and imaginary parts equals 1), σ > 0 constant, and Rn an n × N random matrix independent of Xn. Assume, almost surely, as n → ∞, the empirical distribution function (e.d.f.) of the eigenvalues of (1/N)RnRn∗ converges in distribution to a nonrandom probability distribution function (p.d.f.), and the ratio n/N tends to a positive number. Then it is shown that, almost surely, the e.d.f. of the eigenvalues of (1/N)(Rn + σXn)(Rn + σXn)∗ converges in distribution. The limit is nonrandom and is characterized in terms of its Stieltjes transform, which satisfies a certain equation.
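The information-plus-noise model in this abstract is straightforward to simulate. A minimal sketch, assuming NumPy, with the real and imaginary parts of Xn each of variance 1/2 so the complex entries have unit total variance as required (the pure-noise choice Rn = 0 below is only an illustrative special case):

```python
import numpy as np

def info_plus_noise_eigs(Rn, sigma, rng):
    # Eigenvalues of Cn = (1/N)(Rn + sigma*Xn)(Rn + sigma*Xn)*, where
    # Xn has i.i.d. complex entries of unit variance (real and imaginary
    # parts each with variance 1/2), independent of the n x N matrix Rn.
    n, N = Rn.shape
    Xn = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
    Y = Rn + sigma * Xn
    Cn = (Y @ Y.conj().T) / N
    return np.sort(np.linalg.eigvalsh(Cn))  # real, ascending

rng = np.random.default_rng(0)
n, N = 50, 200
Rn = np.zeros((n, N))  # pure-noise case: Cn reduces to sigma^2 (1/N) Xn Xn*
eigs = info_plus_noise_eigs(Rn, 1.0, rng)
# For sigma = 1 and Rn = 0, E[Cn] is the identity, so the eigenvalues
# average to approximately 1.
```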
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
"... We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ R^(k×p) that is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-e ..."
Abstract

Cited by 38 (11 self)
We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ R^(k×p) that is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide nonasymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
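Nuclear-norm regularization of the kind described above is commonly implemented via singular-value soft-thresholding, the proximal operator of the nuclear norm. A minimal sketch, assuming NumPy; this is the generic operator used inside proximal-gradient solvers, not the paper's specific estimator or its error analysis:

```python
import numpy as np

def nuclear_norm_prox(Theta, lam):
    # Proximal operator of lam * ||.||_* : shrink every singular value of
    # Theta toward zero by lam, dropping those below lam. Large lam thus
    # produces a low-rank (eventually zero) matrix.
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return (U * s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding suppresses the noise
# directions and keeps only the dominant singular direction(s).
rng = np.random.default_rng(0)
u, v = rng.standard_normal(6), rng.standard_normal(4)
Theta = np.outer(u, v) + 0.01 * rng.standard_normal((6, 4))
low_rank = nuclear_norm_prox(Theta, 0.5)
```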
Underwater video mosaics as visual navigation maps
"... This paper presents a set of algorithms for the creation of underwater mosaics and illustrates their use as visual maps for underwater vehicle navigation. First, we describe the automatic creation of video mosaics, which deals with the problem of image motion estimation in a robust and automatic way ..."
Abstract

Cited by 34 (11 self)
This paper presents a set of algorithms for the creation of underwater mosaics and illustrates their use as visual maps for underwater vehicle navigation. First, we describe the automatic creation of video mosaics, which deals with the problem of image motion estimation in a robust and automatic way. The motion estimation is based on an initial matching of corresponding areas over pairs of images, followed by the use of a robust matching technique, which can cope with a high percentage of incorrect matches. Several motion models, established under the projective geometry framework, allow for the creation of high-quality mosaics where no assumptions are made about the camera motion. Several tests were run on underwater image sequences, testifying to the good performance of the implemented matching and registration methods. Next, we deal with the issue of determining the 3D position and orientation of a vehicle from new views of a previously created mosaic. The problem of pose estimation is tackled using the available information on the camera intrinsic parameters. This information ranges from full knowledge to the case where they are estimated using a self-calibration technique based on the analysis of an image sequence captured under pure rotation. The performance of the 3D positioning algorithms is evaluated using images for which accurate ground truth is available. © 2000 Academic Press
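The projective motion models used for mosaic registration are 3×3 homographies, and a standard way to estimate one from point correspondences is the Direct Linear Transform. A minimal sketch, assuming NumPy; this is the textbook DLT on clean correspondences, not the paper's robust matching pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    # Estimate the 3x3 homography H with dst ~ H @ src in homogeneous
    # coordinates, from >= 4 point correspondences, by taking the SVD
    # null vector of the stacked DLT constraint rows.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

# Four points related by a pure translation (2, 3): the recovered H
# should be the corresponding translation homography.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(x + 2.0, y + 3.0) for x, y in src]
H = homography_dlt(src, dst)
```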