Results 1–10 of 50
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
, 2007
"... The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative ..."
Abstract

Cited by 218 (15 self)
 Add to MetaCart
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
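The nuclear norm (the sum of the singular values) plays the role for rank that the l1 norm plays for vector cardinality. A minimal numpy sketch, not the authors' code: it computes the nuclear norm and the singular-value soft-thresholding operator, the proximal map of the nuclear norm that many of the algorithmic approaches mentioned above build on.

```python
import numpy as np

def nuclear_norm(M):
    # Sum of singular values: the convex surrogate for rank(M).
    return np.linalg.svd(M, compute_uv=False).sum()

def svt(M, tau):
    # Proximal operator of tau * ||.||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 matrix: its nuclear norm is its single nonzero singular value.
u = np.array([[3.0], [4.0]])           # ||u|| = 5
v = np.array([[1.0, 0.0]])
M = u @ v                              # singular values (5, 0)

# Soft-thresholding a slightly perturbed matrix kills the small singular
# value and restores a rank-1 matrix.
low_rank = svt(M + 0.1 * np.eye(2), tau=0.2)
```

The shrinkage step is why nuclear-norm relaxations favor low-rank solutions: small singular values are driven exactly to zero, just as l1 shrinkage zeroes small vector entries.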
No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices
 Annals of Probability 26
, 1998
"... We consider a class of matrices of the form Cn = (1/N)(Rn+σXn)(Rn+σXn) ∗, where Xn is an n × N matrix consisting of independent standardized complex entries, Rj is an n×N nonrandom matrix, and σ> 0. Among several applications, Cn can be viewed as a sample correlation matrix, where information is co ..."
Abstract

Cited by 107 (18 self)
 Add to MetaCart
We consider a class of matrices of the form Cn = (1/N)(Rn + σXn)(Rn + σXn)*, where Xn is an n × N matrix consisting of independent standardized complex entries, Rn is an n × N nonrandom matrix, and σ > 0. Among several applications, Cn can be viewed as a sample correlation matrix, where the information is contained in (1/N)RnRn*, but each column of Rn is contaminated by noise. As n → ∞, if n/N → c > 0 and the empirical distribution of the eigenvalues of (1/N)RnRn* converges to a proper probability distribution, then the empirical distribution of the eigenvalues of Cn converges a.s. to a nonrandom limit. In this paper we show that, under certain conditions on Rn, for any closed interval in R+ outside the support of the limiting distribution, almost surely no eigenvalues of Cn appear in that interval for all n large.
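In the noise-only special case Rn = 0, σ = 1, the limiting distribution is the Marchenko–Pastur law supported on [(1 − √c)², (1 + √c)²], and the theorem says no sample eigenvalues stray outside that interval for large n. A seeded simulation, my sketch rather than anything from the paper (real Gaussian entries for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 400, 800                        # aspect ratio c = n/N = 0.5
c = n / N
X = rng.standard_normal((n, N))        # R_n = 0, sigma = 1 case
C = (X @ X.T) / N                      # sample covariance matrix
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur support edges for ratio c
edge_lo = (1 - np.sqrt(c)) ** 2
edge_hi = (1 + np.sqrt(c)) ** 2
```

At this size the extreme eigenvalues already hug the predicted edges, with fluctuations on the O(n^(-2/3)) scale.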
Eigenvalues of large sample covariance matrices of spiked population models
, 2006
"... We consider a spiked population model, proposed by Johnstone, whose population eigenvalues are all unit except for a few fixed eigenvalues. The question is to determine how the sample eigenvalues depend on the nonunit population ones when both sample size and population size become large. This pape ..."
Abstract

Cited by 81 (5 self)
 Add to MetaCart
We consider a spiked population model, proposed by Johnstone, whose population eigenvalues are all unit except for a few fixed eigenvalues. The question is to determine how the sample eigenvalues depend on the non-unit population ones when both sample size and population size become large. This paper completely determines the almost sure limits for a general class of samples.
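One of the almost sure limits determined here is easy to probe numerically: a population spike l above the critical threshold 1 + √c pulls a sample eigenvalue out of the bulk, to l(1 + c/(l − 1)). A seeded numpy check (my illustration, not the paper's code; Gaussian entries assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 400, 800
c, ell = n / N, 5.0                    # one spike well above 1 + sqrt(c)

# Population covariance = identity except one eigenvalue equal to ell.
sigma_sqrt = np.ones(n)
sigma_sqrt[0] = np.sqrt(ell)
Y = sigma_sqrt[:, None] * rng.standard_normal((n, N))

top = np.linalg.eigvalsh(Y @ Y.T / N)[-1]   # largest sample eigenvalue
predicted = ell * (1 + c / (ell - 1))       # a.s. limit: 5 * (1 + 0.5/4) = 5.625
bulk_edge = (1 + np.sqrt(c)) ** 2           # where the non-spiked eigenvalues end
```

The top sample eigenvalue sits near 5.625, well separated from the bulk edge at about 2.91, visibly biased away from the population value 5.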
A note on universality of the distribution of the largest eigenvalues in certain sample covariance matrices
 J. Statist. Phys
, 2002
"... Recently Johansson (21) and Johnstone (16) proved that the distribution of the (properly rescaled) largest principal component of the complex (real) Wishart matrix X g X(X t X) converges to the Tracy–Widom law as n, p (the dimensions of X) tend to. in some ratio n/p Q c>0.We extend these results in ..."
Abstract

Cited by 60 (3 self)
 Add to MetaCart
Recently Johansson (21) and Johnstone (16) proved that the distribution of the (properly rescaled) largest principal component of the complex (real) Wishart matrix X*X (XtX) converges to the Tracy–Widom law as n, p (the dimensions of X) tend to ∞ in some ratio n/p → c > 0. We extend these results in two directions. First of all, we prove that the joint distribution of the first, second, third, etc. eigenvalues of a Wishart matrix converges (after a proper rescaling) to the Tracy–Widom distribution. Second of all, we explain how the combinatorial machinery developed for Wigner random matrices in refs. 27, 38, and 39 allows one to extend the results by Johansson and Johnstone to the case of X with non-Gaussian entries, provided n − p = O(p^{1/3}). We also prove that λmax ≤ (n^{1/2} + p^{1/2})^2 + O(p^{1/2} log(p)) (a.e.) for general c > 0. KEY WORDS: Sample covariance matrices; principal component; Tracy–Widom distribution.
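The bound λmax ≤ (n^(1/2) + p^(1/2))² + O(p^(1/2) log p) can be sanity-checked directly; a seeded sketch under Gaussian entries (my choice, one of the cases the paper covers):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 600
X = rng.standard_normal((p, n))

# Largest eigenvalue of the Wishart matrix X^t X.
lam_max = np.linalg.eigvalsh(X.T @ X)[-1]

# Leading term of the bound; Tracy-Widom fluctuations sit slightly below it
# on the scale (sqrt(n) + sqrt(p)) * (1/sqrt(n) + 1/sqrt(p))^(1/3).
bound = (np.sqrt(n) + np.sqrt(p)) ** 2
slack = np.sqrt(p) * np.log(p)
```

At n = 300, p = 600 the observed λmax lands within a few percent of (√n + √p)², comfortably inside the stated bound.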
CLT for Linear Spectral Statistics of Large Dimensional Sample Covariance Matrices
, 2003
"... This paper shows their of rate of convergence to be 1/n by proving, after proper scaling, they form a tight sequence. Moreover, if EX 11 =0andEX11 =2, or if X11 and T n are real and EX 11 = 3, they are shown to have Gaussian limits ..."
Abstract

Cited by 37 (0 self)
 Add to MetaCart
This paper shows their rate of convergence to be 1/n by proving that, after proper scaling, they form a tight sequence. Moreover, if EX11^2 = 0 and E|X11|^4 = 2, or if X11 and Tn are real and EX11^4 = 3, they are shown to have Gaussian limits.
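The content of tightness here is that linear spectral statistics such as tr(S²) fluctuate on a constant scale, with no extra normalization as the dimension grows. A seeded sanity check of that O(1) behavior (my sketch; the statistic and ensemble are illustrative choices, not the paper's general setting):

```python
import numpy as np

def lss_std(n, reps, rng):
    # Std of the linear spectral statistic tr(S^2), S = (1/N) X X^T, N = 2n.
    vals = []
    for _ in range(reps):
        X = rng.standard_normal((n, 2 * n))
        S = X @ X.T / (2 * n)
        vals.append(np.sum(S * S))     # tr(S^2) = squared Frobenius norm
    return np.std(vals)

rng = np.random.default_rng(3)
s_small = lss_std(100, 60, rng)
s_large = lss_std(200, 60, rng)        # doubling n should not inflate the std
```

If the fluctuations required a 1/√n-type rescaling, doubling n would change the std systematically; instead it stays on the same constant scale.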
Hopfield models as generalized random mean field models. Mathematical aspects of spin glasses and neural networks
 3–89, Progr. Probab., 41 Birkhäuser
, 1998
"... Abstract: We give a comprehensive selfcontained review on the rigorous analysis of the thermodynamics of a class of random spin systems of mean field type whose most prominent example is the Hopfield model. We focus on the low temperature phase and the analysis of the Gibbs measures with large devi ..."
Abstract

Cited by 30 (9 self)
 Add to MetaCart
We give a comprehensive self-contained review of the rigorous analysis of the thermodynamics of a class of random spin systems of mean-field type whose most prominent example is the Hopfield model. We focus on the low-temperature phase and the analysis of the Gibbs measures with large-deviation techniques. There is a very detailed and complete picture in the regime of “small α”; a particularly satisfactory result concerns a nontrivial regime of parameters in which we prove 1) the convergence of the local “mean fields” to Gaussian random variables with constant variance and random mean, the random means being themselves Gaussians that are independent from site to site; 2) “propagation of chaos”, i.e. factorization of the extremal infinite-volume Gibbs measures; and 3) the correctness of the “replica symmetric solution” of Amit, Gutfreund and Sompolinsky [AGS]. This last result was first proven by M. Talagrand [T4], using different techniques.
How Many Entries of a Typical Orthogonal Matrix Can Be Approximated by Independent Normals?
 Ann. Probab. 34(4): 1497–1529
, 2006
"... We solve an open problem of Diaconis that asks what are the largest orders of pn and qn such that Zn, the pn ×qn upper left block of a random matrix Γn which is uniformly distributed on the orthogonal group O(n), can be approximated by independent standard normals? This problem is solved by two diff ..."
Abstract

Cited by 25 (8 self)
 Add to MetaCart
We solve an open problem of Diaconis that asks what are the largest orders of pn and qn such that Zn, the pn × qn upper-left block of a random matrix Γn which is uniformly distributed on the orthogonal group O(n), can be approximated by independent standard normals. This problem is solved by two different approximation methods. First, we show that the variation distance between the joint distribution of the entries of Zn and that of pn·qn independent standard normals goes to zero provided pn = o(√n) and qn = o(√n). We also show that the above variation distance does not go to zero if pn = [x√n] and qn = [y√n] for any positive numbers x and y. This says that the largest orders of pn and qn are o(n^{1/2}) in the sense of the above approximation. Second, suppose Γn = (γij)n×n is generated by performing the Gram–Schmidt algorithm on the columns of Yn = (yij)n×n, where {yij; 1 ≤ i, j ≤ n} are i.i.d. standard normals. We show that εn(m) := max{1≤i≤n, 1≤j≤m} |√n γij − yij| goes to zero in probability as long as m = mn = o(n/log n). We also prove that εn(mn) → 2√α in probability when mn = [nα/log n] for any α > 0. This says that mn = o(n/log n) is the largest order such that the entries of the first mn columns of Γn can be approximated simultaneously by independent standard normals.
Rayleigh fading multi-antenna channels
 EURASIP Journal on Applied Signal Processing
, 2002
"... Information theoretic properties of flat fading channels with multiple antennas are investigated. Perfect channel knowledge at the receiver is assumed. Expressions for maximum information rates and outage probabilities are derived. The advantages of transmitter channel knowledge are determined and a ..."
Abstract

Cited by 23 (3 self)
 Add to MetaCart
Information theoretic properties of flat fading channels with multiple antennas are investigated. Perfect channel knowledge at the receiver is assumed. Expressions for maximum information rates and outage probabilities are derived. The advantages of transmitter channel knowledge are determined, and a critical threshold is found beyond which such channel knowledge gains very little. Asymptotic expressions for the error exponent are found. For the case of transmit diversity, closed-form expressions for the error exponent and cutoff rate are given. The use of orthogonal modulating signals is shown to be asymptotically optimal in terms of information rate.
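With receiver-only channel knowledge the standard Rayleigh-fading rate expression is E[log2 det(I + (ρ/t) HH*)], with H having i.i.d. complex Gaussian entries; the symbols ρ (SNR), t and r (transmit/receive antennas) follow the usual convention and need not match this paper's notation. A Monte Carlo sketch of how the rate scales with the antenna count:

```python
import numpy as np

def ergodic_rate(t, r, snr, reps, rng):
    # Monte Carlo estimate of E[log2 det(I_r + (snr/t) H H^*)],
    # H with i.i.d. CN(0, 1) entries (Rayleigh fading, CSI at receiver).
    total = 0.0
    for _ in range(reps):
        H = (rng.standard_normal((r, t))
             + 1j * rng.standard_normal((r, t))) / np.sqrt(2)
        M = np.eye(r) + (snr / t) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(M).real)   # M is Hermitian PD
    return total / reps

rng = np.random.default_rng(5)
c11 = ergodic_rate(1, 1, 10.0, 2000, rng)   # single antenna, 10 dB SNR
c44 = ergodic_rate(4, 4, 10.0, 2000, rng)   # 4x4 array, same total power
```

The 4×4 rate is several times the single-antenna rate, reflecting the min(t, r) spatial multiplexing gain.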
Circular law, extreme singular values and potential theory, arXiv:0705.3773v2 [math.PR]
, 2007
"... Abstract. Consider the empirical spectral distribution of complex random n×n matrix whose entries are independent and identically distributed random variables with mean zero and variance 1/n. In this paper, via applying potential theory in the complex plane and analyzing extreme singular values, we ..."
Abstract

Cited by 18 (0 self)
 Add to MetaCart
Consider the empirical spectral distribution of a complex random n×n matrix whose entries are independent and identically distributed random variables with mean zero and variance 1/n. In this paper, by applying potential theory in the complex plane and analyzing extreme singular values, we prove that this distribution converges, with probability one, to the uniform distribution over the unit disk in the complex plane, i.e. the well-known circular law, under the finite fourth moment assumption on the matrix entries.
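A seeded simulation of the statement (Gaussian entries chosen for convenience; the theorem only needs i.i.d. mean-zero, variance-1/n entries with finite fourth moment):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
A = rng.standard_normal((n, n)) / np.sqrt(n)   # i.i.d. entries, variance 1/n
lam = np.linalg.eigvals(A)

radius = np.abs(lam).max()             # spectral radius, should approach 1
inner = np.mean(np.abs(lam) <= 0.5)    # uniform disk law => area fraction 1/4
```

The eigenvalues fill the unit disk roughly uniformly: the spectral radius hovers just above 1, and about a quarter of them fall inside radius 1/2.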