Results 1–10 of 17
Multiuser Receivers for Randomly Spread Signals: Fundamental Limits with and without Decision-Feedback
 IEEE Trans. Inform. Theory, 2000
Abstract

Cited by 27 (8 self)
Synchronous code-division multiple-access communication systems with randomly chosen spreading sequences and capacity-achieving forward error correction coding are analyzed in terms of spectral efficiency. Emphasis is on the penalties paid by applying single-user coding in conjunction with suboptimal multiuser receivers, as opposed to optimal joint decoding, whose complexity is exponential in the number of users times the codeword length. The conventional, the decorrelating, and the (re-encoded) decorrelating decision-feedback detectors are analyzed in the non-asymptotic case for spherical random sequences. The re-encoded minimum mean squared error (MMSE) decision-feedback receiver, which achieves the same performance as joint multiuser decoding for equal-power users, is shown to be suboptimal in the case of equal rates.
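The linear MMSE multiuser receiver discussed in this abstract can be illustrated numerically. The following is a minimal sketch (not from the paper) of its output SINR under random binary spreading; the dimensions N, K and the SNR value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 32                 # chips per symbol, number of users (illustrative)
snr = 10.0                    # per-user SNR, linear scale (illustrative)
sigma2 = 1.0 / snr
# Random binary spreading sequences with unit-norm columns.
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
s0, S_int = S[:, 0], S[:, 1:]   # desired user vs interferers
# Linear MMSE output SINR for user 0 (unit-power users):
#   SINR_0 = s0' (S_int S_int' + sigma2 I)^{-1} s0
sinr = s0 @ np.linalg.solve(S_int @ S_int.T + sigma2 * np.eye(N), s0)
print(sinr)
```

The SINR is strictly below the interference-free value snr, which quantifies the multiple-access penalty the paper analyzes.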
DETERMINISTIC EQUIVALENTS FOR CERTAIN FUNCTIONALS OF LARGE RANDOM MATRICES
2007
Abstract

Cited by 25 (13 self)
Consider an N × n random matrix Yn = (Y^n_{ij}) where the entries are given by Y^n_{ij} = (σ_{ij}(n)/√n) X^n_{ij}, the X^n_{ij} being independent and identically distributed, centered with unit variance and satisfying some mild moment assumption. Consider now a deterministic N × n matrix An whose columns and rows are uniformly bounded in the Euclidean norm. Let Σn = Yn + An. We prove in this article that there exists a deterministic N × N matrix-valued function Tn(z), analytic in C − R⁺, such that, almost surely, ...
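A minimal numerical sketch of the model just described, with an illustrative variance profile σ_ij(n) and a simple bounded A_n (both are assumptions for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 400
# Illustrative bounded variance profile sigma_ij(n); any bounded array works here.
i, j = np.ogrid[:N, :n]
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * i / N) * np.cos(2 * np.pi * j / n)
X = rng.standard_normal((N, n))          # iid, centered, unit variance
Y = sigma * X / np.sqrt(n)               # Y_ij = (sigma_ij(n) / sqrt(n)) X_ij
A = np.zeros((N, n))
np.fill_diagonal(A, 1.0)                 # deterministic A_n with bounded rows and columns
Sigma_n = Y + A
eigs = np.linalg.eigvalsh(Sigma_n @ Sigma_n.T)   # spectrum of the Gram matrix
print(eigs.min(), eigs.max())
```

The deterministic equivalent Tn(z) of the paper approximates the resolvent of this Gram matrix for large N, n.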
The empirical eigenvalue distribution of a Gram matrix: from independence to stationarity
 Markov Proc. Rel. Fields 11 (2005)
Abstract

Cited by 13 (8 self)
Abstract. Consider an N × n matrix Zn = (Z^n_{j1 j2}) where the individual entries are a realization of a properly rescaled stationary Gaussian random field: Z^n_{j1 j2} = (1/√n) ∑_{(k1,k2)∈Z²} h(k1, k2) U(j1 − k1, j2 − k2), where h ∈ ℓ¹(Z²) is a deterministic complex summable sequence and (U(j1, j2); (j1, j2) ∈ Z²) is a sequence of independent complex Gaussian random variables with mean zero and unit variance. The purpose of this article is to study the limiting empirical distribution of the eigenvalues of Gram random matrices such as Zn Zn* and (Zn + An)(Zn + An)*, where An is a deterministic matrix with appropriate assumptions, in the case where n → ∞ and N/n → c ∈ (0, ∞). The proof relies on related results for matrices with independent but not identically distributed entries and substantially differs from related works in the literature (Boutet de Monvel et al. [3], Girko [7], etc.).
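The filtered stationary field above can be sketched numerically. Here h is an illustrative summable filter truncated to a 3 × 3 support (an assumption for the example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 100, 200
# Illustrative summable filter h, supported on a 3x3 window.
h = np.array([[0.5, 0.2, 0.1],
              [0.2, 0.1, 0.05],
              [0.1, 0.05, 0.02]])
# Complex Gaussian driving noise U, mean zero, unit variance, padded so the
# convolution sum over the filter support is defined for every entry of Z.
U = (rng.standard_normal((N + 2, n + 2))
     + 1j * rng.standard_normal((N + 2, n + 2))) / np.sqrt(2)
Z = np.zeros((N, n), dtype=complex)
for k1 in range(3):
    for k2 in range(3):
        # Z(j1, j2) = (1/sqrt(n)) sum_k h(k) U(j1 + k1, j2 + k2);
        # the sign of the shift is immaterial in distribution.
        Z += h[k1, k2] * U[k1:k1 + N, k2:k2 + n]
Z /= np.sqrt(n)
eigs = np.linalg.eigvalsh(Z @ Z.conj().T)    # spectrum of the Gram matrix Z Z*
print(eigs.min(), eigs.max())
```

For large N, n with N/n → c, the empirical distribution of these eigenvalues converges to the limit identified in the paper.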
A CLT for Information-theoretic statistics of Gram random matrices with a given variance profile
2008
Abstract

Cited by 10 (5 self)
Consider an N × n random matrix Yn = (Y^n_{ij}) where the entries are given by Y^n_{ij} = (σ_{ij}(n)/√n) X^n_{ij}, the X^n_{ij} being centered, independent and identically distributed random variables with unit variance, and (σ_{ij}(n); 1 ≤ i ≤ N, 1 ≤ j ≤ n) being an array of numbers we shall refer to as a variance profile. We study in this article the fluctuations of the random variable log det(Yn Yn* + ρ I_N), where Yn* is the Hermitian adjoint of Yn and ρ > 0 is an additional parameter. We prove that, when centered and properly rescaled, this random variable satisfies a Central Limit Theorem (CLT) and has a Gaussian limit whose parameters are identified. A complete description of the scaling parameter is given; in particular, it is shown that an additional term appears in this parameter when the 4th moment of the X_{ij}'s differs from the 4th moment of a Gaussian random variable. Such a CLT is of interest in the field of wireless communications.
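The statistic studied here is easy to simulate. The following Monte Carlo sketch draws samples of log det(Yn Yn* + ρ I_N) in the special case of a flat variance profile σ_ij(n) ≡ 1 (an illustrative simplification, not the general setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, rho, trials = 50, 100, 1.0, 400    # illustrative dimensions and parameter
stats = np.empty(trials)
for t in range(trials):
    Y = rng.standard_normal((N, n)) / np.sqrt(n)   # flat profile: Y_ij = X_ij / sqrt(n)
    _, logdet = np.linalg.slogdet(Y @ Y.T + rho * np.eye(N))
    stats[t] = logdet                              # log det(Y Y* + rho I_N)
centered = stats - stats.mean()
print(stats.mean(), centered.std())
```

The CLT of the paper says that, centered and properly rescaled, samples like `centered` are asymptotically Gaussian, with a variance correction when the entries' 4th moment is non-Gaussian.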
Central limit theorem for linear eigenvalue statistics of random matrices with ...
2009
Abstract

Cited by 9 (0 self)
We consider n × n real symmetric and Hermitian Wigner random matrices n^{-1/2}W with independent (modulo the symmetry condition) entries, and the (null) sample covariance matrices n^{-1}X*X with independent entries of the m × n matrix X. Assuming first that the 4th cumulant (excess) κ4 of the entries of W and X is zero and that their 4th moments satisfy a Lindeberg-type condition, we prove that linear statistics of eigenvalues of the above matrices satisfy the central limit theorem (CLT) as n → ∞, m → ∞, m/n → c ∈ [0, ∞), with the same variance as for Gaussian matrices provided the test functions of the statistics are smooth enough (essentially of class C^5). This is done by using a simple “interpolation trick” from the known results for Gaussian matrices and integration by parts, presented in the form of certain differentiation formulas. Then, using a more elaborate version of the techniques, we prove the CLT in the case of nonzero excess of the entries, again for essentially C^5 test functions. Here the variance of the statistics contains an additional term proportional to κ4. The proofs of all limit theorems follow essentially the same scheme.
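A distinguishing feature of these CLTs is that the linear statistic needs no 1/√n normalization: its fluctuations stay of order one as n grows. A minimal sketch (illustrative test function f(x) = x², Gaussian entries):

```python
import numpy as np

def linear_statistic(n, trials=300, seed=0):
    """Samples of the linear eigenvalue statistic Tr f(n^{-1/2} W) for f(x) = x^2,
    with W a real symmetric Wigner matrix with Gaussian entries."""
    rng = np.random.default_rng(seed)
    vals = np.empty(trials)
    for t in range(trials):
        A = rng.standard_normal((n, n))
        W = (A + A.T) / np.sqrt(2)           # symmetric, off-diagonal variance 1
        lam = np.linalg.eigvalsh(W / np.sqrt(n))
        vals[t] = np.sum(lam ** 2)           # Tr f(n^{-1/2} W), f(x) = x^2
    return vals

small, large = linear_statistic(30), linear_statistic(120)
# The sample spread stays bounded as n grows, as the unnormalized CLT predicts.
print(small.std(), large.std())
```

For Gaussian entries κ4 = 0, so this is the zero-excess case; heavier-tailed entries would add the κ4-proportional term to the limiting variance.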
Concentration of measure and spectra of random matrices: with applications to correlation matrices, elliptical distributions and beyond
 The Annals of Applied Probability (to appear), 2009
Abstract

Cited by 8 (5 self)
We place ourselves in the setting of high-dimensional statistical inference, where the number of variables p in a dataset of interest is of the same order of magnitude as the number of observations n. More formally, we study the asymptotic properties of correlation and covariance matrices under the setting that p/n → ρ ∈ (0, ∞), for general population covariance. We show that spectral properties of large-dimensional correlation matrices are similar to those of large-dimensional covariance matrices, for a large class of models studied in random matrix theory. We also derive a Marčenko–Pastur-type system of equations for the limiting spectral distribution of covariance matrices computed from data with elliptical distributions and generalizations of this family. The motivation for this study comes partly from the possible relevance of such distributional assumptions to problems in econometrics and portfolio optimization, as well as robustness questions for certain classical random matrix results. A mathematical theme of the paper is the important use we make of concentration inequalities.
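The similarity between correlation and covariance spectra is easy to observe numerically. A minimal sketch in the simplest case of identity population covariance (an illustrative special case; the paper treats general population covariance):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 400, 200                      # p/n -> rho = 0.5 (illustrative)
X = rng.standard_normal((n, p))      # n observations of p standard Gaussian variables
S = X.T @ X / n                      # sample covariance matrix
d = 1.0 / np.sqrt(np.diag(S))
R = d[:, None] * S * d[None, :]      # sample correlation matrix
cov_eigs = np.linalg.eigvalsh(S)
corr_eigs = np.linalg.eigvalsh(R)
# With identity population covariance both spectra approach the Marchenko-Pastur
# law of ratio rho, whose support edges are (1 -/+ sqrt(rho))^2.
print(cov_eigs.min(), cov_eigs.max())
print(corr_eigs.min(), corr_eigs.max())
```

Both spectra fall in essentially the same bulk, which is the kind of correspondence the paper establishes rigorously for a much larger class of models.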
A law of large numbers for finite-range dependent random matrices
 Comm. Pure Appl. Math
Abstract

Cited by 7 (0 self)
Abstract. We consider random Hermitian matrices in which distant above-diagonal entries are independent but nearby entries may be correlated. We find the limit of the empirical distribution of eigenvalues by combinatorial methods. We also prove that the limit has an algebraic Stieltjes transform, by an argument based on the dimension theory of Noetherian local rings.
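One concrete instance of such finite-range dependence (an illustrative construction, not taken from the paper) correlates horizontally adjacent above-diagonal entries through a moving average:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
# Each above-diagonal entry is a moving average of iid noise, so horizontally
# adjacent entries are correlated while distant entries remain independent.
G = rng.standard_normal((n, n + 1))
B = (G[:, :-1] + G[:, 1:]) / np.sqrt(2)      # unit variance, neighbour correlation 1/2
W = np.triu(B, 1)
W = W + W.T + np.diag(rng.standard_normal(n))   # Hermitian (real symmetric) matrix
eigs = np.linalg.eigvalsh(W / np.sqrt(n))
print(eigs.min(), eigs.max())
```

The empirical distribution of these eigenvalues converges to a deterministic limit, which for dependent entries generally differs from the semicircular law.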
LIMIT THEOREMS FOR SPECTRA OF RANDOM MATRICES WITH MARTINGALE STRUCTURE
2003
Abstract

Cited by 6 (0 self)
We study classical ensembles of real symmetric random matrices introduced by Eugene Wigner. We discuss Stein's method for the asymptotic approximation of expectations of functions of the normalized eigenvalue counting measure of high-dimensional matrices. The method is based on a differential equation for the density of the semicircular law.
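The kind of approximation discussed here can be checked against a simulated Wigner ensemble. A minimal sketch comparing a moment of the empirical eigenvalue measure with the corresponding semicircular-law integral (the test function x² is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)        # Wigner matrix; semicircular support [-2, 2]
eigs = np.linalg.eigvalsh(W)
empirical = np.mean(eigs ** 2)        # integral of x^2 against the empirical measure
semicircle = 1.0                      # integral of x^2 (1/2pi) sqrt(4 - x^2) dx = 1
print(empirical, semicircle)
```

For large n the empirical moment concentrates tightly around the semicircular value, which is the expectation Stein's method approximates.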
No eigenvalues outside the support of the limiting empirical spectral distribution of a separable covariance matrix
2007
The spectrum of kernel random matrices
2007
Abstract

Cited by 6 (2 self)
We place ourselves in the setting of high-dimensional statistical inference, where the number of variables p in a dataset of interest is of the same order of magnitude as the number of observations n. We consider the spectrum of certain kernel random matrices, in particular n × n matrices whose (i, j)-th entry is f(X_i′X_j/p) or f(‖X_i − X_j‖²/p), where p is the dimension of the data and the X_i are independent data vectors; here f is assumed to be a locally smooth function. The study is motivated by questions arising in statistics and computer science, where these matrices are used to perform, among other things, nonlinear versions of principal component analysis. Surprisingly, we show that in high dimensions, and for the models we analyze, the problem becomes essentially linear, which is at odds with heuristics sometimes used to justify the usage of these methods. The analysis also highlights certain peculiarities of models widely studied in random matrix theory and raises some questions about their relevance as tools to model high-dimensional data encountered in practice.
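The linearization phenomenon can be sketched numerically. Below, a kernel matrix with entries f(X_i′X_j/p) is compared against an informal linear surrogate built from a Taylor expansion of f (the surrogate form and the choice f = exp are illustrative assumptions, stated only up to lower-order corrections, not the paper's precise result):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 200, 400
X = rng.standard_normal((p, n))       # columns X_i: independent data vectors in R^p
G = X.T @ X / p                       # G_ij = X_i' X_j / p; diagonal concentrates near 1
K = np.exp(G)                         # kernel matrix with f = exp
# Informal high-dimensional surrogate (up to lower-order corrections):
#   f(G) ~ f(0) 11' + f'(0) G + (f(1) - f(0) - f'(0)) I,  with f'(0) = 1 for exp.
M = np.exp(0.0) * np.ones((n, n)) + G + (np.exp(1.0) - np.exp(0.0) - 1.0) * np.eye(n)
gap = np.linalg.norm(K - M, 2)        # operator-norm distance to the linear surrogate
print(gap, np.linalg.norm(K, 2))
```

The gap is small relative to the size of K itself: spectrally, the nonlinear kernel matrix behaves like a linear function of the Gram matrix, which is the "essentially linear" behavior the abstract describes.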