Results 1 – 9 of 9
Decoding by Linear Programming
, 2004
Abstract

Cited by 662 (15 self)
This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖_{ℓ1} := Σ_i |x_i|) min_{g ∈ R^n} ‖y − Ag‖_{ℓ1} provided that the support of the vector of errors is not too large, ‖e‖_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ · m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work [5]. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
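The linear-program recast mentioned in this abstract can be sketched numerically (an illustrative sketch, not the authors' code; the matrix sizes, corruption fraction, and all variable names are my own choices): minimize Σ t_i over (g, t) subject to −t ≤ y − Ag ≤ t, so that the optimum equals min_g ‖y − Ag‖_1.

```python
# l1 decoding as an LP (illustrative sketch): recover f from y = A f + e.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 20, 80, 10                  # input size, measurements, corruptions
A = rng.standard_normal((m, n))       # Gaussian coding matrix (my choice)
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, k, replace=False)] = 5.0 * rng.standard_normal(k)
y = A @ f + e

# Variables z = [g; t]: minimize sum(t) subject to -t <= y - A g <= t,
# written as [A, -I] z <= y and [-A, -I] z <= -y.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
f_hat = res.x[:n]
print(np.max(np.abs(f_hat - f)))      # recovery error of the LP decoder
```

With this much redundancy (m = 4n) and 12.5% of the outputs corrupted, the decoder recovers f up to solver tolerance, consistent with the abstract's claim.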
High dimensional statistical inference and random matrices
 IN: PROCEEDINGS OF INTERNATIONAL CONGRESS OF MATHEMATICIANS
, 2006
Abstract

Cited by 25 (1 self)
Multivariate statistical analysis is concerned with observations on several variables which are thought to possess some degree of interdependence. Driven by problems in genetics and the social sciences, it first flowered in the earlier half of the last century. Subsequently, random matrix theory (RMT) developed, initially within physics, and more recently widely in mathematics. While some of the central objects of study in RMT are identical to those of multivariate statistics, statistical theory was slow to exploit the connection. However, with vast data collection ever more common, data sets now often have as many or more variables than the number of individuals observed. In such contexts, the techniques and results of RMT have much to offer multivariate statistics. The paper reviews some of the progress to date.
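The high-dimensional regime this survey describes is easy to see in simulation (a minimal sketch, not from the paper; dimensions and names are my own): when the number of variables p is comparable to the number of observations n, the eigenvalues of a pure-noise sample covariance matrix spread over the Marchenko–Pastur interval [(1 − √γ)², (1 + √γ)²] with γ = p/n, rather than concentrating near the population value 1.

```python
# Eigenvalue spread of a pure-noise sample covariance matrix
# versus the Marchenko-Pastur support (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
p, n = 200, 400                       # p variables, n observations, gamma = 0.5
X = rng.standard_normal((p, n))
S = X @ X.T / n                       # sample covariance; population covariance is I
eigs = np.linalg.eigvalsh(S)

gamma = p / n
lower, upper = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
print(eigs.min(), eigs.max(), (lower, upper))
# The empirical eigenvalues fill [lower, upper] instead of clustering near 1.
```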
On asymptotics of eigenvectors of large sample covariance matrix
 Annals of Probab
Abstract

Cited by 15 (5 self)
Let {Xij}, i, j = …, be a double array of i.i.d. complex random variables with EX11 = 0, E|X11|^2 = 1 and E|X11|^4 < ∞, and let An = …
Multivariate analysis and Jacobi ensembles: Largest eigenvalue, Tracy–Widom limits and rates of convergence
 Ann. Statist
, 2008
Abstract

Cited by 11 (1 self)
Let A and B be independent, central Wishart matrices in p variables with common covariance and having m and n degrees of freedom, respectively. The distribution of the largest eigenvalue of (A+B)^{-1}B has numerous applications in multivariate statistics, but is difficult to calculate exactly. Suppose that m and n grow in proportion to p. We show that after centering and scaling, the distribution is approximated to second order, O(p^{-2/3}), by the Tracy–Widom law. The results are obtained for both complex and then real-valued data by using methods of random matrix theory to study the largest eigenvalue of the Jacobi unitary and orthogonal ensembles. Asymptotic approximations of Jacobi polynomials near the largest zero play a central role.
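The setup can be simulated directly (an illustrative sketch of the objects studied, not the paper's computation; dimensions and names are my own): with A and B independent Wisharts, the eigenvalues of (A+B)^{-1}B all lie in (0, 1), and it is the fluctuation of the largest one that the Tracy–Widom approximation describes.

```python
# Largest eigenvalue of (A + B)^{-1} B for independent Wishart matrices
# (illustrative sketch; in the paper m and n grow in proportion to p).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
p, m, n = 20, 60, 80
GA = rng.standard_normal((p, m))
GB = rng.standard_normal((p, n))
A = GA @ GA.T                         # Wishart(p, m), identity covariance
B = GB @ GB.T                         # Wishart(p, n), identity covariance

# Generalized eigenproblem B v = lambda (A + B) v gives the eigenvalues
# of (A + B)^{-1} B without forming an explicit inverse.
lams = eigh(B, A + B, eigvals_only=True)
print(lams.min(), lams.max())         # all eigenvalues lie strictly in (0, 1)
```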
Limiting Spectral Distributions of Large Dimensional Random Matrices
Abstract

Cited by 4 (0 self)
Models where the number of parameters increases with the sample size are becoming increasingly important in statistics. This necessitates a close look at the statistical properties of eigenvalues of random matrices whose dimension increases indefinitely. There are several properties of the eigenvalues that one would be interested in, and the literature in this area is already huge. In this article we focus on one important aspect: the existence and identification of the limiting spectral distribution (LSD) of the empirical distribution of the eigenvalues. We describe some of the general tools used in establishing the LSD and how they have been applied successfully to establish results on the LSD for certain types of matrices. Some of the matrices for which the LSD has been established, and the nature of the limit laws known, are described in detail. We also discuss a few open problems and partial solutions for some of these. We introduce a few new ideas which seem to hold some promise in this area. We also establish an invariance result for random Toeplitz matrices.
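As a concrete instance of an LSD (a minimal sketch, not from the article; the matrix size and names are my own): for a symmetric Wigner matrix scaled by 1/√n, the empirical eigenvalue distribution converges to the semicircular law supported on [−2, 2].

```python
# Empirical spectral distribution of a scaled Wigner matrix
# (illustrative sketch; the limiting law is the semicircle on [-2, 2]).
import numpy as np

rng = np.random.default_rng(3)
n = 500
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)        # symmetric, off-diagonal variance 1/n
eigs = np.linalg.eigvalsh(W)
print(eigs.min(), eigs.max())         # spectral edges approach -2 and +2
```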
The convergence of the empirical distribution of canonical correlation coefficients
Abstract
Suppose that {Xjk, j = 1, …, p1; k = 1, …, n} are independent and identically distributed (i.i.d.) real random variables with EX11 = 0 and EX11^2 = 1, that {Yjk, j = 1, …, p2; k = 1, …, n} are i.i.d. real random variables with EY11 = 0 and EY11^2 = 1, and that {Xjk, j = 1, …, p1; k = 1, …, n} are independent of {Yjk, j = 1, …, p2; k = 1, …, n}. This paper investigates the canonical correlation coefficients r1 ≥ r2 ≥ … ≥ rp1, whose squares λ1 = r1^2, λ2 = r2^2, …, λp1 = rp1^2 are the eigenvalues of the matrix …
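The snippet is truncated before the matrix is defined; presumably it is the standard product Sxx^{-1} Sxy Syy^{-1} Syx of sample (cross-)moment matrices, whose eigenvalues are the squared sample canonical correlations. A small sketch under that assumption (dimensions and names are my own):

```python
# Sample canonical correlation coefficients between two independent blocks
# (illustrative sketch; assumes the standard definition via sample moments).
import numpy as np

rng = np.random.default_rng(4)
p1, p2, n = 10, 15, 100
X = rng.standard_normal((p1, n))
Y = rng.standard_normal((p2, n))

Sxx, Syy = X @ X.T / n, Y @ Y.T / n
Sxy = X @ Y.T / n
M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
lams = np.sort(np.linalg.eigvals(M).real)[::-1]   # squared canonical correlations
r = np.sqrt(np.clip(lams, 0, 1))
print(r)   # p1 coefficients, each in [0, 1]
```

Even though X and Y are independent, the empirical coefficients are not close to 0 when p1 and p2 are comparable to n, which is what makes their limiting empirical distribution interesting.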
RANDOM MATRICES AND APPLICATIONS TO DIGITAL COMMUNICATIONS
, 2010
Abstract
It is with great pleasure that I dedicate this page to everyone who contributed, directly or indirectly, to the success of my thesis. The first people I wish to thank are Walid Hachem and Jamal Najim, my thesis advisors. I thank them for their invaluable scientific help, their availability, and their sound advice. May they find here my deepest thanks and gratitude. I am also grateful to Ahmed Elkharroubi for having supervised me within the Department of Mathematics and Computer Science at the Université Hassan II Ain