Results 1–10 of 88
Sparsistency and rates of convergence in large covariance matrices estimation
, 2009
"... This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probabi ..."
Abstract

Cited by 110 (12 self)
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori in the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n / n)^{1/2}, where s_n is the number of nonzero elements, p_n is the size of the covariance matrix, and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate at which the tuning parameter λ_n goes to 0 are made explicit and compared under different penalties. As a result, for the L1 penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements must be small: s'_n = O(p_n) at most, among O(p_n^2) parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where s'_n is the number of nonzero off-diagonal entries. On the other hand, with the SCAD or hard-thresholding penalty functions there is no such restriction.
The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs
"... Recent methods for estimating sparse undirected graphs for realvalued data in high dimensional problems rely heavily on the assumption of normality. We show how to use a semiparametric Gaussian copula—or “nonparanormal”—for high dimensional inference. Just as additive models extend linear models by ..."
Abstract

Cited by 92 (21 self)
Recent methods for estimating sparse undirected graphs for real-valued data in high-dimensional problems rely heavily on the assumption of normality. We show how to use a semiparametric Gaussian copula—or “nonparanormal”—for high-dimensional inference. Just as additive models extend linear models by replacing linear functions with a set of one-dimensional smooth functions, the nonparanormal extends the normal by transforming the variables by smooth functions. We derive a method for estimating the nonparanormal, study the method’s theoretical properties, and show that it works well in many examples.
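The marginal transforms of the nonparanormal can be sketched with a rank-based normal-scores map. This is a simplified stand-in, not the authors' estimator: it assumes no ties and ignores the Winsorization of the empirical CDF that the paper uses in high dimensions.

```python
from statistics import NormalDist

def normal_scores(x):
    """Rank-based normal-scores transform of one variable (assumes no ties).

    Each value is replaced by Phi^{-1}(rank / (n + 1)), a crude stand-in for
    the (Winsorized) empirical-CDF transform used for the nonparanormal.
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()
    return [nd.inv_cdf(r / (n + 1)) for r in ranks]
```

Applied column-wise to a data matrix, this yields approximately Gaussian margins, after which a standard sparse Gaussian graph estimator can be run on the transformed data.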
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence
, 2008
"... ..."
Optimal detection of sparse principal components in high dimension
, 2013
"... We perform a finite sample analysis of the detection levels for sparse principal components of a highdimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. Alas, computing this test is known to be NPcomplete in general, and we describe a computationally ..."
Abstract

Cited by 42 (4 self)
We perform a finite-sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. Alas, computing this test is known to be NP-complete in general, and we describe a computationally efficient alternative test using convex relaxations. Our relaxation is also proved to detect sparse principal components at near-optimal detection levels, and it performs well on simulated datasets. Moreover, using polynomial-time reductions from theoretical computer science, we bring significant evidence that our results cannot be improved, thus revealing an inherent trade-off between statistical and computational performance.
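The sparse eigenvalue statistic can be computed exactly by brute force for toy sizes; the exponential cost in the sparsity level k is precisely why the abstract turns to convex relaxations. A minimal sketch:

```python
from itertools import combinations
import numpy as np

def k_sparse_max_eigenvalue(S, k):
    """Largest eigenvalue over all k x k principal submatrices of S.

    This is the sparse eigenvalue statistic, NP-hard in general; the
    brute-force search below is feasible only for small p and k.
    """
    S = np.asarray(S)
    p = S.shape[0]
    best = -np.inf
    for support in combinations(range(p), k):
        sub = S[np.ix_(support, support)]
        best = max(best, np.linalg.eigvalsh(sub)[-1])
    return best
```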
Optimal Rates of Convergence for Sparse Covariance Matrix Estimation
 Submitted to the Annals of Statistics
"... This paper considers estimation ofsparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses. A major focus is on the derivation of a rate sharp minimax lower bound. The problem exhibits new features that are signifi ..."
Abstract

Cited by 34 (10 self)
This paper considers estimation of sparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses. A major focus is on the derivation of a rate-sharp minimax lower bound. The problem exhibits new features that are significantly different from those that occur in conventional nonparametric function estimation problems. Standard techniques fail to yield good results and new tools are thus needed. We first develop a lower bound technique that is particularly well suited for treating “two-directional” problems such as estimating sparse covariance matrices. The result can be viewed as a generalization of Le Cam’s method in one direction and Assouad’s Lemma in another. This lower bound technique is of independent interest and can be used for other matrix estimation problems. We then establish a rate-sharp minimax lower bound for estimating sparse covariance matrices under the spectral norm by applying the general lower bound technique. A thresholding estimator is shown to attain the optimal rate of convergence under the spectral norm. The results are then extended to the general matrix ℓ_w operator norms for 1 ≤ w ≤ ∞. In addition, we give a unified result on the minimax rate of convergence for sparse covariance matrix estimation under a class of Bregman divergence losses.
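A thresholding estimator of the kind analyzed here can be sketched in a few lines; the threshold lam is left to the caller, and the familiar sqrt(log p / n) scaling noted in the comment is the standard choice, not this paper's exact tuning:

```python
import numpy as np

def threshold_covariance(S, lam):
    """Hard-threshold the off-diagonal entries of a sample covariance matrix.

    Entries with |s_ij| < lam are set to zero; the diagonal is kept intact.
    A common choice is lam = C * sqrt(log(p) / n) for some constant C.
    """
    S = np.asarray(S, dtype=float)
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```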
A constrained ℓ1-minimization approach to sparse precision matrix estimation
 J. Amer. Statist. Assoc
, 2011
"... ar ..."
Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices
 Ann. Stat
, 2011
"... Testing covariance structure is of significant interest in many areas of statistical analysis and construction of compressed sensing matrices is an important problem in signal processing. Motivated by these applications, we study in this paper the limiting laws of the coherence of an n×p random matr ..."
Abstract

Cited by 30 (10 self)
Testing covariance structure is of significant interest in many areas of statistical analysis, and the construction of compressed sensing matrices is an important problem in signal processing. Motivated by these applications, we study in this paper the limiting laws of the coherence of an n×p random matrix in the high-dimensional setting where p can be much larger than n. Both the law of large numbers and the limiting distribution are derived. We then consider testing the bandedness of the covariance matrix of a high-dimensional Gaussian distribution, which includes testing for independence as a special case. The limiting laws of the coherence of the data matrix play a critical role in the construction of the test. We also apply the asymptotic results to the construction of compressed sensing matrices.
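On the compressed-sensing side, the coherence of a matrix (the largest absolute inner product between distinct ℓ2-normalized columns) is straightforward to compute; a minimal sketch:

```python
import numpy as np

def coherence(X):
    """Coherence of an n x p matrix: the maximum absolute inner product
    between distinct ell_2-normalized columns."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    G = np.abs(Xn.T @ Xn)
    np.fill_diagonal(G, 0.0)
    return G.max()
```

Applied to a centered data matrix whose columns are standardized, the same quantity is the maximum absolute sample correlation, which is the statistic whose limiting law drives the bandedness test described above.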
Minimax rates of estimation for sparse PCA in high dimensions
, 2012
"... We study sparse principal components analysis in the highdimensional setting, where p (the number of variables) can be much larger than n (the number of observations). We prove optimal, nonasymptotic lower and upper bounds on the minimax estimation error for the leading eigenvector when it belongs ..."
Abstract

Cited by 29 (3 self)
We study sparse principal components analysis in the high-dimensional setting, where p (the number of variables) can be much larger than n (the number of observations). We prove optimal, non-asymptotic lower and upper bounds on the minimax estimation error for the leading eigenvector when it belongs to an ℓ_q ball for q ∈ [0, 1]. Our bounds are sharp in p and n for all q ∈ [0, 1] over a wide class of distributions. The upper bound is obtained by analyzing the performance of ℓ_q-constrained PCA. In particular, our results provide convergence rates for ℓ1-constrained PCA.
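One simple heuristic for the q = 0 (cardinality-constrained) case is power iteration with hard truncation of the iterate. This particular algorithm is an illustration only; the paper analyzes ℓ_q-constrained PCA as an estimator, not this procedure:

```python
import numpy as np

def truncated_power_iteration(S, k, iters=100):
    """Approximate k-sparse leading eigenvector: power iteration with the
    iterate hard-truncated to its k largest-magnitude coordinates."""
    S = np.asarray(S)
    p = S.shape[0]
    v = np.zeros(p)
    v[int(np.argmax(np.diag(S)))] = 1.0   # deterministic starting vector
    for _ in range(iters):
        w = S @ v
        keep = np.argsort(np.abs(w))[-k:]  # indices of k largest entries
        mask = np.zeros(p)
        mask[keep] = 1.0
        w *= mask
        v = w / np.linalg.norm(w)
    return v
```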
Minimax Estimation of Large Covariance Matrices under ℓ1-Norm
"... Driven byawide rangeofapplicationsin highdimensionaldata analysis, therehas been significant recent interest in the estimation of large covariance matrices. In this paper, we consideroptimal estimation ofacovariancematrix as well asits inverseover several commonly used parameter spaces under the ma ..."
Abstract

Cited by 24 (3 self)
Driven by a wide range of applications in high-dimensional data analysis, there has been significant recent interest in the estimation of large covariance matrices. In this paper, we consider optimal estimation of a covariance matrix as well as its inverse over several commonly used parameter spaces under the matrix ℓ1 norm. Both minimax lower and upper bounds are derived. The lower bounds are established by using hypothesis testing arguments, where at the core are a novel construction of collections of least favorable multivariate normal distributions and the bounding of the affinities between mixture distributions. The lower bound analysis also provides insight into where the difficulties of the covariance matrix estimation problem arise. A specific thresholding estimator and tapering estimator are constructed and shown to be minimax rate optimal. The optimal rates of convergence established in the paper can serve as a benchmark for the performance of covariance matrix estimation methods.
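A tapering estimator can be sketched as an entrywise product of the sample covariance with distance-based weights. The linear-decay weights below follow the common Cai–Zhang–Zhou scheme and are an assumption here; the specific construction in this paper may differ:

```python
import numpy as np

def tapering_weights(p, k):
    """Tapering weights w_ij: equal to 1 when |i - j| <= k/2, decaying
    linearly to 0 at |i - j| >= k."""
    kh = k / 2.0
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return np.clip(2.0 - d / kh, 0.0, 1.0)

def taper_covariance(S, k):
    """Entrywise product of the sample covariance with tapering weights."""
    S = np.asarray(S, dtype=float)
    return S * tapering_weights(S.shape[0], k)
```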
Minimax bounds for sparse PCA with noisy highdimensional data
 ANN. STATIST
, 2013
"... We study the problem of estimating the leading eigenvectors of a highdimensional population covariance matrix based on independent Gaussian observations. We establish a lower bound on the minimax risk of estimators under the l2 loss, in the joint limit as dimension and sample size increase to infin ..."
Abstract

Cited by 21 (1 self)
We study the problem of estimating the leading eigenvectors of a high-dimensional population covariance matrix based on independent Gaussian observations. We establish a lower bound on the minimax risk of estimators under the ℓ2 loss, in the joint limit as dimension and sample size increase to infinity, under various models of sparsity for the population eigenvectors. The lower bound on the risk points to the existence of different regimes of sparsity of the eigenvectors. We also propose a new method for estimating the eigenvectors by a two-stage coordinate selection scheme.
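A two-stage scheme in this spirit (select a small set of coordinates first, then run PCA on the selected block) can be sketched as follows. The diagonal-variance selection rule is an illustrative assumption, not necessarily the authors' selection scheme:

```python
import numpy as np

def two_stage_eigenvector(S, m):
    """Two-stage sketch: (1) select the m coordinates with the largest
    variances (diagonal entries of S); (2) take the leading eigenvector of
    the selected principal submatrix and embed it back into R^p."""
    S = np.asarray(S, dtype=float)
    p = S.shape[0]
    idx = np.argsort(np.diag(S))[-m:]       # stage 1: coordinate selection
    sub = S[np.ix_(idx, idx)]
    _, vecs = np.linalg.eigh(sub)           # stage 2: PCA on the submatrix
    v = np.zeros(p)
    v[idx] = vecs[:, -1]
    return v
```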