Results 1–10 of 16
Sparse inverse covariance selection via alternating linearization methods, 2010
Abstract

Cited by 60 (9 self)
Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum-likelihood problem with an ℓ1-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem’s special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an ϵ-optimal solution in O(1/ϵ) iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.
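The ℓ1-penalized maximum-likelihood problem in this abstract is the well-known graphical lasso objective. The paper's alternating linearization solver is not in standard libraries, but scikit-learn's GraphicalLasso solves the same estimation problem; a minimal sketch (the chain graph, sample size, and penalty alpha=0.05 are illustrative choices, not values from the paper):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision (inverse covariance): a chain graph on 5 nodes,
# so only adjacent nodes are conditionally dependent.
prec = np.eye(5)
for i in range(4):
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

# l1-penalized maximum-likelihood estimate of the precision matrix.
model = GraphicalLasso(alpha=0.05).fit(X)
est = model.precision_

# Off-diagonal entries shrunk toward zero indicate conditional independence;
# the chain's neighbor entries remain clearly nonzero.
print(np.round(est, 2))
```

Zero (or near-zero) entries of `est` recover the missing edges of the chain graph, which is exactly the structure-learning interpretation the abstract describes.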
Sparse Brain Network Recovery under Compressed Sensing, 2010
Abstract

Cited by 8 (4 self)
Partial correlation is a useful connectivity measure for brain networks, especially when one needs to remove confounding effects in highly correlated networks. Since it is difficult to estimate the exact partial correlation in the small-n large-p situation, a sparseness constraint is generally introduced. In this paper, we consider the sparse linear regression model with an ℓ1-norm penalty, a.k.a. the least absolute shrinkage and selection operator (LASSO), for estimating sparse brain connectivity. LASSO is a well-known decoding algorithm in compressed sensing (CS). CS theory states that LASSO can reconstruct the exact sparse signal even from a small set of noisy measurements. We briefly show that the penalized linear regression for partial correlation estimation is related to CS, which opens the possibility of using the proposed framework for sparse brain network recovery. As an illustration, we construct sparse brain networks of 97 regions of interest (ROIs) obtained from FDG-PET data for children with autism spectrum disorder (ASD) and pediatric control (PedCon) subjects. As model validation, we check their reproducibilities by leave-one-out cross validation and compare the clustered structures derived from the brain networks of ASD and PedCon. Keywords: Brain Connectivity, Compressed Sensing, Partial Correlation, LASSO.
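The LASSO-based connectivity estimate described above can be sketched as neighborhood selection: regress each node on the remaining nodes with an ℓ1 penalty and read edges off the nonzero coefficients. A toy version with scikit-learn (the planted edge, dimensions, penalty alpha, and AND-symmetrization rule are illustrative assumptions, not details from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 60, 10                      # small-n regime, scaled down from 97 ROIs
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]           # plant one true connection: node 0 -- node 1

# Neighborhood selection: regress each node on all the remaining nodes.
adj = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = np.delete(np.arange(p), j)
    lasso = Lasso(alpha=0.2).fit(X[:, others], X[:, j])
    adj[j, others] = lasso.coef_ != 0

# Symmetrize with an AND rule: keep an edge only if both regressions select it.
network = adj & adj.T
print(network[0, 1])
```

The sparsity of each regression translates directly into a sparse adjacency matrix, which is the "sparse brain network recovery" the abstract refers to.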
NeuroImage 61 (2012) 622–632
Multi-source feature learning for joint analysis of incomplete multiple heterogeneous ...
Discriminative Brain Network Using Sparse Regression
Abstract
The correlation and the partial correlation are widely used for measuring connectivity of an undirected brain network. The brain network is known to have a small-world, scale-free topology, but its structure changes drastically depending on how correlations are thresholded. No exact threshold criterion is known, apart from statistical significance, which is usually determined heuristically. In this paper, we propose a novel framework for automatically determining the threshold based on the clustered structure of the network. By building a sparse linear regression framework on correlations, we can exploit the inherent sparseness of the brain network, making the threshold easy to determine. We show that the proposed method finds biologically meaningful connectivity by best representing the data characteristics, and that it discriminates brain networks between groups very well.
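The threshold sensitivity this abstract describes is easy to see numerically: the same correlation matrix yields very different networks as the cutoff varies. A toy illustration (the data, planted cluster, and thresholds are hypothetical; this does not implement the paper's automatic criterion):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 30
X = rng.standard_normal((n, p))
X[:, :5] += rng.standard_normal((n, 1))   # one correlated cluster of 5 nodes

C = np.corrcoef(X, rowvar=False)
np.fill_diagonal(C, 0.0)

# Edge count as a function of the correlation threshold: the network's
# density (and hence its apparent topology) changes sharply with the cutoff.
for t in (0.1, 0.3, 0.5):
    edges = int(np.sum(np.abs(C) > t) // 2)
    print(f"threshold {t}: {edges} edges")
```

At a loose threshold the noise correlations swamp the planted cluster; at a strict one even true edges start to drop out, which is the ambiguity an automatic, sparsity-based criterion is meant to resolve.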
On The Equivalent of Low-Rank Regressions and Linear Discriminant Analysis Based Regressions
Abstract
The low-rank regression model has been studied and applied to capture the underlying class/task correlation patterns, so that regression/classification results can be enhanced. In this paper, we prove that the low-rank regression model is equivalent to doing linear regression in the linear discriminant analysis (LDA) subspace. Our new theory reveals the learning mechanism of low-rank regression and shows that the low-rank structures extracted from classes/tasks are connected to the LDA projection results. Thus, low-rank regression works efficiently for high-dimensional data. Moreover, we propose new discriminant low-rank ridge regression and sparse low-rank regression methods. Both are equivalent to doing regularized regression in the regularized LDA subspace. These new regularized objectives provide better data-mining results than existing low-rank regression in both theoretical and empirical validation. We evaluate our discriminant low-rank regression methods on six benchmark datasets. In all empirical results, our discriminant low-rank models consistently outperform the corresponding full-rank methods.
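The low-rank regression model this abstract analyzes can be sketched as reduced-rank regression: fit ordinary least squares on class-indicator targets, then project the fit onto its top-r directions. A minimal numerical sketch (dimensions, rank, and the random data are illustrative; the paper's LDA-subspace equivalence proof is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 200, 20, 4               # samples, features, classes
X = rng.standard_normal((n, d))
labels = rng.integers(0, k, size=n)
Y = np.eye(k)[labels]              # one-hot class-indicator targets

# Full-rank least squares: W_ols = argmin ||Y - XW||_F^2
W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Reduced-rank (low-rank) regression: project the OLS fitted values onto
# rank r via an SVD (Eckart-Young in the prediction space).
r = 2
_, _, Vt = np.linalg.svd(X @ W_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]              # rank-r projection on the response side
W_rr = W_ols @ P

print("rank of W_rr:", np.linalg.matrix_rank(W_rr))
```

The constrained coefficient matrix `W_rr` has rank at most r; the paper's claim is that this rank-r fit coincides with ordinary regression carried out in the r-dimensional LDA subspace of X.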