Results 1 – 7 of 7
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Abstract

Cited by 22 (3 self)
The ℓ1 regularized Gaussian maximum likelihood estimator has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to other state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, but with some modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and also present experimental results using synthetic and real application data that demonstrate the considerable improvements in performance of our method when compared to other state-of-the-art methods.
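The estimator this abstract refers to minimizes tr(SX) − log det X + λ‖X‖₁ over positive-definite X, where S is the sample covariance. As an illustrative sketch only (scikit-learn's `GraphicalLasso` solves this same objective with coordinate descent, not the Newton-type method proposed in the paper), one can recover a sparse precision matrix like this; the toy data and the value `alpha=0.05` are arbitrary choices for the example:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Draw samples from a Gaussian whose precision (inverse covariance) is sparse.
rng = np.random.default_rng(0)
n_vars = 5
precision = np.eye(n_vars)
precision[0, 1] = precision[1, 0] = 0.4  # one conditional dependency
cov = np.linalg.inv(precision)
samples = rng.multivariate_normal(np.zeros(n_vars), cov, size=2000)

# Solve min_X  tr(S X) - log det X + alpha * ||X||_1.
model = GraphicalLasso(alpha=0.05).fit(samples)
est_precision = model.precision_

# (Near-)zero off-diagonal entries suggest conditional independence.
print(np.round(est_precision, 2))
```

Zeros in `est_precision` correspond to absent edges in the estimated Gaussian graphical model, which is what "recovering the graph structure" means here.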
Sparse inverse covariance selection via alternating linearization methods
Abstract

Cited by 17 (1 self)
Gaussian graphical models are of great interest in statistical learning. Because the conditional independencies between different nodes correspond to zero entries in the inverse covariance matrix of the Gaussian distribution, one can learn the structure of the graph by estimating a sparse inverse covariance matrix from sample data, by solving a convex maximum likelihood problem with an ℓ1-regularization term. In this paper, we propose a first-order method based on an alternating linearization technique that exploits the problem's special structure; in particular, the subproblems solved in each iteration have closed-form solutions. Moreover, our algorithm obtains an ϵ-optimal solution in O(1/ϵ) iterations. Numerical experiments on both synthetic and real data from gene association networks show that a practical version of this algorithm outperforms other competitive algorithms.
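In the standard alternating-linearization setup for this problem, the two closed-form subproblems the abstract mentions are proximal steps: the prox of the ℓ1 term (elementwise soft thresholding) and the prox of the −log det term (a shift of the eigenvalues). A minimal sketch of just those two building blocks, assuming that formulation:

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_neg_logdet(a, t):
    """Closed-form prox of t*(-log det) at the symmetric matrix a.

    argmin_X  -log det X + (1/(2t)) ||X - a||_F^2  is obtained by replacing
    each eigenvalue d_i of a with (d_i + sqrt(d_i**2 + 4t)) / 2, which is
    the positive root of x**2 - d_i*x - t = 0.
    """
    d, q = np.linalg.eigh(a)
    d_new = (d + np.sqrt(d ** 2 + 4.0 * t)) / 2.0
    return (q * d_new) @ q.T

x = soft_threshold(np.array([-1.5, 0.2, 3.0]), 0.5)  # entries shrunk by 0.5
```

Because both subproblems reduce to an elementwise formula or a single eigendecomposition, each iteration of the alternating scheme is cheap relative to a full second-order step.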
Newton-Like Methods for Sparse Inverse Covariance Estimation
, 2012
Abstract

Cited by 6 (0 self)
We propose two classes of second-order optimization methods for solving the sparse inverse covariance estimation problem. The first approach, which we call the Newton-LASSO method, minimizes a piecewise quadratic model of the objective function at every iteration to generate a step. We employ the fast iterative shrinkage-thresholding algorithm (FISTA) to solve this subproblem. The second approach, which we call the Orthant-Based Newton method, is a two-phase algorithm that first identifies an orthant face and then minimizes a smooth quadratic approximation of the objective function using the conjugate gradient method. These methods exploit the structure of the Hessian to efficiently compute the search direction and to avoid explicitly storing the Hessian. We show that quasi-Newton methods are also effective in this context, and describe a limited-memory BFGS variant of the orthant-based Newton method. We present numerical results suggesting that all the techniques described in this paper have attractive properties and constitute useful tools for solving the sparse inverse covariance estimation problem. Comparisons with the method implemented in the QUIC software package [1] are presented.
Penalty Decomposition Methods for ℓ0-Norm Minimization
, 2010
Abstract

Cited by 4 (2 self)
In this paper we consider general ℓ0-norm minimization problems, that is, problems with the ℓ0-norm appearing in either the objective function or a constraint. In particular, we first reformulate the ℓ0-norm constrained problem as an equivalent rank minimization problem and then apply the penalty decomposition (PD) method proposed in [33] to solve the latter problem. By utilizing the special structures, we then transform all matrix operations of this method into vector operations and obtain a PD method that only involves vector operations. Under some suitable assumptions, we establish that any accumulation point of the sequence generated by the PD method satisfies a first-order optimality condition that is generally stronger than one natural optimality condition. We further extend the PD method to solve the problem with the ℓ0-norm appearing in the objective function. Finally, we test the performance of our PD methods by applying them to compressed sensing, sparse logistic regression and sparse inverse covariance selection. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. Key words: ℓ0-norm minimization, penalty decomposition methods, compressed sensing, sparse logistic regression, sparse inverse covariance selection
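Penalty decomposition schemes for ℓ0-constrained problems typically alternate between a smooth minimization and a step that enforces the sparsity constraint. One subroutine that has a simple closed form, and which illustrates why the ℓ0 constraint is tractable in such splittings, is projection onto the set {x : ‖x‖₀ ≤ k}: keep the k largest-magnitude entries and zero the rest. A hypothetical helper showing only that projection, not the papers' full PD method:

```python
import numpy as np

def project_l0(v, k):
    """Project v onto {x : ||x||_0 <= k} by keeping the k largest-|.| entries."""
    out = np.zeros_like(v)
    if k <= 0:
        return out
    idx = np.argsort(np.abs(v))[-k:]  # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out

print(project_l0(np.array([0.1, -3.0, 0.5, 2.0]), 2))  # keeps -3.0 and 2.0
```

The projection is exact despite the nonconvexity of the ℓ0 ball, which is what lets alternating schemes handle the constraint without relaxing it to ℓ1.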
A Divide-and-Conquer Procedure for Sparse Inverse Covariance Estimation
Abstract
We consider the composite log-determinant optimization problem, arising from the ℓ1 regularized Gaussian maximum likelihood estimator of a sparse inverse covariance matrix, in a high-dimensional setting with a very large number of variables. Recent work has shown this estimator to have strong statistical guarantees in recovering the true structure of the sparse inverse covariance matrix, or alternatively the underlying graph structure of the corresponding Gaussian Markov Random Field, even in very high-dimensional regimes with a limited number of samples. In this paper, we are concerned with the computational cost of solving the above optimization problem. Our proposed algorithm partitions the problem into smaller subproblems, and uses the solutions of the subproblems to build a good approximation for the original problem. Our key idea for the divide step is as follows: we first derive a tractable bound on the quality of the approximate solution obtained from solving the subdivided problems. Based on this bound, we propose a clustering algorithm that attempts to minimize it, in order to find effective partitions of the variables. For the conquer step, we use the approximate solution, i.e., the solution resulting from solving the subproblems, as an initial point for solving the original problem, and thereby achieve a much faster computational procedure.
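The divide/conquer idea can be sketched as: cluster the variables, solve a small sparse-inverse-covariance problem per cluster, and assemble the results into a block-diagonal warm start for the full problem. The sketch below is only illustrative: it uses spectral clustering on an |covariance| affinity as a stand-in for the paper's bound-minimizing clustering, and random data in place of a real problem instance:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.covariance import empirical_covariance, graphical_lasso

rng = np.random.default_rng(1)
samples = rng.standard_normal((500, 8))
S = empirical_covariance(samples)

# Divide: group variables whose covariances are large in magnitude.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(np.abs(S))

# Conquer: solve each small subproblem and place its precision estimate
# on the corresponding diagonal block; theta0 can then warm-start the
# full-dimensional solver.
theta0 = np.zeros_like(S)
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    _, prec = graphical_lasso(S[np.ix_(idx, idx)], alpha=0.1)
    theta0[np.ix_(idx, idx)] = prec
```

Each subproblem involves only a fraction of the variables, so the dominant log-determinant computations shrink from the full dimension to the block sizes.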
An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP