Results 1 - 4 of 4
Learning Graphical Models With Hubs
Abstract

Cited by 1 (0 self)
We consider the problem of learning a high-dimensional graphical model in which certain hub nodes are highly connected to many other nodes. Many authors have studied the use of an ℓ1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes. We illustrate our proposal on a webpage data set and a gene expression data set.
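To make the hub penalty concrete, here is a minimal NumPy sketch of a row-column overlap style penalty under a decomposition Theta = Z + V + Vᵀ, where Z carries sparse "background" edges and a column-wise group norm on V encourages only a few hub columns. The decomposition and the weights `lam1`-`lam3` are illustrative; the paper's exact penalty may differ.

```python
import numpy as np

def hub_penalty(Z, V, lam1, lam2, lam3):
    """Sketch of a row-column overlap style penalty for Theta = Z + V + V.T:
    an l1 norm keeps Z sparse, while a column-wise l2 group norm on V
    encourages only a few nonzero columns, i.e. a few hub nodes.
    Diagonal entries are left unpenalized (a common convention)."""
    off = ~np.eye(Z.shape[0], dtype=bool)       # mask of off-diagonal entries
    sparse_part = lam1 * np.abs(Z[off]).sum()   # sparse background edges
    hub_l1 = lam2 * np.abs(V[off]).sum()        # sparsity within hub columns
    V_off = np.where(off, V, 0.0)               # zero the diagonal of V
    hub_group = lam3 * np.linalg.norm(V_off, axis=0).sum()  # few hub columns
    return sparse_part + hub_l1 + hub_group
```

Because each term is a norm composed with a linear map, the overall penalty is convex, which is what allows the ADMM solver mentioned in the abstract.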
Maximization of Approximately Submodular Functions
Abstract
We study the problem of maximizing a function that is approximately submodular under a cardinality constraint. Approximate submodularity implicitly appears in a wide range of applications, since in many cases errors in evaluating a submodular function break submodularity. Say that F is ε-approximately submodular if there exists a submodular function f such that (1 − ε)f(S) ≤ F(S) ≤ (1 + ε)f(S) for all subsets S. We are interested in characterizing the query complexity of maximizing F subject to a cardinality constraint k as a function of the error level ε > 0. We provide both lower and upper bounds: for ε > n^(-1/2) we show an exponential query-complexity lower bound. In contrast, when ε < 1/k or under a stronger bounded curvature assumption, we give constant approximation algorithms.
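For intuition on the upper-bound regime ε < 1/k, the algorithm analyzed in such settings is typically a variant of the standard greedy rule, which adds the element with the largest marginal gain at each step. A minimal sketch of that classic greedy (the names `F`, `ground_set`, `k` are illustrative; this is not necessarily the paper's exact procedure):

```python
def greedy_max(F, ground_set, k):
    """Standard greedy for max F(S) subject to |S| <= k: repeatedly add the
    element with the largest marginal gain F(S + e) - F(S). For submodular F
    this gives a (1 - 1/e)-approximation; for eps-approximately submodular F
    with eps < 1/k, greedy-style algorithms retain a constant factor."""
    S = set()
    for _ in range(k):
        best, gain = None, float('-inf')
        for e in ground_set - S:
            g = F(S | {e}) - F(S)   # marginal gain of adding e
            if g > gain:
                best, gain = e, g
        if best is None:            # ground set exhausted
            break
        S.add(best)
    return S
```

Each round makes O(n) queries to F, so greedy uses only O(nk) queries in total, in contrast with the exponential lower bound for large ε.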
SPARSE CODING WITH A GLOBAL CONNECTIVITY CONSTRAINT
Abstract
Basis pursuit via sparse coding techniques has generally enforced sparseness by using L1-type norms on the coefficients of the bases. When applied to natural scenes, these algorithms famously retrieve the Gabor-like basis functions of the primary visual cortex (V1) of the mammalian brain. In this paper, inspired further by the architecture of the brain, we propose a technique that not only retrieves the Gabor basis but does so while respecting global power-law-type connectivity patterns. Such global constraints are beneficial from a biological perspective in terms of efficient wiring, robustness, etc. We draw on the similarity between sparse coding and neural networks to formulate the problem and impose such global connectivity patterns.

Index Terms: sparse coding; scale-free networks; biologically inspired
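As background for the L1-type sparseness the abstract refers to, a minimal ISTA (iterative soft-thresholding) sketch for the standard sparse coding objective min_a ½‖x − Da‖² + λ‖a‖₁, one common solver for this kind of problem and not necessarily the one used in the paper:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1, i.e. inferring sparse
    coefficients a for a signal x under a fixed dictionary D."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the smooth term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

The soft-threshold step is what zeroes out small coefficients; a global connectivity constraint of the kind the paper proposes would add a further penalty on top of this per-coefficient one.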
Inferring Block Structure of Graphical Models in Exponential Families
Abstract
Learning the structure of a graphical model is a fundamental problem, and it is used extensively to infer the relationships between random variables. In many real-world applications, we usually have some prior knowledge about the underlying graph structure, such as its degree distribution and block structure. In this paper, we propose a novel generative model for describing the block structure in general exponential families, and optimize it by an Expectation-Maximization (EM) algorithm with variational Bayes. Experimental results show that our method performs well on both synthetic and real data. Furthermore, our method can predict overlapping block structure of a graphical model in general exponential families.
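The abstract does not spell out the model, but variational EM for block structure commonly follows the mean-field pattern used for stochastic block models: alternately update soft block memberships tau given block-interaction probabilities B and a prior pi over blocks. A generic illustration of such an E-step (a standard SBM-style update, not the paper's exponential-family model):

```python
import numpy as np

def vem_estep(A, tau, pi, B, n_iter=20):
    """Naive mean-field E-step for a stochastic block model: given adjacency
    A (n x n, zero diagonal), block prior pi (K,), and block-pair edge
    probabilities B (K x K), iteratively refine soft memberships tau (n x K)."""
    n, K = tau.shape
    logB, log1mB = np.log(B), np.log(1.0 - B)
    for _ in range(n_iter):
        # S[i, k]: expected log-likelihood of node i's edges and non-edges
        # if node i is in block k, under the current soft labels of the others
        S = A @ tau @ logB.T + (1.0 - A - np.eye(n)) @ tau @ log1mB.T
        logtau = np.log(pi) + S
        logtau -= logtau.max(axis=1, keepdims=True)   # for numerical stability
        tau = np.exp(logtau)
        tau /= tau.sum(axis=1, keepdims=True)         # renormalize each row
    return tau
```

An M-step would then re-estimate pi and B from the expected block assignments; because tau is a soft (row-stochastic) membership matrix, this style of model can also express the overlapping block structure mentioned in the abstract.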