Results 1 - 5 of 5
The hidden life of latent variables: Bayesian learning with mixed graph models, 2008
Abstract

Cited by 7 (3 self)
Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot use a DAG to represent the independencies over a subset of variables in a larger DAG. Directed mixed graphs (DMGs) are a representation that includes DAGs as a special case, and overcomes this limitation. This paper introduces algorithms for performing Bayesian inference in Gaussian and probit DMG models. An important requirement for inference is the characterization of the distribution over parameters of the models. We introduce a new distribution for covariance matrices of Gaussian DMGs. We discuss and illustrate how several Bayesian machine learning tasks can benefit from the principle presented here: the power to model dependencies that are generated from hidden variables, but without necessarily modelling such variables explicitly.
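The limitation the abstract describes can be conveyed by a quick simulation (a hedged sketch; the coefficients and noise scales below are arbitrary illustrative choices, not from the paper): marginalizing a hidden common cause h out of the DAG x ← h → y leaves x and y dependent even though neither causes the other, which is exactly the dependence a bidirected edge x ↔ y in a DMG represents.

```python
import random

random.seed(0)

# Hidden common cause: x <- h -> y. After marginalizing h out, the
# observed pair (x, y) remains dependent; a DMG encodes this with a
# bidirected edge rather than an explicit latent variable.
a, b = 1.0, 1.0          # arbitrary structural coefficients (illustrative)
n = 100_000

xs, ys = [], []
for _ in range(n):
    h = random.gauss(0, 1)                 # latent variable, never observed
    xs.append(a * h + random.gauss(0, 0.5))
    ys.append(b * h + random.gauss(0, 0.5))

mx = sum(xs) / n
my = sum(ys) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Population covariance is a*b = 1 despite there being no direct edge
# between x and y: the dependence comes entirely from the hidden h.
print(round(cov_xy, 2))
```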
Clique Matrices for Statistical Graph Decomposition and Parameterising Restricted Positive Definite Matrices
Abstract

Cited by 3 (2 self)
We introduce Clique Matrices as an alternative representation of undirected graphs, generalising the incidence matrix representation. Here we use clique matrices to decompose a graph into a set of possibly overlapping clusters, defined as well-connected subsets of vertices. The decomposition is based on a statistical description which encourages clusters to be well connected and few in number. Inference is carried out using a variational approximation. Clique matrices also play a natural role in parameterising positive definite matrices under zero constraints on elements of the matrix. We show that clique matrices can parameterise all positive definite matrices restricted according to a decomposable graph, and form a structured Factor Analysis approximation in the non-decomposable case.
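The parameterisation idea can be sketched on a toy example (the graph, clique matrix, and weights below are made up for illustration, not taken from the paper): for the decomposable graph on vertices {0,1,2,3} with cliques {0,1,2} and {2,3}, forming Σ = A·Aᵀ + εI, where A is a weighted version of the vertex-by-clique matrix, forces the entries for non-adjacent pairs to be exactly zero while keeping Σ positive definite.

```python
# Vertices 0..3; cliques C1 = {0,1,2}, C2 = {2,3}.
# Clique matrix Z: rows = vertices, columns = cliques (1 if vertex in clique).
Z = [[1, 0],
     [1, 0],
     [1, 1],
     [0, 1]]

# Weighted clique matrix A: nonzero only where Z is nonzero
# (weights are arbitrary illustrative numbers).
A = [[0.9, 0.0],
     [0.4, 0.0],
     [0.7, 1.1],
     [0.0, 0.8]]

eps = 0.1  # ridge term makes Sigma = A A^T + eps*I strictly positive definite

n = len(A)
Sigma = [[sum(A[i][k] * A[j][k] for k in range(2)) + (eps if i == j else 0.0)
          for j in range(n)] for i in range(n)]

# Vertex pairs (0,3) and (1,3) share no clique, so they are non-adjacent
# in the graph and their covariance entries are exactly zero.
print(Sigma[0][3], Sigma[1][3], Sigma[2][3])
```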
Identifying Graph Clusters using Variational Inference and links to
Abstract

Cited by 1 (0 self)
Finding clusters of well-connected nodes in a graph is useful in many domains, including social network, Web and molecular interaction analyses. From a computational viewpoint, finding these clusters or graph communities is a difficult problem. We consider the framework of Clique Matrices to decompose a graph into a set of possibly overlapping clusters, defined as well-connected subsets of vertices. The decomposition is based on a statistical description which encourages clusters to be well connected and few in number. The formal intractability of inferring the clusters is addressed using a variational approximation which has links to mean-field theories in statistical mechanics. Clique matrices also play a natural role in parameterising positive definite matrices under zero constraints on elements of the matrix. We show that clique matrices can parameterise all positive definite matrices restricted according to a decomposable graph and form a structured Factor Analysis approximation in the non-decomposable case.
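The flavour of the mean-field connection can be conveyed with a toy update scheme (a hypothetical simplification, not the paper's actual variational algorithm): each node holds a distribution q over cluster labels and repeatedly reweights it by how strongly its neighbours favour each label, exactly in the spirit of a mean-field fixed-point iteration.

```python
import math

# Toy graph: two disconnected triangles {0,1,2} and {3,4,5}.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
n, k = 6, 2
adj = [[0] * n for _ in range(n)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1

# q[i][c]: belief that node i belongs to cluster c. Seed nodes 0 and 3
# with opposite leanings; everything else starts uniform.
q = [[0.5, 0.5] for _ in range(n)]
q[0] = [0.9, 0.1]
q[3] = [0.1, 0.9]

beta = 2.0  # coupling strength (arbitrary)
for _ in range(20):
    new_q = []
    for i in range(n):
        # Mean-field-style update: each label's score grows with the
        # neighbours' current belief in that label.
        scores = [math.exp(beta * sum(adj[i][j] * q[j][c] for j in range(n)))
                  for c in range(k)]
        z = sum(scores)
        new_q.append([s / z for s in scores])
    q = new_q

labels = [max(range(k), key=lambda c: q[i][c]) for i in range(n)]
print(labels)
```

Each triangle converges to a single label, and the two triangles end up in different clusters.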
Bayesian Inference for Discrete Mixed Graph Models: Normit Networks, Observable Independencies and Infinite Mixtures
Abstract
Directed mixed graphs are graphical representations that include directed and bidirected edges. Such a class is motivated by dependencies that arise when hidden common causes are marginalized out of a distribution. In previous work, we introduced an efficient Monte Carlo algorithm for sampling from Gaussian mixed graph models. An analogous model for discrete distributions is likely to be doubly-intractable, in the sense that even a single Markov chain Monte Carlo step might have a computational cost that scales exponentially with the number of variables. Instead, we build upon our results on Gaussian distributions to describe algorithms and priors for discrete binary and ordinal modeling. The models we describe are based on link functions, where a multivariate Gaussian distribution encoded by a mixed graph is projected into a discrete space. To account for flexible discrete distributions, we embed this model within a Dirichlet process mixture of Gaussians.
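The link-function construction can be sketched in miniature (an illustration with made-up parameters, not the paper's algorithm): draw a correlated bivariate Gaussian by sharing a latent component, then threshold each coordinate at zero, so that correlation in the latent Gaussian surfaces as dependence between the binary observations.

```python
import random

random.seed(1)

n = 50_000
rho = 0.8  # latent correlation (illustrative)
# z_i = sqrt(rho)*h + sqrt(1-rho)*e_i gives corr(z1, z2) = rho.
both_one = y1_one = y2_one = 0
for _ in range(n):
    h = random.gauss(0, 1)
    z1 = (rho ** 0.5) * h + ((1 - rho) ** 0.5) * random.gauss(0, 1)
    z2 = (rho ** 0.5) * h + ((1 - rho) ** 0.5) * random.gauss(0, 1)
    y1, y2 = int(z1 > 0), int(z2 > 0)   # probit-style thresholding
    both_one += y1 * y2
    y1_one += y1
    y2_one += y2

p11 = both_one / n
p1, p2 = y1_one / n, y2_one / n
# Under independence p11 would be ~0.25; the latent correlation lifts it
# to 1/4 + arcsin(rho)/(2*pi), roughly 0.40 for rho = 0.8.
print(round(p11, 2), round(p1 * p2, 2))
```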
Principled selection of impure measures for consistent learning of linear latent variable models
Abstract
In previous work, we developed a principled way of learning the causal structure of linear latent variable models (Silva et al., 2006). However, we considered only models with pure measures. Pure measures are observed variables that measure no more than one latent variable. This paper presents theoretical extensions that justify the selection of some types of impure measures, allowing us to discover hidden variables that could not be identified in the previous case.
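The role of purity can be illustrated with population covariances of a one-factor model (a hedged sketch with made-up loadings; it shows the kind of constraint such procedures exploit, not the paper's method): for four pure measures x_i = λ_i·L + e_i, the tetrad constraint cov(x1,x2)·cov(x3,x4) = cov(x1,x3)·cov(x2,x4) holds exactly, and it breaks once two measures are made impure by sharing a second, independent latent.

```python
# One latent L1 with four measures x1..x4 (indices 0..3 in code),
# x_i = lam[i] * L1 + e_i, unit-variance latents, independent errors:
# cov(x_i, x_j) = lam[i] * lam[j] for i != j.
lam = [0.9, 0.8, 0.7, 0.6]   # made-up loadings

def cov_pure(i, j):
    return lam[i] * lam[j]

# All measures pure: the tetrad cov12*cov34 - cov13*cov24 vanishes exactly.
t_pure = cov_pure(0, 1) * cov_pure(2, 3) - cov_pure(0, 2) * cov_pure(1, 3)

# Now make x3 and x4 impure: both also load on a second latent L2
# (independent of L1) with extra loadings gam3, gam4, which adds
# gam3*gam4 to cov(x3, x4) only.
gam3, gam4 = 0.5, 0.4

def cov_impure(i, j):
    c = lam[i] * lam[j]
    if {i, j} == {2, 3}:      # the two impure measures share L2
        c += gam3 * gam4
    return c

t_impure = (cov_impure(0, 1) * cov_impure(2, 3)
            - cov_impure(0, 2) * cov_impure(1, 3))

print(t_pure, t_impure)   # tetrad holds for pure measures, fails otherwise
```

Analytically, t_impure = lam[0]·lam[1]·gam3·gam4, so the violation is exactly proportional to the impure loadings.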