Results 1 – 6 of 6
ANCESTRAL GRAPH MARKOV MODELS, 2002
Abstract

Cited by 76 (18 self)
This paper introduces a class of graphical independence models that is closed under marginalization and conditioning but that contains all DAG independence models. This class of graphs, called maximal ancestral graphs, has two attractive features: there is at most one edge between each pair of vertices; every missing edge corresponds to an independence relation. These features lead to a simple parameterization of the corresponding set of distributions in the Gaussian case.
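In the Gaussian case, the reading of "every missing edge corresponds to an independence relation" can be made concrete with partial correlations, which vanish exactly at conditional independences of a multivariate normal. The sketch below is an illustrative NumPy check on a simulated chain, not the paper's ancestral-graph parameterization:

```python
import numpy as np

def partial_corr(cov, i, j, cond):
    """Partial correlation of variables i and j given the variables in cond,
    read off the inverse of the relevant covariance sub-block."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(cov[np.ix_(idx, idx)])
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

# Linear-Gaussian chain X -> Y -> Z: Y = X + noise, Z = Y + noise.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = y + rng.normal(size=n)
cov = np.cov(np.vstack([x, y, z]))

# X and Z are dependent marginally but independent given Y:
print(partial_corr(cov, 0, 2, []))   # noticeably nonzero (about 0.58)
print(partial_corr(cov, 0, 2, [1]))  # close to 0
```

The missing X–Z edge in the chain shows up as a (near-)zero partial correlation once Y is conditioned on.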
Dimension Correction for Hierarchical Latent Class Models, 2002
Abstract

Cited by 15 (5 self)
Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When hidden variables are present, however, standard dimension might no longer be appropriate.
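The standard dimension of a discrete Bayesian network is the number of independent parameters: each node contributes (states − 1) free probabilities per configuration of its parents. A minimal sketch, with an illustrative latent class model (node names are made up for the example):

```python
from math import prod

def standard_dimension(card, parents):
    """Number of free parameters of a discrete Bayesian network.
    card: dict node -> number of states; parents: dict node -> list of parents."""
    return sum((card[v] - 1) * prod(card[p] for p in parents[v]) for v in card)

# Latent class model: one binary class node C with three binary observed children.
card = {"C": 2, "X1": 2, "X2": 2, "X3": 2}
parents = {"C": [], "X1": ["C"], "X2": ["C"], "X3": ["C"]}
print(standard_dimension(card, parents))  # 7 = 1 for C + 2 for each child
```

When C is latent, this count of 7 is exactly the figure the paper argues may overstate the model's complexity.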
Effective Dimensions of Hierarchical Latent Class Models
Journal of Artificial Intelligence Research, 2002
Abstract

Cited by 5 (2 self)
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well-justified model selection criteria for HLC models in particular, or for Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that model selection can sometimes be improved if the standard model dimension is replaced with the effective model dimension in the penalty term of the BIC score.
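The substitution in the penalty term is mechanical: BIC is the maximized log-likelihood minus (dimension / 2) · ln N, so a smaller effective dimension yields a milder penalty. A sketch with made-up numbers:

```python
from math import log

def bic(loglik, dim, n):
    """BIC score: maximized log-likelihood penalized by (dim / 2) * ln(n)."""
    return loglik - 0.5 * dim * log(n)

# Hypothetical fit: same log-likelihood, standard vs. effective dimension.
loglik, n = -1234.5, 500
print(bic(loglik, 25, n))  # penalty computed from the standard dimension
print(bic(loglik, 19, n))  # smaller penalty from a lower effective dimension
```

With equal fit, the model scored with the effective dimension is penalized less, which is how the corrected score can change which model is selected.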
Effective Dimensions of Partially Observed Polytrees
In Proceedings of the Seventh European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2003
Abstract

Cited by 1 (0 self)
Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When latent variables are present, however, the standard dimension might no longer be appropriate. Instead, an effective dimension should be used [5]. Zhang & Kocka [13] showed how to compute the effective dimensions of partially observed trees. In this paper we solve the same problem for partially observed polytrees.
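One common way to read "effective dimension" (following Geiger, Heckerman and Meek's rank characterization, which references [5] and [13] build on) is as the rank of the Jacobian of the map from model parameters to the observed joint distribution. The sketch below estimates that rank numerically for a tiny latent model: one binary hidden class with two binary observed children, whose 5 standard parameters collapse to an effective dimension of 3:

```python
import numpy as np

def joint(theta):
    """Map the parameters of a 2-class latent model with two binary observed
    variables to the observed joint p(x1, x2), flattened to 4 entries.
    theta = (p(H=1), p(X1=1|H=0), p(X1=1|H=1), p(X2=1|H=0), p(X2=1|H=1))."""
    c, a0, a1, b0, b1 = theta
    p = np.zeros(4)
    for ph, a, b in [(1 - c, a0, b0), (c, a1, b1)]:
        for x1 in (0, 1):
            for x2 in (0, 1):
                p[2 * x1 + x2] += ph * (a if x1 else 1 - a) * (b if x2 else 1 - b)
    return p

def effective_dimension(theta, eps=1e-6):
    """Numerical rank of the Jacobian of the parameter-to-distribution map."""
    theta = np.asarray(theta, float)
    J = np.empty((4, len(theta)))
    for k in range(len(theta)):
        t = theta.copy()
        t[k] += eps
        J[:, k] = (joint(t) - joint(theta)) / eps  # forward difference
    return np.linalg.matrix_rank(J, tol=1e-4)

theta = [0.3, 0.2, 0.7, 0.4, 0.9]
print(effective_dimension(theta))  # 3, not the 5 standard parameters
```

The paper's contribution is computing such effective dimensions for partially observed polytrees without resorting to this kind of generic numerical rank estimate.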
Abstract
We show that if a strictly positive joint probability distribution for a set of binary variables factors according to a tree, then vertex separation represents all and only the independence relations encoded in the distribution. The same result is shown to hold for multivariate nondegenerate normal distributions. Our proof uses a new property of conditional independence that holds for these two classes of probability distributions.
AMS Mathematics Subject Classification: 60E05.
Key words and phrases: Conditional independence, graphical models, Markov models.
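Vertex separation itself is easy to test: in a tree there is a unique path between any two vertices, so a set separates them exactly when it meets that path. A minimal sketch (graph representation and names are illustrative, not from the paper):

```python
def separates(tree, a, b, s):
    """In an undirected tree (dict node -> set of neighbours), test whether
    the vertex set s blocks the unique path from a to b.
    Assumes a and b are not themselves in s."""
    stack, seen = [a], {a}
    while stack:  # depth-first search that refuses to enter s
        v = stack.pop()
        if v == b:
            return False  # reached b without passing through s
        for w in tree[v]:
            if w not in seen and w not in s:
                seen.add(w)
                stack.append(w)
    return True

# Path graph 1 - 2 - 3: vertex 2 separates 1 from 3, the empty set does not.
tree = {1: {2}, 2: {1, 3}, 3: {2}}
print(separates(tree, 1, 3, {2}))    # True
print(separates(tree, 1, 3, set()))  # False
```

Under the paper's result, for the two distribution classes considered, this graph-theoretic test answers every conditional independence query about a tree-factoring distribution.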