Results 1–9 of 9

Ancestral Graph Markov Models
2002
Cited by 95 (18 self)
This paper introduces a class of graphical independence models that is closed under marginalization and conditioning but that contains all DAG independence models. This class of graphs, called maximal ancestral graphs, has two attractive features: there is at most one edge between each pair of vertices; every missing edge corresponds to an independence relation. These features lead to a simple parameterization of the corresponding set of distributions in the Gaussian case.
Dimension Correction for Hierarchical Latent Class Models
2002
Cited by 16 (5 self)
Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When hidden variables are present, however, the standard dimension might no longer be appropriate.
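As a concrete illustration of the standard dimension mentioned in this abstract, here is a minimal Python sketch (the function name and the example network are ours, not the paper's): in a fully observed discrete Bayesian network, a node with cardinality k and q joint parent configurations contributes (k − 1)·q independent parameters.

```python
# Standard dimension of a fully observed discrete Bayesian network:
# each node with cardinality k and q parent configurations contributes
# (k - 1) * q independent parameters.
def standard_dimension(cardinalities, parents):
    dim = 0
    for node, card in cardinalities.items():
        q = 1
        for p in parents.get(node, []):
            q *= cardinalities[p]
        dim += (card - 1) * q
    return dim

# Chain A -> B -> C, all binary: 1 + 2 + 2 = 5 independent parameters.
cards = {"A": 2, "B": 2, "C": 2}
pars = {"B": ["A"], "C": ["B"]}
print(standard_dimension(cards, pars))  # 5
```

When some of these variables are hidden, this count overstates the complexity of the marginal model over the observed variables, which is what motivates the correction the paper studies.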
Effective Dimensions of Hierarchical Latent Class Models
Journal of Artificial Intelligence Research, 2002
Cited by 5 (2 self)
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well justified model selection criteria for HLC models in particular and Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that sometimes model selection can be improved if standard model dimension is replaced with effective model dimension in the penalty term of the BIC score.
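The BIC penalty term this abstract refers to can be sketched generically as follows (the `bic` function and the numbers are illustrative assumptions, not the paper's code): replacing the standard dimension with a smaller effective dimension weakens the penalty, which can change which model is selected.

```python
import math

def bic(log_likelihood, dim, n):
    # BIC score: maximized log-likelihood penalized by (dim / 2) * log(sample size).
    return log_likelihood - 0.5 * dim * math.log(n)

# Same fit, two notions of model complexity: a smaller effective
# dimension yields a milder penalty and hence a higher score.
n = 1000
score_standard = bic(-2300.0, dim=25, n=n)   # standard dimension
score_effective = bic(-2300.0, dim=19, n=n)  # effective dimension
print(score_effective > score_standard)  # True
```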
Perfect tree-like Markovian distributions
In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, 2000
Cited by 4 (0 self)
We show that if a strictly positive joint probability distribution for a set of binary random variables factors according to a tree, then vertex separation represents all and only the independence relations encoded in the distribution. The same result is shown to hold also for multivariate strictly positive normal distributions. Our proof uses a new property of conditional independence that holds for these two classes of probability distributions.
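The vertex-separation criterion this abstract refers to is easy to state operationally: in a tree, X and Y are separated by a set Z exactly when some vertex of Z lies on the unique X–Y path. A minimal sketch, with helper names of our own choosing:

```python
# In a tree, X and Y are vertex-separated by Z exactly when some
# vertex of Z lies on the unique path between X and Y.
def separated(tree, x, y, z):
    # tree: adjacency dict mapping each vertex to its neighbours.
    def path(a, b, seen):
        if a == b:
            return [a]
        for nb in tree[a]:
            if nb not in seen:
                p = path(nb, b, seen | {nb})
                if p:
                    return [a] + p
        return None
    # Interior vertices of the unique x-y path:
    return any(v in z for v in path(x, y, {x})[1:-1])

# Path graph A - B - C: A and C are separated by {B}, but not by the empty set.
t = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(separated(t, "A", "C", {"B"}))  # True
```

For the distributions the paper studies, this graphical test coincides exactly with conditional independence in the distribution.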
Effective Dimensions of Partially Observed Polytrees
In Proceedings of the Seventh European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2003
Cited by 1 (0 self)
Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When latent variables are present, however, the standard dimension might no longer be appropriate. Instead, an effective dimension should be used [5]. Zhang & Kocka [13] showed how to compute the effective dimensions of partially observed trees. In this paper we solve the same problem for partially observed polytrees.
unknown title
We show that if a strictly positive joint probability distribution for a set of binary variables factors according to a tree, then vertex separation represents all and only the independence relations encoded in the distribution. The same result is shown to hold also for multivariate nondegenerate normal distributions. Our proof uses a new property of conditional independence that holds for these two classes of probability distributions. AMS Mathematics Subject Classification: 60E05. Key words and phrases: conditional independence, graphical models, Markov models.
unknown title
2003
We consider models for the covariance between two blocks of variables. Such models are often used in situations where latent variables are believed to be present. In this paper we characterize exactly the set of distributions given by a class of models with one-dimensional latent variables. These models relate two blocks of observed variables, modeling only the cross-covariance matrix. We describe the relation of this model to the singular value decomposition of the cross-covariance matrix. We show that, although the model is underidentified, useful information may be extracted. We further consider an alternative parameterization in which one latent variable is associated with each block, and we extend the result to models with r-dimensional latent variables.
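The connection to the singular value decomposition can be illustrated with a small NumPy sketch (the matrix below is made up for illustration): a one-dimensional latent variable can only induce a rank-one cross-covariance, so the closest such model corresponds to keeping the top singular triple.

```python
import numpy as np

# Hypothetical cross-covariance between a block of 3 and a block of 2 variables.
sigma_xy = np.array([[2.0, 1.0],
                     [1.0, 2.0],
                     [0.5, 0.5]])

u, s, vt = np.linalg.svd(sigma_xy, full_matrices=False)
# A one-dimensional latent variable induces a rank-1 cross-covariance;
# the best rank-1 approximation keeps only the top singular value.
rank1 = s[0] * np.outer(u[:, 0], vt[0])
print(np.linalg.matrix_rank(rank1))  # 1
```

Models with r-dimensional latent variables correspond, in the same spirit, to keeping the top r singular triples.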
unknown title
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well justified model selection criteria for HLC models in particular and Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that sometimes model selection can be improved if standard model dimension is replaced with effective model dimension in the penalty term of the BIC score. Effective dimensions are difficult to compute. In this paper, we prove a theorem that relates the effective dimension of an HLC model to the effective dimensions of a number ...