ANCESTRAL GRAPH MARKOV MODELS, 2002. Cited by 79 (16 self).
Abstract: This paper introduces a class of graphical independence models that is closed under marginalization and conditioning but that contains all DAG independence models. This class of graphs, called maximal ancestral graphs, has two attractive features: there is at most one edge between each pair of vertices, and every missing edge corresponds to an independence relation. These features lead to a simple parameterization of the corresponding set of distributions in the Gaussian case.
Algebraic Geometry of Bayesian Networks. Journal of Symbolic Computation, 2005. Cited by 61 (8 self).
Abstract: We study the algebraic varieties defined by the conditional independence statements of Bayesian networks. A complete algebraic classification is given for Bayesian networks on at most five random variables. Hidden variables are related to the geometry of higher secant varieties.
Stratified exponential families: Graphical models and model selection. Annals of Statistics, 2001.
Asymptotic Model Selection for Naive Bayesian Networks. In Proc. of the 18th Conference on Uncertainty in Artificial Intelligence (UAI-02), 2002. Cited by 34 (3 self).
Abstract: We develop a closed-form asymptotic formula to compute the marginal likelihood of data given a naive Bayesian network model with two hidden states and binary features.
Dimension Correction for Hierarchical Latent Class Models, 2002. Cited by 16 (5 self).
Abstract: Model complexity is an important factor to consider when selecting among graphical models. When all variables are observed, the complexity of a model can be measured by its standard dimension, i.e. the number of independent parameters. When hidden variables are present, however, the standard dimension may no longer be appropriate.
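As an illustrative aside (not taken from the entry itself): for a latent class model with a single hidden variable of cardinality $r$ and $n$ observed binary features, the standard dimension simply counts the free parameters,

```latex
d_s \;=\; \underbrace{(r-1)}_{\text{marginal } P(H)} \;+\; \underbrace{r \cdot n}_{\text{conditionals } P(X_i = 1 \mid H)} .
```

For example, $r = 2$ hidden states and $n = 3$ binary features give $d_s = 1 + 6 = 7$; the point of the paper is that when $H$ is unobserved, the model may have fewer effective degrees of freedom than this count suggests.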
Population Markov Chain Monte Carlo. Machine Learning, 2003. Cited by 12 (2 self).
Abstract: Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings sampler (MHS), an evolutionary algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than the MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs not satisfying physics-inspired local reversibility conditions.
Keywords: Markov chain Monte Carlo, Metropolis-Hastings algorithm, graphical probabilistic models, Bayesian networks, Bayesian learning, evolutionary algorithms.
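The baseline Metropolis-Hastings sampler that popMCMC builds on can be sketched as follows. This is a minimal illustrative 1-D random-walk sampler, not the paper's popMCMC (which additionally shares proposal information across a population of such samplers); the target density and step size here are placeholders.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Metropolis-Hastings with a symmetric Gaussian random-walk proposal.

    log_target: unnormalized log-density of the target distribution.
    Returns the list of sampled states (one per step).
    """
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_new = log_target(x_new)
        # Accept with probability min(1, target(x_new) / target(x)).
        if math.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy target: standard normal, log-density up to an additive constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
```

With the fixed seed the chain's sample mean lands near 0, the mean of the standard-normal target; popMCMC's contribution is to run many such chains and let the population statistics shape each chain's proposal.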
Maximum likelihood estimation in latent class models for contingency table data. In Algebraic and Geometric Methods in Statistics, 2008.
Automated analytic asymptotic evaluation of the marginal likelihood of latent models. In C. Meek & U. Kjorulff (Eds.), Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence, 2003. Cited by 5 (0 self).
Abstract: We present two algorithms for analytic asymptotic evaluation of the marginal likelihood of data given a Bayesian network with hidden nodes. As shown by previous work, this evaluation is particularly hard because for these models the asymptotic approximation of the marginal likelihood deviates from the standard BIC score. Our algorithms compute the regular dimensionality drop for latent models and the non-standard approximation formulas for singular statistics of these models. The presented algorithms are implemented in Matlab and Maple, and their usage is demonstrated on several examples. In this paper we address the problem of computing analytic asymptotic approximations of marginal likelihoods and present two computer programs that compute such approximations. Our algorithms are developed in the context of Bayesian networks with hidden variables, where the evaluation of marginal likelihood was shown to be particularly hard (Rusakov & Geiger, 2002). Consider the evaluation of the marginal likelihood given a Bayesian network model. Under some regularity conditions, the asymptotic form of the log marginal likelihood for Bayesian network models without hidden variables is specified by the standard BIC formula.
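The standard BIC approximation this abstract refers to is commonly written as follows, with $\hat{\theta}$ the maximum-likelihood estimate, $d$ the number of independent parameters of model $M$, and $N$ the sample size:

```latex
\log P(D \mid M) \;=\; \log P(D \mid \hat{\theta}, M) \;-\; \frac{d}{2} \log N \;+\; O(1).
```

The paper's point is that for models with hidden variables this regular form can fail: the effective coefficient of $\log N$ drops below $d/2$ at singular points of the parameterization, which is what the two algorithms compute.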
Effective Dimensions of Hierarchical Latent Class Models. Journal of Artificial Intelligence Research, 2002. Cited by 5 (2 self).
Abstract: Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well-justified model selection criteria for HLC models in particular, or for Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that model selection can sometimes be improved if the standard model dimension is replaced with the effective model dimension in the penalty term of the BIC score.