Results 11-20 of 66
Discrete chain graph models
Bernoulli, 2009
Cited by 11 (1 self)
The statistical literature discusses different types of Markov properties for chain graphs that lead to four possible classes of chain graph Markov models. The different models are rather well understood when the observations are continuous and multivariate normal, and it is also known that one model class, referred to as models of LWF (Lauritzen–Wermuth–Frydenberg) or block concentration type, yields discrete models for categorical data that are smooth. This paper considers the structural properties of the discrete models based on the three alternative Markov properties. It is shown by example that two of the alternative Markov properties can lead to nonsmooth models. The remaining model class, which can be viewed as a discrete version of multivariate regressions, is proven to comprise only smooth models. The proof employs a simple change of coordinates that also reveals that the model’s likelihood function is unimodal if the chain components of the graph are complete sets.
Towards Characterizing Markov Equivalence Classes of Directed Acyclic Graphs with Latent Variables
Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005
Cited by 9 (5 self)
It is well known that there may be many causal explanations that are consistent with a given set of data. Recent work has sought to capture the common aspects of these explanations in a single representation. In this paper, we address what is less well known: how do the relationships common to every causal explanation among the observed variables of some DAG process change in the presence of latent variables? Ancestral graphs provide a class of graphs that can encode conditional independence relations that arise in DAG models with latent and selection variables. We present a set of orientation rules that construct the Markov equivalence class representative for ancestral graphs, given a member of the equivalence class. These rules are sound and complete. We also show that when the equivalence class includes a DAG, the equivalence class representative is the essential graph for that DAG.
Graphical methods for efficient likelihood inference in Gaussian covariance models
Journal of Machine Learning Research, 2008
Cited by 8 (3 self)
In graphical modelling, a bidirected graph encodes marginal independences among random variables that are identified with the vertices of the graph. We show how to transform a bidirected graph into a maximal ancestral graph that (i) represents the same independence structure as the original bidirected graph, and (ii) minimizes the number of arrowheads among all ancestral graphs satisfying (i). Here the number of arrowheads of an ancestral graph is the number of directed edges plus twice the number of bidirected edges. In Gaussian models, this construction can be used for more efficient iterative maximization of the likelihood function and to determine when maximum likelihood estimates are equal to empirical counterparts.
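The arrowhead count used in this abstract is straightforward to state in code. The sketch below assumes a simple edge-list representation of a mixed graph; the function name and representation are illustrative, not from the paper:

```python
# Hypothetical edge-list representation of a mixed graph (illustrative only).
# Directed edges are (tail, head) pairs; each bidirected edge carries two arrowheads.

def arrowhead_count(directed_edges, bidirected_edges):
    """Number of arrowheads: one per directed edge, two per bidirected edge."""
    return len(directed_edges) + 2 * len(bidirected_edges)

# Example: one directed edge a -> b plus one bidirected edge b <-> c.
print(arrowhead_count([("a", "b")], [("b", "c")]))  # 3
```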
Estimating high-dimensional intervention effects from observational data
 The Annals of Statistics
Cited by 8 (2 self)
We assume that we have observational data generated from an unknown underlying directed acyclic graph (DAG) model. A DAG is typically not identifiable from observational data, but it is possible to consistently estimate the equivalence class of a DAG. Moreover, for any given DAG, causal effects can be estimated using intervention calculus. In this paper, we combine these two parts. For each DAG in the estimated equivalence class, we use intervention calculus to estimate the causal effects of the covariates on the response. This yields a collection of estimated causal effects for each covariate. We show that the distinct values in this set can be consistently estimated by an algorithm that uses only local information of the graph. This local approach is computationally fast and feasible in high-dimensional problems. We propose to use summary measures of the set of possible causal effects to determine variable importance. In particular, we use the minimum absolute value of this set, since that is a lower bound on the size of the causal effect. We demonstrate the merits of our methods in a simulation study and on a data set about riboflavin production.
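The parent-adjustment step behind intervention calculus in a known DAG can be sketched in a few lines for a linear Gaussian SEM. This is a toy illustration with made-up coefficients, not the paper's algorithm or data; it uses the standard fact that in a linear SEM the total causal effect of X on Y is the coefficient of X when Y is regressed on X together with X's parents:

```python
import numpy as np

# Toy linear Gaussian SEM (coefficients are made up for illustration):
#   Z -> X (0.8),  X -> Y (0.5),  Z -> Y (0.3), so Z confounds X and Y.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.5 * x + 0.3 * z + rng.normal(size=n)

# Regress Y on X and Pa(X) = {Z}; the X-coefficient estimates the
# total causal effect of X on Y (true value 0.5 in this toy model).
design = np.column_stack([x, z])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(round(float(coef[0]), 2))  # close to 0.5
```

Regressing Y on X alone would instead pick up the confounded association through Z, which is why the parents of X enter the adjustment set.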
The hidden life of latent variables: Bayesian learning with mixed graph models
2008
Cited by 7 (3 self)
Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot use a DAG to represent the independencies over a subset of variables in a larger DAG. Directed mixed graphs (DMGs) are a representation that includes DAGs as a special case, and overcomes this limitation. This paper introduces algorithms for performing Bayesian inference in Gaussian and probit DMG models. An important requirement for inference is the characterization of the distribution over parameters of the models. We introduce a new distribution for covariance matrices of Gaussian DMGs. We discuss and illustrate how several Bayesian machine learning tasks can benefit from the principle presented here: the power to model dependencies that are generated from hidden variables, but without necessarily modelling such variables explicitly.
Algebraic Techniques for Gaussian Models
2006
Cited by 5 (2 self)
Many statistical models are algebraic in that they are defined by polynomial constraints or by parameterizations that are polynomial or rational maps. This opens the door for tools from computational algebraic geometry. These tools can be employed to solve equation systems arising in maximum likelihood estimation and parameter identification, but they also permit the study of model singularities at which standard asymptotic approximations to the distribution of estimators and test statistics may no longer be valid. This paper demonstrates such applications of algebraic geometry in selected examples of Gaussian models, thereby complementing the existing literature on models for discrete variables. MSC 2000: 62H05, 62H12. Key words: algebraic statistics, multivariate normal distribution, parameter identification, singularities.
Bayesian inference for Gaussian mixed graph models
Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI), 2006
Cited by 5 (3 self)
We introduce priors and algorithms to perform Bayesian inference in Gaussian models defined by acyclic directed mixed graphs. Such a class of graphs, composed of directed and bidirected edges, is a representation of conditional independencies that is closed under marginalization and arises naturally from causal models which allow for unmeasured confounding. Monte Carlo methods and a variational approximation for such models are presented. Our algorithms for Bayesian inference allow the evaluation of posterior distributions for several quantities of interest, including causal effects that are not identifiable from data alone but could otherwise be inferred where informative prior knowledge about confounding is available.
A theoretical study of Y structures for causal discovery
Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2006
Cited by 5 (3 self)
Causal discovery from observational data in the presence of unobserved variables is challenging. Identification of so-called Y substructures is a sufficient condition for ascertaining some causal relations in the large sample limit, without the assumption of no hidden common causes. An example of a Y substructure is A → C, B → C, C → D. This paper describes the first asymptotically reliable and computationally feasible score-based search for discrete Y structures that does not assume that there are no unobserved common causes. The search applies to any parameterization of a directed acyclic graph (DAG) whose scores satisfy two properties: any DAG that can represent the distribution beats any DAG that cannot, and of two DAGs that both represent the distribution, the one with fewer parameters wins. In this framework there is no need to assign scores to causal structures with unobserved common causes. The paper also describes how the existence of a Y structure shows the presence of an unconfounded causal relation, without assuming that there are no hidden common causes.
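The quoted Y pattern (A → C, B → C, C → D) can be located in a DAG by a direct search. The sketch below assumes an edge-list input and adds the usual unshielded-collider non-adjacency conditions on A and B as an assumption of this illustration; the paper's precise definition may differ:

```python
# Illustrative search for Y substructures in a DAG given as a list of
# directed edges (tail, head). Non-adjacency of A and B, and of A, B with D,
# is assumed here as part of the pattern (an assumption of this sketch).
from itertools import combinations

def y_structures(edges):
    """Yield (a, b, c, d) with a->c, b->c, c->d and the non-adjacency conditions."""
    adjacent = {frozenset(e) for e in edges}
    parents, children = {}, {}
    for tail, head in edges:
        parents.setdefault(head, set()).add(tail)
        children.setdefault(tail, set()).add(head)
    for c, pa in parents.items():
        for a, b in combinations(sorted(pa), 2):
            if frozenset((a, b)) in adjacent:
                continue  # a and b adjacent: shielded, not a Y pattern
            for d in children.get(c, ()):
                if frozenset((a, d)) in adjacent or frozenset((b, d)) in adjacent:
                    continue
                yield (a, b, c, d)

# The abstract's example graph contains exactly one Y substructure.
print(list(y_structures([("A", "C"), ("B", "C"), ("C", "D")])))
# [('A', 'B', 'C', 'D')]
```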
Generating Markov Equivalent Maximal Ancestral Graphs by Single Edge Replacement
Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005
Cited by 5 (0 self)
Maximal ancestral graphs (MAGs) are used to encode conditional independence relations in DAG models with hidden variables. Different MAGs may represent the same set of conditional independences and are called Markov equivalent. This paper considers MAGs without undirected edges and shows conditions under which an arrow in a MAG can be reversed or interchanged with a bidirected edge so as to yield a Markov equivalent MAG.
Automatic discovery of latent variable models
Machine Learning Department, CMU, 2005
Cited by 5 (4 self)