Results 11 – 18 of 18
Sequences of regressions and their independences
, 2012
"... Ordered sequences of univariate or multivariate regressions provide statistical modelsfor analysingdata fromrandomized, possiblysequential interventions, from cohort or multiwave panel studies, but also from crosssectional or retrospective studies. Conditional independences are captured by what we ..."
Abstract

Cited by 4 (1 self)
Ordered sequences of univariate or multivariate regressions provide statistical models for analysing data from randomized, possibly sequential interventions, from cohort or multi-wave panel studies, but also from cross-sectional or retrospective studies. Conditional independences are captured by what we name regression graphs, provided the generated distribution shares some properties with a joint Gaussian distribution. Regression graphs extend purely directed, acyclic graphs by two types of undirected graph, one type for components of joint responses and the other for components of the context vector variable. We review the special features and the history of regression graphs, prove criteria for Markov equivalence and discuss the notion of simpler statistical covering models. Knowledge of Markov equivalence provides alternative interpretations of a given sequence of regressions, is essential for machine-learning strategies and permits the use of the simple graphical criteria of regression graphs on graphs for which the corresponding criteria are in general more complex. Under the known conditions that a Markov equivalent directed acyclic graph exists for any given regression graph, we give a polynomial-time algorithm to find one such graph.
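The Markov-equivalence criterion this abstract builds on has a well-known special case for purely directed acyclic graphs: two DAGs are Markov equivalent exactly when they share the same skeleton and the same v-structures (the classical Verma–Pearl criterion). A minimal sketch, assuming DAGs encoded as sets of (parent, child) pairs; the function names and encodings are ours, not the paper's:

```python
def skeleton(edges):
    """Undirected version of a DAG given as a set of (parent, child) pairs."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Colliders a -> c <- b where a and b are themselves non-adjacent."""
    skel = skeleton(edges)
    parents = {}
    for a, c in edges:
        parents.setdefault(c, set()).add(a)
    colliders = set()
    for c, ps in parents.items():
        for a in ps:
            for b in ps:
                if a < b and frozenset((a, b)) not in skel:
                    colliders.add((a, c, b))
    return colliders

def markov_equivalent(g1, g2):
    """Verma-Pearl: same skeleton and same v-structures."""
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

chain = {("x", "y"), ("y", "z")}           # x -> y -> z
reversed_chain = {("y", "x"), ("z", "y")}  # x <- y <- z: same independences
collider = {("x", "y"), ("z", "y")}        # x -> y <- z: a v-structure
print(markov_equivalent(chain, reversed_chain))  # True
print(markov_equivalent(chain, collider))        # False
```

Checking this criterion is polynomial in the graph size, which is in the spirit of the polynomial-time algorithm the abstract announces for the more general regression graphs.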
Probability distributions with summary graph structure
, 2008
"... A joint density of many variables may satisfy a possibly large set of independence statements, called its independence structure. Often the structure of interest is representable by a graph that consists of nodes representing variables and of edges that couple node pairs. We consider joint densities ..."
Abstract

Cited by 4 (2 self)
A joint density of many variables may satisfy a possibly large set of independence statements, called its independence structure. Often the structure of interest is representable by a graph that consists of nodes representing variables and of edges that couple node pairs. We consider joint densities of this type, generated by a stepwise process in which all variables and dependences of interest are included. Otherwise, there are no constraints on the type of variables or on the form of the generating conditional densities. For the joint density that then results after marginalising and conditioning, we derive what we name the summary graph. It is seen to capture precisely the independence structure implied by the generating process, it identifies dependences which remain undistorted due to direct or indirect confounding, and it alerts to such, possibly severe, distortions in other parametrizations. Summary graphs preserve their form after marginalising and conditioning, and they include multivariate regression chain graphs as special cases. We use operators for matrix representations of graphs to derive matrix results and translate these into special types of path.
W.P.: Parameterizations and fitting of bidirected graph models to categorical data
 Scand. J. Stat
, 2009
"... ..."
Graphical Answers to . . .
, 2004
"... In graphical modelling, a bidirected graph encodes marginal independences among random variables that are identified with the vertices of the graph (alternatively graphs with dashed edges have been used for this purpose). Bidirected graphs are special instances of ancestral graphs, which are mixed ..."
Abstract
In graphical modelling, a bidirected graph encodes marginal independences among random variables that are identified with the vertices of the graph (alternatively graphs with dashed edges have been used for this purpose). Bidirected graphs are special instances of ancestral graphs, which are mixed graphs with undirected, directed, and bidirected edges. In this paper, we show how simplicial sets and the newly defined orientable edges can be used to construct a maximal ancestral graph that is Markov equivalent to a given bidirected graph, i.e. the independence models associated with the two graphs coincide, and such that the number of arrowheads is minimal. Here the number of arrowheads of an ancestral graph is the number of directed edges plus twice the number of bidirected edges. This construction yields an immediate check whether the original bidirected graph is Markov equivalent to a directed acyclic graph (Bayesian network) or an undirected graph (Markov random field). Moreover, the ancestral graph construction allows for computationally more efficient maximum likelihood fitting of covariance graph models, i.e. Gaussian bidirected graph models. In particular, we give a necessary and sufficient graphical criterion for determining when an entry of the maximum likelihood estimate of the covariance matrix must equal its empirical counterpart.
Christian Borgelt and Rudolf Kruse: Abductive Inference with Probabilistic Networks
"... Abduction is a form of nondeductive logical inference. Examples given by [Peirce, 1958], who is said to have coined the term “abduction”, include the following: I once landed at a seaport in a Turkish province; and as I was walking up to the house which I was to visit, I met a man upon horseback, s ..."
Abstract
Abduction is a form of nondeductive logical inference. Examples given by [Peirce, 1958], who is said to have coined the term “abduction”, include the following: I once landed at a seaport in a Turkish province; and as I was walking up to the house which I was to visit, I met a man upon horseback, surrounded by four
Learning Bayesian Networks from Data: An Information-Theory Based Approach
, 2001
"... This paper provides algorithms that use an informationtheoretic analysis to learn Bayesian network structures from data. Based on our threephase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional indepe ..."
Abstract
This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.
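The CI tests at the core of such an information-theoretic learner can be sketched as a conditional mutual information estimate from counts, with small values taken to indicate independence. This is a generic sketch of the idea, not the paper's exact three-phase procedure; the function name and test data are illustrative:

```python
import math
from collections import Counter

def cond_mutual_info(samples):
    """Estimate I(X; Y | Z) in bits from a list of (x, y, z) observations,
    using plug-in empirical probabilities."""
    n = len(samples)
    xyz = Counter(samples)
    xz = Counter((x, z) for x, _, z in samples)
    yz = Counter((y, z) for _, y, z in samples)
    z_cnt = Counter(z for _, _, z in samples)
    cmi = 0.0
    for (x, y, z), c in xyz.items():
        p_xyz = c / n
        # ratio p(x,y,z) p(z) / (p(x,z) p(y,z)), with the 1/n factors cancelled
        cmi += p_xyz * math.log2(c * z_cnt[z] / (xz[(x, z)] * yz[(y, z)]))
    return cmi

dep = [(0, 0, 0), (1, 1, 0)] * 50  # X = Y, Z uninformative
ind = [(0, 0, 0), (1, 1, 1)] * 50  # X and Y fixed once Z is known
print(cond_mutual_info(dep))  # close to 1.0 bit
print(cond_mutual_info(ind))  # close to 0.0
```

A structure learner would compare such estimates against a small threshold to decide which edges a CI test rules out.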
Triangular systems for symmetric . . .
, 2009
"... We introduce and study distributions of sets of binary variables that are symmetric, that is each has equally probable levels. The joint distribution of these special types of binary variables, if generated by a recursive process of linear main effects is essentially parametrized in terms of margi ..."
Abstract
We introduce and study distributions of sets of binary variables that are symmetric, that is, each has equally probable levels. The joint distribution of these special types of binary variables, if generated by a recursive process of linear main effects, is essentially parametrized in terms of marginal correlations. This contrasts with the log-linear formulation of joint probabilities, in which parameters measure conditional associations given all remaining variables. The new formulation permits useful comparisons of different types of graphical Markov models and leads to a close approximation of Gaussian orthant probabilities.
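The parametrization by marginal correlations can be illustrated for the simplest case of two symmetric ±1 variables: if X2 copies X1 with probability (1 + rho)/2 and flips it otherwise, then corr(X1, X2) = rho. A hypothetical simulation sketch under that assumption (names and setup are ours, not the paper's):

```python
import random

def simulate_pair(rho, n, seed=0):
    """Draw n pairs (X1, X2): X1 uniform on {-1, +1}, and X2 equals X1
    with probability (1 + rho) / 2. Return the empirical correlation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        x1 = rng.choice((-1, 1))
        x2 = x1 if rng.random() < (1 + rho) / 2 else -x1
        total += x1 * x2
    # for symmetric +/-1 variables, E[Xi] = 0 and Var(Xi) = 1,
    # so corr(X1, X2) = E[X1 * X2]
    return total / n

print(simulate_pair(0.6, 200_000))  # close to 0.6
```

The recursive generation the abstract describes chains such linear main-effect steps, so each coefficient reappears directly as a marginal correlation.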