Results 1–10 of 22
The Art of Causal Conjecture
, 1996
Abstract

Cited by 85 (18 self)
Causal relations are regularities in the way Nature’s predictions change. Since we usually do not stand in Nature’s shoes, we usually do not observe these dynamic regularities directly. But we sometimes observe statistical regularities that are most easily explained by hypothesizing such dynamic regularities. In this chapter, I illustrate this process of causal conjecture with a few simple examples. I first consider a negative causal relation: causal uncorrelatedness. Two variables are causally uncorrelated if there are no steps in Nature’s event tree that change them both in expected value. They have, in this sense, no common causes. This implies, as we shall see, that the two variables are uncorrelated in the classical sense in every situation in the tree. When we observe that variables are uncorrelated in many different situations, then we may conjecture that this is due to their being causally uncorrelated. I will also discuss three causal relations of a positive character. These relations assert, each in a different way, that the causes (steps in Nature’s tree) that affect a certain variable X also affect another variable Y. This implies regularities in certain classical statistical predictions. The first causal relation, which I call linear sign, implies regularity in linear regression. The second, scored sign, implies regularity in conditional ...
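The claim that causally uncorrelated variables are uncorrelated in every situation can be illustrated with a small simulation (not from the chapter; the setup and coefficients are made up): two variables driven only by independent noise show near-zero sample correlation, while feeding both from a shared cause induces correlation.

```python
import random

random.seed(0)

def sample_corr(n, common_cause):
    """Empirical correlation of (X, Y) over n draws.

    With common_cause=False, X and Y are driven by independent noise
    only -- a stand-in for 'no step in Nature's tree changes both in
    expected value'.  With common_cause=True, a shared variable C
    feeds both (a common cause, hypothetical for this illustration).
    """
    xs, ys = [], []
    for _ in range(n):
        c = random.gauss(0, 1)
        x = random.gauss(0, 1) + (c if common_cause else 0.0)
        y = random.gauss(0, 1) + (c if common_cause else 0.0)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

print(abs(sample_corr(20000, common_cause=False)) < 0.05)  # True: near zero
print(sample_corr(20000, common_cause=True) > 0.3)         # True: common cause induces correlation
```

With the shared cause, the true correlation here is 1/2 (covariance 1, each variance 2), so the contrast with the independent case is stark even in a modest sample.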
Ancestral Graph Markov Models
, 2002
Abstract

Cited by 76 (18 self)
This paper introduces a class of graphical independence models that is closed under marginalization and conditioning but that contains all DAG independence models. This class of graphs, called maximal ancestral graphs, has two attractive features: there is at most one edge between each pair of vertices; every missing edge corresponds to an independence relation. These features lead to a simple parameterization of the corresponding set of distributions in the Gaussian case.
Causal Inference from Graphical Models
, 2001
Abstract

Cited by 59 (4 self)
The introduction of Bayesian networks (Pearl 1986b) and associated local computation algorithms (Lauritzen and Spiegelhalter 1988, Shenoy and Shafer 1990, Jensen, Lauritzen and Olesen 1990) has initiated a renewed interest in understanding causal concepts in connection with modelling complex stochastic systems. It has become clear that graphical models, in particular those based upon directed acyclic graphs, have natural causal interpretations and thus form a base for a language in which causal concepts can be discussed and analysed in precise terms. As a consequence there has been an explosion of writings, not primarily within mainstream statistical literature, concerned with the exploitation of this language to clarify and extend causal concepts. Among these we mention in particular books by Spirtes, Glymour and Scheines (1993), Shafer (1996), and Pearl (2000) as well as the collection of papers in Glymour and Cooper (1999). Very briefly, but fundamentally, ...
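The factorization that underlies Bayesian networks and their local computation algorithms can be shown in a few lines: the joint distribution of a DAG model is the product of each variable's conditional given its parents. The three-node network below and its numbers are hypothetical, chosen only to illustrate the product form:

```python
from itertools import product

# Hypothetical binary network: A -> B, A -> C.
# The joint factorizes as P(a, b, c) = P(a) * P(b | a) * P(c | a).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
p_c_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """Joint probability from the product of local conditionals."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_a[a][c]

# Sanity check: the product decomposition defines a proper distribution,
# and marginals fall out by summing the factored joint.
total = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=3))
p_b1 = sum(joint(a, 1, c) for a, c in product([0, 1], repeat=2))
print(round(total, 6))  # 1.0
print(round(p_b1, 2))   # 0.34
```

It is exactly this product decomposition that, as the last abstract in this list notes, fails for feedback systems.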
An Alternative Markov Property for Chain Graphs
 Scand. J. Statist
, 1996
Abstract

Cited by 49 (4 self)
Graphical Markov models use graphs, either undirected, directed, or mixed, to represent possible dependences among statistical variables. Applications of undirected graphs (UDGs) include models for spatial dependence and image analysis, while acyclic directed graphs (ADGs), which are especially convenient for statistical analysis, arise in such fields as genetics and psychometrics and as models for expert systems and Bayesian belief networks. Lauritzen, Wermuth, and Frydenberg (LWF) introduced a Markov property for chain graphs, which are mixed graphs that can be used to represent simultaneously both causal and associative dependencies and which include both UDGs and ADGs as special cases. In this paper an alternative Markov property (AMP) for chain graphs is introduced, which in some ways is a more direct extension of the ADG Markov property than is the LWF property for chain graphs.
Chain Graph Models and their Causal Interpretations
, 2001
Abstract

Cited by 48 (4 self)
Chain graphs are a natural generalization of directed acyclic graphs (DAGs) and undirected graphs. However, the apparent simplicity of chain graphs belies the subtlety of the conditional independence hypotheses that they represent. There are a number of simple and apparently plausible, but ultimately fallacious interpretations of chain graphs that are often invoked, implicitly or explicitly. These interpretations also lead to flawed methods for applying background knowledge to model selection. We present a valid interpretation by showing how the distribution corresponding to a chain graph may be generated as the equilibrium distribution of dynamic models with feedback. These dynamic interpretations lead to a simple theory of intervention, extending the theory developed for DAGs. Finally, we contrast chain graph models under this interpretation with simultaneous equation models which have traditionally been used to model feedback in econometrics. Keywords: Causal model; cha...
Graphs, Causality, And Structural Equation Models
, 1998
Abstract

Cited by 44 (14 self)
Structural equation modeling (SEM) has dominated causal analysis in the social and behavioral sciences since the 1960s. Currently, many SEM practitioners are having difficulty articulating the causal content of SEM and are seeking foundational answers.
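One small piece of the causal content in question can be made concrete with the simplest linear SEM (the names and the coefficient below are made up for illustration; this is not an example from the paper): when X has no unmodelled common cause with Y, the structural coefficient coincides with the regression slope, which a simulation recovers.

```python
import random

random.seed(1)

# Illustrative two-equation linear SEM:
#   X = u_x
#   Y = 0.7 * X + u_y        (u_x, u_y independent noise)
# Under these assumptions the structural coefficient 0.7 is
# identified as Cov(X, Y) / Var(X), i.e. the regression slope.
n = 50000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.7 * x + random.gauss(0, 1) for x in xs]

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var = sum((x - mx) ** 2 for x in xs) / n
b_hat = cov / var
print(round(b_hat, 1))  # recovers 0.7
```

The foundational disputes the abstract alludes to concern when such a slope may be read causally, not the arithmetic of estimating it.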
Using Path Diagrams as a Structural Equation Modelling Tool
, 1997
Abstract

Cited by 29 (7 self)
In this paper, we will show how path diagrams can be used to solve a number of important problems in structural equation modelling. There are a number of problems associated with structural equation modelling. These problems include: ...
Identifying Independencies in Causal Graphs with Feedback
 In Uncertainty in Artificial Intelligence: Proceedings of the Twelfth Conference
, 1996
Abstract

Cited by 19 (0 self)
We show that the d-separation criterion constitutes a valid test for conditional independence relationships that are induced by feedback systems involving discrete variables.

1 INTRODUCTION

It is well known that the d-separation test is sound and complete relative to the independencies assumed in the construction of Bayesian networks [Verma and Pearl, 1988, Geiger et al., 1990]. In other words, any d-separation condition in the network corresponds to a genuine independence condition in the underlying probability distribution and, conversely, every d-connection corresponds to a dependency in at least one distribution compatible with the network. The situation with feedback systems is more complicated, primarily because the probability distributions associated with such systems do not lend themselves to a simple product decomposition. The joint distribution of feedback systems cannot be written as a product of the conditional distributions of each child variable, given its parents. Rath...
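For the acyclic case, the d-separation test discussed here can be sketched via the standard moralization criterion. This is an illustrative implementation under that criterion (the graph encoding as parent lists and the function name are our own, and this is not the paper's construction for feedback systems):

```python
from collections import deque

def d_separated(parents, xs, ys, zs):
    """Test whether xs is d-separated from ys given zs in a DAG.

    `parents` maps each node to a list of its parents.  Moralization
    criterion: restrict to the ancestors of xs | ys | zs, marry
    co-parents, drop edge directions, delete zs, and test whether
    xs and ys are disconnected in what remains.
    """
    # Ancestors of xs | ys | zs (including those sets themselves).
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v in anc:
            continue
        anc.add(v)
        stack.extend(parents.get(v, []))
    # Moralized, undirected adjacency on the ancestral subgraph.
    adj = {v: set() for v in anc}
    for v in anc:
        ps = [p for p in parents.get(v, []) if p in anc]
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for i in range(len(ps)):          # marry parents of v
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])
    # Remove the conditioning set, then search for a path xs -> ys.
    blocked, seen = set(zs), set()
    queue = deque(x for x in xs if x not in blocked)
    while queue:
        v = queue.popleft()
        if v in seen:
            continue
        seen.add(v)
        if v in ys:
            return False
        queue.extend(w for w in adj[v] if w not in blocked and w not in seen)
    return True

# Collider A -> C <- B: A and B are d-separated marginally,
# but become d-connected once we condition on C.
g = {"A": [], "B": [], "C": ["A", "B"]}
print(d_separated(g, {"A"}, {"B"}, set()))   # True
print(d_separated(g, {"A"}, {"B"}, {"C"}))   # False
```

The collider example shows the behaviour that makes d-separation more than graph connectivity: conditioning can open a path rather than block it. The paper's contribution is that this same criterion remains valid when the graph is allowed to contain feedback cycles over discrete variables.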