Results 1–5 of 5
Foundations for Bayesian networks
, 2001
Abstract

Cited by 11 (7 self)
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. One standard approach is to interpret a Bayesian network objectively: the graph in a Bayesian network represents causality in the world and the specified probabilities are objective, empirical probabilities. Such an interpretation founders when the Bayesian network independence assumption (often called the causal Markov condition) fails to hold. In §2 I catalogue the occasions when the independence assumption fails, and show that such failures are pervasive. Next, in §3, I show that even where the independence assumption does hold objectively, an agent’s causal knowledge is unlikely to satisfy the assumption with respect to her subjective probabilities, and that slight differences between an agent’s subjective Bayesian network and an objective Bayesian network can lead to large differences between probability distributions determined by these networks. To overcome these difficulties I put forward logical Bayesian foundations in §5. I show that if the graph and probability specification in a Bayesian network are thought of as an agent’s background knowledge, then the agent is most rational if she adopts the probability distribution determined by the
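The independence assumption discussed in this abstract (the causal Markov condition) can be made concrete with a minimal numeric sketch. The chain graph A → B → C and all probability values below are illustrative assumptions, not taken from the cited chapter; the sketch builds the joint distribution from the network's factorization and checks that C is independent of A given B, as the Markov condition requires for this graph.

```python
# Minimal sketch (illustrative numbers): the causal Markov condition for the
# chain A -> B -> C implies C is independent of A given B. We build the joint
# from the conditional probability tables and verify that numerically.
from itertools import product

p_a = {0: 0.3, 1: 0.7}                       # P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1},          # P(B | A)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.6, 1: 0.4},          # P(C | B)
               1: {0: 0.25, 1: 0.75}}

# The network's factorization: P(a, b, c) = P(a) P(b | a) P(c | b)
joint = {(a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
         for a, b, c in product((0, 1), repeat=3)}

def cond_p_c(a, b, c):
    """P(C=c | A=a, B=b) computed directly from the joint distribution."""
    num = joint[(a, b, c)]
    den = sum(joint[(a, b, x)] for x in (0, 1))
    return num / den

# Markov condition check: P(C | A, B) must not depend on A.
for b, c in product((0, 1), repeat=2):
    assert abs(cond_p_c(0, b, c) - cond_p_c(1, b, c)) < 1e-12
print("C is independent of A given B, as the Markov condition requires")
```

The abstract's point is that this tidy picture breaks down when the condition fails in the world, or when an agent's subjective network only approximately matches it.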
Causal Inference and Reasoning in Causally Insufficient Systems
, 2006
Abstract

Cited by 11 (2 self)
The big question that motivates this dissertation is the following: under what conditions and to what extent can passive observations inform us of the structure of causal connections among a set of variables and of the potential outcome of an active intervention on some of the variables? The particular concern here revolves around the common kind of situations where the variables of interest, though measurable themselves, may suffer from confounding due to unobserved common causes. Relying on a graphical representation of causally insufficient systems called maximal ancestral graphs, and two well-known principles widely discussed in the literature, the causal Markov and Faithfulness conditions, we show that the FCI algorithm, a sound inference procedure in the literature for inferring features of the unknown causal structure from facts of probabilistic independence and dependence, is, with some extra sound inference rules, also complete in the sense that any feature of the causal structure left undecided by the inference procedure is indeed underdetermined by facts of probabilistic independence and dependence. In addition, we consider the issue of quantitative reasoning about effects of local interventions with the FCI-learnable features of the unknown causal structure. We improve and generalize two important pieces of work in the literature about identifying intervention effects. We also provide some preliminary study of the testability of the
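The "causally insufficient" setting this abstract targets can be illustrated with a small simulation. The structural equations below are an assumption for illustration, not from the dissertation: an unobserved common cause L drives both X and Y, so the observed pair (X, Y) is correlated even though neither variable causes the other. Dependence facts of this kind are exactly the input that constraint-based procedures such as FCI reason from.

```python
# Minimal sketch (illustrative model): a latent confounder L -> X, L -> Y with
# L unobserved. X and Y end up dependent with no causal arrow between them,
# which is the signature situation for causally insufficient systems.
import math
import random

random.seed(0)
n = 20000
l = [random.gauss(0, 1) for _ in range(n)]     # latent confounder (unobserved)
x = [li + random.gauss(0, 1) for li in l]      # X := L + independent noise
y = [li + random.gauss(0, 1) for li in l]      # Y := L + independent noise

def corr(u, v):
    """Sample Pearson correlation of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

r = corr(x, y)
print(f"corr(X, Y) = {r:.2f}")   # theoretically 0.5 here: dependence from L alone
assert abs(r - 0.5) < 0.05
```

An algorithm assuming causal sufficiency would be forced to orient an edge between X and Y; maximal ancestral graphs let the inference procedure represent the latent-confounder possibility instead.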
Finding Event, Temporal and Causal Structure in Text: A Machine Learning Approach
, 2007
Causation, Prediction . . .
, 1997
Abstract
Causal inference is commonly viewed in two steps: (1) Represent the empirical data in terms of a probability distribution. (2) Draw causal conclusions from the conditional independencies exhibited in that distribution. I challenge this reconstruction by arguing that the empirical data are often better partitioned into different domains and represented by a separate probability distribution within each domain. For then their similarities and differences provide a wealth of relevant causal information. Computer simulations confirm this hunch, and the results are explained in terms of a distinction between prediction and accommodation, and William Whewell’s consilience of inductions. If the diagnosis is correct, then the standard notion of the empirical distinguishability, or equivalence, of causal models needs revision, and the idea that cause can be defined in terms of probability is far more plausible than before.