Results 1–10 of 326
Probabilistic Horn abduction and Bayesian networks
 Artificial Intelligence
, 1993
Abstract

Cited by 332 (39 self)
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
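The core idea summarized above — explanations built from unconditionally independent hypotheses — can be sketched in a few lines. The hypothesis names and prior probabilities below are purely illustrative, not taken from the paper: independence makes the prior probability of an explanation simply the product of its hypotheses' priors.

```python
# Minimal sketch of probabilistic Horn abduction with independent hypotheses.
# All hypothesis names and numbers here are invented for illustration.

from math import prod

# Assumable hypotheses with associated prior probabilities.
priors = {"flu": 0.05, "allergy": 0.10, "measles": 0.01}

def explanation_probability(hypotheses):
    """Prior probability of a set of (unconditionally) independent hypotheses."""
    return prod(priors[h] for h in hypotheses)

# Two competing explanations of an observation such as "sneezing":
p1 = explanation_probability({"flu"})
p2 = explanation_probability({"allergy"})

# Relative plausibility, normalised over the explanations found:
total = p1 + p2
print(p1 / total, p2 / total)
```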
A Theory Of Inferred Causation
, 1991
Abstract

Cited by 250 (36 self)
This paper concerns the empirical basis of causation, and addresses the following issues: (1) the clues that might prompt people to perceive causal relationships in uncontrolled observations; (2) the task of inferring causal models from these clues; and (3) whether the models inferred tell us anything useful about the causal mechanisms that underlie the observations. We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation.
A theory of causal learning in children: Causal maps and Bayes nets
 Psychological Review
, 2004
Abstract

Cited by 229 (45 self)
The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate “causal map” of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or Bayes nets. Children’s causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
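The causal Bayes net formalism invoked in this abstract can be made concrete with a toy model. The graph structure and all conditional probabilities below are invented for illustration: a confounded triangle (Smoking → Tar, Smoking → Cancer, Tar → Cancer) in which observing Tar differs from intervening on it, because the do-operation cuts the incoming Smoking → Tar edge.

```python
# Toy causal Bayes net: Smoking -> Tar, Smoking -> Cancer, Tar -> Cancer.
# All probabilities are made up for illustration.

P_smoke = 0.3
P_tar = {True: 0.9, False: 0.05}                      # P(Tar=1 | Smoking)
P_cancer = {(True, True): 0.5, (True, False): 0.3,    # P(Cancer=1 | Smoking, Tar)
            (False, True): 0.2, (False, False): 0.01}

def joint(s, t, c):
    """Full joint probability factorised along the graph."""
    p = P_smoke if s else 1 - P_smoke
    p *= P_tar[s] if t else 1 - P_tar[s]
    pc = P_cancer[(s, t)]
    return p * (pc if c else 1 - pc)

# Observational: P(Cancer=1 | Tar=1), by conditioning in the joint.
num = sum(joint(s, True, True) for s in (True, False))
den = sum(joint(s, True, c) for s in (True, False) for c in (True, False))
p_obs = num / den

# Interventional: P(Cancer=1 | do(Tar=1)) -- Smoking keeps its prior,
# since the intervention severs the Smoking -> Tar edge.
p_do = sum((P_smoke if s else 1 - P_smoke) * P_cancer[(s, True)]
           for s in (True, False))

print(round(p_obs, 3), round(p_do, 3))
```

Observation inflates the cancer probability relative to intervention, because observed tar is also evidence for the common cause.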
Predictive and diagnostic learning within causal models: Asymmetries in cue competition
 Journal of Experimental Psychology: General
, 1992
Abstract

Cited by 91 (15 self)
Several researchers have recently claimed that higher order types of learning, such as categorization and causal induction, can be reduced to lower order associative learning. These claims are based in part on reports of cue competition in higher order learning, apparently analogous to blocking in classical conditioning. Three experiments are reported in which subjects had to learn to respond on the basis of cues that were defined either as possible causes of a common effect (predictive learning) or as possible effects of a common cause (diagnostic learning). The results indicate that diagnostic and predictive reasoning, far from being identical as predicted by associationistic models, are not even symmetrical. Although cue competition occurs among multiple possible causes during predictive learning, multiple possible effects need not compete during diagnostic learning. The results favor a causal-model theory.
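The associative account of cue competition that these experiments test can be illustrated with the classic Rescorla–Wagner rule, under which a pretrained cue "blocks" learning about a redundant added cue. The cue names, learning rate, and trial counts below are arbitrary illustration, not the paper's design.

```python
# Blocking under the classic Rescorla-Wagner learning rule.
# Parameters and trial structure are arbitrary, for illustration only.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (present_cues, outcome) pairs; returns cue strengths."""
    V = {}
    for cues, outcome in trials:
        v_total = sum(V.get(c, 0.0) for c in cues)
        error = (lam if outcome else 0.0) - v_total  # shared prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Phase 1: cue A alone predicts the outcome.
# Phase 2: A and B together predict the same outcome.
trials = [({"A"}, True)] * 20 + [({"A", "B"}, True)] * 20
V = rescorla_wagner(trials)

# A has absorbed nearly all the associative strength, so B is "blocked":
print(round(V["A"], 3), round(V["B"], 3))
```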
Bayesian Epistemology
, 2003
Abstract

Cited by 75 (12 self)
Bayesian epistemology addresses epistemological problems with the help of the mathematical theory of probability. It turns out that the probability calculus is especially suited to represent degrees of belief (credences) and to deal with questions of belief change, confirmation, evidence, justification, and coherence.
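The moves this abstract names — credences as probabilities, belief change as conditionalisation, confirmation as probability raising — fit in a few lines. The numbers below are illustrative, not from the work itself.

```python
# Bayesian belief change by conditionalisation; confirmation as
# probability raising. All numbers are invented for illustration.

prior_h = 0.01           # initial credence in hypothesis H
p_e_given_h = 0.9        # likelihood of evidence E if H is true
p_e_given_not_h = 0.1    # likelihood of E if H is false

# Total probability of the evidence, then Bayes' theorem:
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

# E confirms H just in case it raises the credence in H:
confirms = posterior_h > prior_h
print(round(posterior_h, 4), confirms)
```

Even strongly diagnostic evidence leaves the posterior modest here, because the prior is low — the familiar base-rate effect.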
The Theoretical Status of Latent Variables
 Psychological Review
, 2003
Abstract

Cited by 68 (5 self)
This article examines the theoretical status of latent variables as used in modern test theory models. First, it is argued that a consistent interpretation of such models requires a realist ontology for latent variables. Second, the relation between latent variables and their indicators is discussed. It is maintained that this relation can be interpreted as a causal one but that in measurement models for interindividual differences the relation does not apply to the level of the individual person. To substantiate intraindividual causal conclusions, one must explicitly represent individual level processes in the measurement model. Several research strategies that may be useful in this respect are discussed, and a typology of constructs is proposed on the basis of this analysis. The need to link individual processes to latent variable models for interindividual differences is emphasized. Consider the following sentence: “Einstein would not have been able to come up with his E = mc² had he not possessed such an extraordinary intelligence.” What does this sentence express? It relates observable behavior (Einstein’s writing E = mc²) to an unobservable attribute (his extraordinary intelligence), and it does so by assigning to the unobservable attribute a causal role in
Causal models and the acquisition of category structure
 Journal of Experimental Psychology: General
, 1995
Abstract

Cited by 51 (11 self)
This article proposes that learning of categories based on cause-effect relations is guided by causal models. In addition to incorporating domain-specific knowledge, causal models can be based on knowledge of such general structural properties as the direction of the causal arrow and the variability of causal variables. Five experiments tested the influence of common-cause models and common-effect models on the ease of learning linearly separable and nonlinearly separable categories. The results show that causal models guide the interpretation of otherwise identical learning inputs, and that learning difficulty is determined by the fit between the structural implications of the causal models and the structure of the learning domain. These influences of the general properties of causal models were obtained across several different content domains, including domains for which subjects lacked prior knowledge. Tasks as apparently diverse as classical conditioning, category learning, and causal induction often require the learner to combine multiple cues in order to elicit a response. The cues may be conditioned stimuli (in condition
Searching for the causal structure of a vector autoregression
 Oxford Bulletin of Economics and Statistics
, 2003
Abstract

Cited by 42 (2 self)
We provide an accessible introduction to graph-theoretic methods for causal analysis. Building on the work of Swanson and Granger (Journal of the American Statistical Association, Vol. 92, pp. 357–367, 1997), and generalizing to a larger class of models, we show how to apply graph-theoretic methods to selecting the causal order for a structural vector autoregression (SVAR). We evaluate the PC (causal search) algorithm in a Monte Carlo study. The PC algorithm uses tests of conditional independence to select among the possible causal orders – or at least to reduce the admissible causal orders to a narrow equivalence class. Our findings suggest that graph-theoretic methods may prove to be a useful tool in the analysis of SVARs. I. The problem of causal order. Drawing on recent work on the graph-theoretic analysis of causality, we propose and evaluate a statistical procedure for identifying the contemporaneous causal order of a structural vector autoregression. *We thank Marcus Cuda for his help with programming and computational design, Derek Stimel and Ryan Brady for able research assistance, and Oscar Jorda, Stephen Perez, and the participants
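The primitive operation the PC algorithm relies on, per the abstract, is a conditional-independence test. A common continuous-data choice is the partial correlation; the sketch below computes it for one conditioning variable on synthetic data in which z causes both x and y. The Fisher z significance test and the full PC search over conditioning sets are omitted, and the data-generating numbers are invented.

```python
# Conditional independence via partial correlation, the primitive a
# PC-style search uses. Synthetic data: z is a common cause of x and y.

import math
import random

random.seed(0)
n = 2000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]   # x <- z + noise
y = [zi + random.gauss(0, 1) for zi in z]   # y <- z + noise

def corr(a, b):
    """Sample Pearson correlation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / math.sqrt(va * vb)

def partial_corr(a, b, c):
    """Correlation of a and b after partialling out one conditioning variable c."""
    r_ab, r_ac, r_bc = corr(a, b), corr(a, c), corr(b, c)
    return (r_ab - r_ac * r_bc) / math.sqrt((1 - r_ac**2) * (1 - r_bc**2))

# x and y are marginally correlated, but (nearly) independent given z:
print(round(corr(x, y), 2), round(partial_corr(x, y, z), 2))
```

A PC-style search would run such tests over growing conditioning sets to prune edges, leaving an equivalence class of causal orders.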
Causal learning across domains
 Developmental Psychology
, 2004
Abstract

Cited by 42 (16 self)
Five studies investigated (a) children’s ability to use the dependent and independent probabilities of events to make causal inferences and (b) the interaction between such inferences and domain-specific knowledge. In Experiment 1, preschoolers used patterns of dependence and independence to make accurate causal inferences in the domains of biology and psychology. Experiment 2 replicated the results in the domain of biology with a more complex pattern of conditional dependencies. In Experiment 3, children used evidence about patterns of dependence and independence to craft novel interventions across domains. In Experiments 4 and 5, children’s sensitivity to patterns of dependence was pitted against their domain-specific knowledge. Children used conditional probabilities to make accurate causal inferences even when asked to violate domain boundaries. The past two decades of research have demonstrated that young children understand cause and effect in a wide range of contexts. By the age of 4, children’s folk physics includes knowledge about the causal relationship between object properties and object motion
An Extended Class of Instrumental Variables for the Estimation of Causal Effects
 UCSD Dept. of Economics Discussion Paper
, 1996
Abstract

Cited by 41 (15 self)
This paper builds on the structural equations, treatment effect, and machine learning literatures to provide a causal framework that permits the identification and estimation of causal effects from observational studies. We begin by providing a causal interpretation for standard exogenous regressors and standard “valid” and “relevant” instrumental variables. We then build on this interpretation to characterize extended instrumental variables (EIV) methods, that is, methods that make use of variables that need not be valid instruments in the standard sense, but that are nevertheless instrumental in the recovery of causal effects of interest. After examining special cases of single and double EIV methods, we provide necessary and sufficient conditions for the identification of causal effects by means of EIV and provide consistent and asymptotically normal estimators for the effects of interest.
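The standard "valid" instrument baseline that these extended methods generalize can be illustrated with the textbook IV estimator: with an instrument z correlated with the regressor x and affecting y only through x, the causal slope is cov(z, y) / cov(z, x). The data-generating process below is synthetic (true effect 2, hidden confounder u) and is not the paper's setup.

```python
# Textbook IV estimation versus confounded OLS on synthetic data.
# True causal effect beta = 2; u is an unobserved confounder of x and y.

import random

random.seed(1)
n = 5000
beta = 2.0
data = []
for _ in range(n):
    u = random.gauss(0, 1)                  # unobserved confounder
    z = random.gauss(0, 1)                  # valid instrument
    x = z + u + random.gauss(0, 1)          # regressor driven by z and u
    y = beta * x + u + random.gauss(0, 1)   # outcome confounded through u
    data.append((z, x, y))

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

zs, xs, ys = zip(*data)
ols_slope = cov(xs, ys) / cov(xs, xs)   # biased upward by the confounder
iv_slope = cov(zs, ys) / cov(zs, xs)    # consistent for beta

print(round(ols_slope, 2), round(iv_slope, 2))
```

OLS absorbs the confounding path through u, while the instrument isolates variation in x that is independent of u, recovering the causal slope.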