Results 11–20 of 63
Using Physical Theories to Infer Hidden Causal Structure
 In Proceedings of the 26th
, 2004
Abstract

Cited by 15 (8 self)
We argue that human judgments about hidden causal structure can be explained as the operation of domain-general statistical inference over causal models constructed using domain knowledge. We present Bayesian models of causal induction in two previous experiments and a new study. Hypothetical causal models are generated by theories expressing two essential aspects of abstract knowledge about causal mechanisms: which causal relations are plausible, and what functional form they take.
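The kind of domain-general statistical inference described above can be sketched as Bayesian model comparison between two candidate causal structures. The parameter values below are illustrative assumptions, not the models from the paper:

```python
from math import prod

def likelihood(data, p_effect_given_cause, p_effect_base):
    # data: list of (cause_present, effect_present) pairs, one per trial
    def p_effect(c):
        return p_effect_given_cause if c else p_effect_base
    return prod(p_effect(c) if e else 1 - p_effect(c) for c, e in data)

def posterior_causal(data, prior=0.5):
    # H1: the cause raises the effect's probability (illustrative numbers)
    l1 = likelihood(data, p_effect_given_cause=0.9, p_effect_base=0.1)
    # H0: no causal link; the effect keeps its base rate either way
    l0 = likelihood(data, p_effect_given_cause=0.1, p_effect_base=0.1)
    return prior * l1 / (prior * l1 + (1 - prior) * l0)

# Effect tracks the cause perfectly over four trials:
trials = [(1, 1), (1, 1), (0, 0), (0, 0)]
print(posterior_causal(trials))  # about 0.988
```

With only four consistent trials the posterior already strongly favours the causal link, which is the flavour of inference the abstract attributes to human learners.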
An algebra of human concept learning
 Journal of Mathematical Psychology
, 2006
Abstract

Cited by 15 (5 self)
An important element of learning from examples is the extraction of patterns and regularities from data. This paper investigates the structure of patterns in data defined over discrete features, i.e. features with two or more qualitatively distinct values. Any such pattern can be algebraically decomposed into a spectrum of component patterns, each of which is a simpler or more atomic “regularity.” Each component regularity involves a certain number of features, referred to as its degree. Regularities of lower degree represent simpler or more coarse patterns in the original pattern, while regularities of higher degree represent finer or more idiosyncratic patterns. The full spectral breakdown of a pattern into component regularities of minimal degree, referred to as its power series, expresses the original pattern in terms of the regular rules or patterns it obeys, amounting to a kind of “theory” of the pattern. The number of regularities at various degrees necessary to represent the pattern is tabulated in its power spectrum, which expresses how much of a pattern’s structure can be explained by regularities of various levels of complexity. A weighted mean of the pattern’s spectral power gives a useful numeric summary of its overall complexity, called its algebraic complexity. The basic theory of algebraic decomposition is extended in several ways, including algebraic accounts of the typicality of individual objects within concepts, and estimation of the power series from noisy data. Finally, some relations between these algebraic quantities and empirical data are discussed.
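The lowest level of this decomposition can be illustrated with a toy sketch (a drastic simplification of the full algebra, with hypothetical helper names): a degree-1 regularity is a single feature that holds a constant value across every object in a concept.

```python
# Toy sketch only: extract degree-1 regularities from a concept defined
# over binary features. Higher-degree regularities and the full power
# series are not modelled here.

def degree_one_regularities(concept):
    # concept: a set of equal-length tuples of 0/1 feature values
    n = len(next(iter(concept)))
    rules = []
    for i in range(n):
        values = {obj[i] for obj in concept}
        if len(values) == 1:  # feature i is invariant across the whole concept
            rules.append((i, values.pop()))
    return rules

# Objects (1, 0, 1) and (1, 1, 1): features 0 and 2 are constant, feature 1 varies.
print(degree_one_regularities({(1, 0, 1), (1, 1, 1)}))  # [(0, 1), (2, 1)]
```

Each returned pair `(feature, value)` is one simple rule the concept obeys; counting such rules at each degree is, roughly, what the power spectrum tabulates.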
Two proposals for causal grammar
 In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation
, 2007
Abstract

Cited by 14 (8 self)
In the previous chapter (Tenenbaum, Griffiths, & Niyogi, this volume), we introduced a framework for thinking about the structure, function, and acquisition of intuitive theories inspired by an analogy to the research program of generative grammar in linguistics. We argued that a principal function for intuitive theories, just as for grammars for natural
Learning causal schemata
 In Proceedings of the 29th Annual Conference of the Cognitive Science Society (pp. 389–394). Austin, TX: Cognitive Science Society
Abstract

Cited by 14 (9 self)
Causal inferences about sparsely observed objects are often supported by causal schemata, or systems of abstract causal knowledge. We present a hierarchical Bayesian framework that discovers simple causal schemata given only raw data as input. Given a set of objects and observations of causal events involving some of these objects, our framework simultaneously discovers the causal type of each object, the causal powers of these types, the characteristic features of these types, and the nature of the interactions between these types. Several behavioral studies confirm that humans are able to discover causal schemata, and we show that our framework accounts for data collected by Lien and Cheng, and by Shanks and Darby.
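The discovery of causal types can be illustrated, very loosely, by grouping objects whose observed causal-event outcomes are indistinguishable. This toy sketch is far simpler than the hierarchical Bayesian framework in the paper, and the data are hypothetical:

```python
# Toy sketch: candidate "causal types" as groups of objects with identical
# observed outcome profiles across the same set of causal events.

from collections import defaultdict

def infer_types(events):
    # events: dict mapping object -> tuple of observed outcomes (1 = effect)
    groups = defaultdict(list)
    for obj, outcomes in events.items():
        groups[outcomes].append(obj)
    return [sorted(members) for members in groups.values()]

events = {"a": (1, 1, 0), "b": (1, 1, 0), "c": (0, 0, 0)}
print(infer_types(events))  # [['a', 'b'], ['c']]
```

Objects `a` and `b` behave alike and fall into one candidate type; a probabilistic version would instead score type assignments rather than require exact matches.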
Intuitive theories as grammars for causal inference
 In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation
, 2007
Abstract

Cited by 13 (7 self)
This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing
Evaluating the causal role of unobserved variables
 In R. Alterman & D. Kirsh (Eds.), Proceedings of the 25th annual conference of the Cognitive Science Society (pp. 734–739). Mahwah, NJ: Lawrence Erlbaum Associates
, 2003
Abstract

Cited by 11 (1 self)
Current psychological models of causal induction assume that causal relationships are inferred based on observations about whether the cause and effect are present or absent. The current study investigated how people infer the causal roles of unobserved events. In Experiment 1 we demonstrate that participants are indeed willing to evaluate the causal roles of unobserved events. We then suggest that the basis for these judgments may be situations in which effects occur in the absence of observed causes. Experiment 2 provides evidence that such information does influence participants’ judgments about unobserved causes.
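The suggested basis for such judgments (effects occurring in the absence of observed causes) can be sketched as a Bayesian comparison between a hidden-cause hypothesis and a spontaneous-effect hypothesis. All numbers are illustrative assumptions, not the study's model:

```python
# Hedged sketch: how often the effect occurs on cause-absent trials
# shifts the posterior toward an unobserved cause.

def posterior_hidden_cause(n_effect_without_cause, n_trials_without_cause,
                           prior=0.5):
    k, n = n_effect_without_cause, n_trials_without_cause
    # H1: a hidden cause is often active on these trials (illustrative 0.8);
    # H0: effects without causes are rare accidents (illustrative 0.05).
    def binom_lik(p):
        return p ** k * (1 - p) ** (n - k)
    l1, l0 = binom_lik(0.8), binom_lik(0.05)
    return prior * l1 / (prior * l1 + (1 - prior) * l0)

# Four of five cause-absent trials still show the effect:
print(posterior_hidden_cause(4, 5))
```

When the effect almost never appears without the observed cause, the same computation returns a posterior near zero, so the hidden cause is only inferred where the data demand it.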
Elder Care via Intention Recognition and Evolution Prospection
 Procs. 18th Intl. Conf. on Applications of Declarative Programming and Knowledge Management (INAP’09)
, 2009
Abstract

Cited by 10 (7 self)
We explore and exemplify the application in the Elder Care context of the ability to perform Intention Recognition and of wielding Evolution Prospection methods. This is achieved by means of an articulate use of Causal Bayes Nets (for heuristically gauging probable general intentions), combined with specific generation of plans involving preferences (for checking which such intentions are plausibly being carried out in the specific situation at hand). The overall approach is formulated within one coherent and general logic programming framework and implemented system. The paper recaps required background and illustrates the approach via an extended application example.
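The Causal Bayes Net step for gauging probable intentions can be sketched as a simple posterior ranking over candidate intentions given an observed action. The intentions, actions, and probabilities below are hypothetical, and the paper's logic-programming machinery is not modelled:

```python
# Hedged sketch: rank candidate intentions by P(intention | observed action)
# using Bayes' rule over hypothetical priors and action likelihoods.

def rank_intentions(priors, likelihoods, action):
    # priors: intention -> P(intention)
    # likelihoods: intention -> {action: P(action | intention)}
    scores = {i: priors[i] * likelihoods[i].get(action, 0.0) for i in priors}
    z = sum(scores.values())
    return sorted(((p / z, i) for i, p in scores.items()), reverse=True)

priors = {"drink": 0.3, "read": 0.7}
lik = {"drink": {"reach_cup": 0.8}, "read": {"reach_cup": 0.1}}
print(rank_intentions(priors, lik, "reach_cup"))
```

Reaching for a cup makes "drink" the most probable intention despite its lower prior; in the paper, such ranked intentions would then be checked against generated plans.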
The Role of Causality in Judgment Under Uncertainty
Abstract

Cited by 9 (0 self)
Leading accounts of judgment under uncertainty evaluate performance within purely statistical frameworks, holding people to the standards of classical Bayesian (Tversky & Kahneman, 1974) or frequentist (Gigerenzer & Hoffrage, 1995) norms. We argue that these frameworks have limited ability to explain the success and flexibility of people's real-world judgments, and propose an alternative normative framework based on Bayesian inferences over causal models. Deviations from traditional norms of judgment, such as "base-rate neglect", may then be explained in terms of a mismatch between the statistics given to people and the causal models they intuitively construct to support probabilistic reasoning. Four experiments show that when a clear mapping can be established from given statistics to the parameters of an intuitive causal model, people are more likely to use the statistics appropriately, and that when the classical and causal Bayesian norms differ in their prescriptions, people's judgments are more consistent with causal Bayesian norms.
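The causal-Bayesian reading of a base-rate problem can be made concrete with a short worked computation. The numbers are the standard textbook mammogram figures, used here only as an illustration: the disease causes a positive test, and the false-positive rate stands in for alternative causes of the test result.

```python
# Worked sketch of the classic base-rate computation via Bayes' rule.

def p_disease_given_positive(base_rate, hit_rate, false_positive_rate):
    # Total probability of a positive test from either causal pathway:
    p_pos = base_rate * hit_rate + (1 - base_rate) * false_positive_rate
    return base_rate * hit_rate / p_pos

# 1% base rate, 80% sensitivity, 9.6% false positives:
print(round(p_disease_given_positive(0.01, 0.8, 0.096), 3))  # 0.078
```

Base-rate neglect amounts to answering something near the 80% hit rate instead of roughly 8%; the causal framing makes clear why the low base rate dominates.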
Finding Optimal Bayesian Network Given a Super-Structure
Abstract

Cited by 7 (0 self)
Classical approaches used to learn Bayesian network structure from data have disadvantages in terms of complexity and lower accuracy of their results. However, a recent empirical study has shown that a hybrid algorithm markedly improves accuracy and speed: it learns a skeleton with an independence-test (IT) approach and constrains the directed acyclic graphs (DAGs) considered during the search-and-score phase. Building on this, we formalize the structural constraint by introducing the concept of a super-structure S, which is an undirected graph that restricts the search to networks whose skeleton is a subgraph of S. We develop a super-structure-constrained optimal search (COS); its time complexity is upper bounded by O(γ_m^n), where γ_m < 2 depends on the maximal degree m of S. Empirically, complexity depends on the average degree m̃, and sparse structures allow larger graphs to be calculated. Our algorithm is faster than an unconstrained optimal search by several orders of magnitude and even finds more accurate results when given a sound super-structure. In practice, S can be approximated by IT approaches; the significance level of the tests controls its sparseness, allowing the trade-off between speed and accuracy to be tuned. For incomplete super-structures, a greedily post-processed version (COS+) still significantly outperforms other heuristic searches. Keywords: Bayesian networks, structure learning, optimal search, super-structure, connected subset
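The pruning effect of a super-structure can be illustrated with a brute-force sketch (not the paper's COS algorithm): enumerate all DAGs on a few nodes and keep only those whose skeleton is a subgraph of S.

```python
# Toy sketch: count how a super-structure S shrinks the DAG search space.

from itertools import product

def is_acyclic(edges, n):
    # Kahn-style check: repeatedly peel off nodes with no incoming edge.
    remaining, edges = set(range(n)), set(edges)
    while remaining:
        roots = {v for v in remaining if all(j != v for (_, j) in edges)}
        if not roots:
            return False  # every remaining node has an incoming edge: cycle
        remaining -= roots
        edges = {(i, j) for (i, j) in edges if i in remaining}
    return True

def all_dags(n):
    arcs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for mask in product([0, 1], repeat=len(arcs)):
        dag = {a for a, used in zip(arcs, mask) if used}
        if is_acyclic(dag, n):
            yield dag

def skeleton_within(dag, S):
    # The skeleton of the DAG must be a subgraph of the undirected graph S.
    return all(frozenset(a) in S for a in dag)

# Super-structure: the undirected chain 0 - 1 - 2.
S = {frozenset({0, 1}), frozenset({1, 2})}
total = sum(1 for _ in all_dags(3))
kept = sum(1 for d in all_dags(3) if skeleton_within(d, S))
print(total, kept)  # 25 DAGs on 3 nodes; 9 respect the super-structure
```

Even on three nodes the chain super-structure cuts the candidate set from 25 DAGs to 9; the paper's contribution is an optimal search whose cost scales with the sparseness of S rather than with the full DAG space.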