Results 1–6 of 6
Theory Acquisition as Stochastic Search
 In Proceedings of …, 2010
Abstract

Cited by 4 (2 self)
We present an algorithmic model for the development of children’s intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. Our algorithm performs stochastic search at two levels of abstraction – an outer loop in the space of theories, and an inner loop in the space of explanations or models generated by each theory given a particular dataset – in order to discover the theory that best explains the observed data. We show that this model is capable of learning correct theories in several everyday domains, and discuss the dynamics of learning in the context of children’s cognitive development.
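The two-level stochastic search this abstract describes can be illustrated with a toy sketch. This is not the paper's model: the "theories" here are just three coin-bias hypotheses scored directly by their likelihood (standing in for the inner loop over explanations), with a Metropolis outer loop over the theory space; the dataset and all names are hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical setup: each "theory" is a coin-bias hypothesis; scoring a
# theory against data stands in for the paper's inner loop over explanations.
data = [1, 1, 0, 1, 1, 1, 0, 1]      # observed outcomes (6 heads, 2 tails)
theories = [0.2, 0.5, 0.8]           # candidate theories (bias values)

def log_likelihood(theta, data):
    return sum(math.log(theta if x else 1 - theta) for x in data)

def outer_loop(steps=2000):
    """Metropolis search in the space of theories."""
    current = random.choice(theories)
    counts = {t: 0 for t in theories}
    for _ in range(steps):
        proposal = random.choice(theories)      # symmetric proposal
        log_a = log_likelihood(proposal, data) - log_likelihood(current, data)
        if math.log(random.random()) < log_a:   # Metropolis accept rule
            current = proposal
        counts[current] += 1
    # return the theory the chain spent the most time in
    return max(counts, key=counts.get)

best = outer_loop()
```

The chain concentrates on the theory that best explains the data, here the 0.8-bias hypothesis, since downhill moves are accepted only with probability proportional to the likelihood ratio.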
Ping pong in Church: Productive use of concepts in human probabilistic inference
 In Proceedings of the 34th …, 2012
Abstract

Cited by 2 (2 self)
How do people make inferences from complex patterns of evidence across diverse situations? What does a computational model need in order to capture the abstract knowledge people use for everyday reasoning? In this paper, we explore a novel modeling framework based on the probabilistic language of thought (PLoT) hypothesis, which conceptualizes thinking in terms of probabilistic inference over compositionally structured representations. The core assumptions of the PLoT hypothesis are realized in the probabilistic programming language Church (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008). Using “ping pong tournaments” as a case study, we show how a single Church program concisely represents the concepts required to specify inferences from diverse patterns of evidence. In two experiments, we demonstrate a very close fit between our model’s predictions and participants’ judgments. Our model accurately predicts how people reason with confounded and indirect evidence and how different sources of information are integrated.
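Church itself is a Scheme-style language, but the style of inference the abstract describes can be sketched in Python with rejection sampling over a tiny generative model. The players, noise scales, and conditioning statements below are hypothetical illustrations, not the paper's actual tournament model.

```python
import random

random.seed(1)

# Hypothetical Church-style generative model: players have latent strengths;
# a match is won by whichever player's noisy performance is higher.
def sample_world():
    strength = {name: random.gauss(0, 1) for name in "ABC"}
    def beats(x, y):
        # fresh performance noise on every match
        return strength[x] + random.gauss(0, 0.5) > strength[y] + random.gauss(0, 0.5)
    return strength, beats

def infer(n=20000):
    """Rejection sampling: condition on observed matches, query a new one."""
    wins, total = 0, 0
    for _ in range(n):
        strength, beats = sample_world()
        # condition: A beat B, and B beat C
        if beats("A", "B") and beats("B", "C"):
            total += 1
            wins += beats("A", "C")     # query: does A beat C?
    return wins / total

p = infer()
```

Conditioning on A beating B and B beating C shifts belief toward A being the stronger player, so the queried probability that A beats C comes out well above chance, the kind of chained inference a single generative program supports without any match-specific rules.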
Bayesian Policy Search with Policy Priors
 In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
Abstract
We consider the problem of learning to act in partially observable, continuous-state-and-action worlds where we have abstract prior knowledge about the structure of the optimal policy in the form of a distribution over policies. Using ideas from planning-as-inference reductions and Bayesian unsupervised learning, we cast Markov Chain Monte Carlo as a stochastic, hill-climbing policy search algorithm. Importantly, this algorithm’s search bias is directly tied to the prior and its MCMC proposal kernels, which means we can draw on the full Bayesian toolbox to express the search bias, including nonparametric priors and structured, recursive processes like grammars over action sequences. Furthermore, we can reason about uncertainty in the search bias itself by constructing a hierarchical prior and reasoning about latent variables that determine the abstract structure of the policy. This yields an adaptive search algorithm—our algorithm learns to learn a structured policy efficiently. We show how inference over the latent variables in these policy priors enables intra- and inter-task transfer of abstract knowledge. We demonstrate the flexibility of this approach by learning meta-search biases, by constructing a nonparametric finite state controller to model memory, by discovering motor primitives using a simple grammar over primitive actions, and by combining all three.
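A minimal sketch of the planning-as-inference idea, under assumptions of my own rather than the paper's domains: policies are fixed-length sequences over a toy action set, exponentiated reward plays the role of a likelihood, and a single-action proposal kernel makes Metropolis-Hastings behave like stochastic hill-climbing over policies.

```python
import math
import random

random.seed(2)

# Hypothetical problem: reward counts how many actions match a hidden
# target sequence; the uniform prior and one-action resampling kernel
# together determine the search bias.
ACTIONS = ["L", "R"]
TARGET = ["R", "L", "R", "R", "L", "R"]   # hidden optimal policy

def reward(policy):
    return sum(a == t for a, t in zip(policy, TARGET))

def mcmc_policy_search(steps=5000, temp=0.5):
    """Metropolis-Hastings over policies, biased uphill by exp(reward/temp)."""
    policy = [random.choice(ACTIONS) for _ in TARGET]
    best = list(policy)
    for _ in range(steps):
        proposal = list(policy)
        i = random.randrange(len(proposal))
        proposal[i] = random.choice(ACTIONS)     # proposal kernel: resample one action
        log_a = (reward(proposal) - reward(policy)) / temp
        if math.log(random.random()) < log_a:    # accept high-reward proposals
            policy = proposal
        if reward(policy) > reward(best):
            best = list(policy)
    return best

best = mcmc_policy_search()
```

Because uphill proposals are always accepted and downhill ones only occasionally, the chain climbs toward the optimal policy while retaining the ability to escape local plateaus; richer priors or structured kernels would slot into the same accept/reject skeleton.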
Tuning Your Priors to the World
, 2012
Abstract
The idea that perceptual and cognitive systems must incorporate knowledge about the structure of the environment has become a central dogma of cognitive theory. In a Bayesian context, this idea is often realized in terms of “tuning the prior”—widely assumed to mean adjusting prior probabilities so that they match the frequencies of events in the world. This kind of “ecological” tuning has often been held up as an ideal of inference, in fact defining an “ideal observer.” But widespread as this viewpoint is, it directly contradicts Bayesian philosophy of probability, which views probabilities as degrees of belief rather than relative frequencies, and explicitly denies that they are objective characteristics of the world. Moreover, tuning the prior to observed environmental frequencies is subject to overfitting, meaning in this context overtuning to the environment, which leads (ironically) to poor performance in future encounters with the same environment. Whenever there is uncertainty about the environment—which there almost always is—an agent’s prior should be biased away from ecological relative frequencies and toward simpler and more entropic priors.
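The overfitting point can be made concrete with a small numerical sketch (my example, not the authors'): a prior tuned exactly to observed sample frequencies assigns zero probability to any event that happened not to occur in the sample, and so suffers infinite held-out log-loss, while a smoother, higher-entropy prior (here, simple Laplace smoothing) does not.

```python
import math
import random
from collections import Counter

random.seed(3)

CATEGORIES = ["a", "b", "c", "d"]
TRUE_P = [0.55, 0.25, 0.15, 0.05]     # true environmental frequencies

# A small training sample in which category "d" happens not to occur.
train = ["a"] * 6 + ["b"] * 3 + ["c"]
test = random.choices(CATEGORIES, weights=TRUE_P, k=5000)
counts = Counter(train)               # missing keys count as zero

def avg_log_loss(prob):
    return -sum(math.log(prob(x)) for x in test) / len(test)

def empirical(x):
    # prior tuned exactly to observed relative frequencies
    return counts[x] / len(train)

def smoothed(x):
    # Laplace smoothing: a simpler, more entropic prior
    return (counts[x] + 1) / (len(train) + len(CATEGORIES))

try:
    raw_loss = avg_log_loss(empirical)
except ValueError:                    # log(0): "d" never seen in training
    raw_loss = float("inf")
smooth_loss = avg_log_loss(smoothed)
```

The frequency-matched prior fails catastrophically on the first held-out "d", whereas the smoothed prior pays only a modest, bounded cost on every trial, which is the abstract's argument for biasing priors toward higher entropy under environmental uncertainty.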
Bootstrapping in a language of thought: a formal model of conceptual change in number word learning