## Deconfounding Hypothesis Generation and Evaluation in Bayesian Models

### BibTeX

```
@misc{liz_deconfoundinghypothesis,
  author = {Elizabeth Baraff Bonawitz and Thomas L. Griffiths},
  title  = {Deconfounding Hypothesis Generation and Evaluation in Bayesian Models},
  year   = {}
}
```

### Abstract

Bayesian models of cognition are typically used to describe human learning and inference at the computational level, identifying which hypotheses people should select to explain observed data given a particular set of inductive biases. However, such an analysis can be consistent with human behavior even if people are not actually carrying out exact Bayesian inference. We analyze a simple algorithm by which people might be approximating Bayesian inference, in which a limited set of hypotheses are generated and then evaluated using Bayes' rule. Our mathematical results indicate that a purely computational-level analysis of learners using this algorithm would confound the distinct processes of hypothesis generation and hypothesis evaluation. We use a causal learning experiment to establish empirically that the processes of generation and evaluation can be distinguished in human learners, demonstrating the importance of recognizing this distinction when interpreting Bayesian models.
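The generate-then-evaluate algorithm described in the abstract can be sketched in a few lines. This is only an illustrative reconstruction, not the authors' implementation: the function name `approx_posterior`, the use of the prior as the generation distribution, and the fixed generation budget `n_generate` are all assumptions for the sake of the example.

```python
import random

def approx_posterior(hypotheses, prior, likelihood, n_generate=3, seed=0):
    """Approximate Bayesian inference by generating a limited set of
    hypotheses, then evaluating only that set with Bayes' rule.

    hypotheses: list of hypothesis labels
    prior: dict mapping hypothesis -> prior probability
           (assumed here to also serve as the generation distribution)
    likelihood: dict mapping hypothesis -> P(data | hypothesis)
    n_generate: how many hypotheses the learner generates (the budget)
    """
    rng = random.Random(seed)
    # Generation step: sample a limited subset of hypotheses
    # without replacement, weighted by the generation distribution.
    pool = list(hypotheses)
    weights = [prior[h] for h in pool]
    generated = []
    while pool and len(generated) < n_generate:
        h = rng.choices(pool, weights=weights, k=1)[0]
        i = pool.index(h)
        pool.pop(i)
        weights.pop(i)
        generated.append(h)
    # Evaluation step: Bayes' rule, renormalized over only
    # the generated hypotheses rather than the full space.
    unnorm = {h: prior[h] * likelihood[h] for h in generated}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}
```

When the generation budget covers the whole hypothesis space, this reduces to exact Bayesian inference; with a smaller budget, hypotheses that were never generated receive zero posterior mass regardless of how well they explain the data, which is exactly why generation and evaluation can come apart.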