Results 1–10 of 11
Rational approximations to rational models: Alternative algorithms for category learning
Abstract

Cited by 20 (4 self)
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of “rational process models” that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson’s (1990, 1991) Rational Model of Categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure …
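The Gibbs sampler named in this abstract operates over cluster assignments in a Chinese-restaurant-process (CRP) mixture, the nonparametric Bayesian model underlying the RMC. Below is a minimal sketch of one Gibbs sweep, assuming a user-supplied predictive log-likelihood `loglik(x, members)`; the function name and interface are illustrative, not taken from the paper.

```python
import math
import random

def crp_gibbs_sweep(data, assignments, alpha, loglik):
    """One Gibbs sweep over cluster assignments in a CRP mixture.

    `loglik(x, members)` must return the predictive log-likelihood of
    item x under a cluster with the given members ([] for a new cluster).
    """
    for i, x in enumerate(data):
        assignments[i] = None  # remove item i from its cluster
        clusters = {}
        for j, c in enumerate(assignments):
            if c is not None:
                clusters.setdefault(c, []).append(data[j])
        # candidate labels: each existing cluster, plus one fresh label
        labels = list(clusters) + [max(clusters, default=-1) + 1]
        logp = [math.log(len(clusters[c])) + loglik(x, clusters[c])
                for c in labels[:-1]]
        logp.append(math.log(alpha) + loglik(x, []))
        # sample the new assignment (log-sum-exp for stability)
        m = max(logp)
        w = [math.exp(v - m) for v in logp]
        r = random.random() * sum(w)
        acc = 0.0
        for c, wi in zip(labels, w):
            acc += wi
            if r <= acc:
                assignments[i] = c
                break
        else:
            assignments[i] = labels[-1]
    return assignments
```

Existing clusters are scored in proportion to their size and a new cluster in proportion to the concentration parameter `alpha`, mirroring the CRP prior that corresponds to the RMC's coupling probability.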
Teaching games: Statistical sampling assumptions for learning in pedagogical situations
In Proceedings of the …, 2008
Abstract

Cited by 13 (3 self)
Much of learning and reasoning occurs in pedagogical situations – situations in which teachers choose examples with the goal of having a learner infer the concept the teacher has in mind. In this paper, we present a model of teaching and learning in pedagogical settings which predicts what examples teachers should choose and what learners should infer given a teacher’s examples. We present two experiments using a novel experimental paradigm called the rectangle game. The first experiment compares people’s inferences to qualitative model predictions. The second experiment tests people in a situation where pedagogical sampling is not appropriate, ruling out alternative explanations, and suggesting that people use context-appropriate sampling assumptions. We conclude by discussing connections to broader work in inductive reasoning and cognitive …
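The pedagogical-sampling idea is naturally a fixed point: the teacher chooses examples in proportion to the learner's posterior, while the learner's posterior assumes the examples were chosen that way. A toy sketch of that recursion by fixed-point iteration follows; the function, initialization, and iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pedagogical_equilibrium(prior, likelihood, iters=50):
    """Fixed-point iteration for the teacher/learner recursion.

    prior: (H,) prior over hypotheses.
    likelihood: (H, D) matrix of P(d | h) under neutral sampling,
        used only to initialize the teacher's choice distribution.
    """
    teach = likelihood.astype(float).copy()
    for _ in range(iters):
        # learner: P_L(h | d) proportional to P_T(d | h) * prior(h)
        post = teach * prior[:, None]
        post = post / post.sum(axis=0, keepdims=True)
        # teacher: P_T(d | h) proportional to the learner's posterior
        teach = post / post.sum(axis=1, keepdims=True)
    return post, teach
```

At the fixed point, the teacher concentrates on examples that most sharply pick out the intended hypothesis, which is what lets a pedagogical learner draw stronger inferences than a neutral-sampling learner.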
Randomness and Coincidences: Reconciling Intuition and Probability Theory
, 2001
Abstract

Cited by 11 (3 self)
We argue that the apparent inconsistency between people's intuitions about chance and the normative predictions of probability theory, as expressed in judgments about randomness and coincidences, can be resolved by focussing on the evidence observations provide about the processes that generated them, rather than their likelihood. This argument is supported by probabilistic modeling of sequence and number production, together with two experiments that examine people's judgments about coincidences.
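The "evidence about generating processes" idea can be illustrated as a log-evidence ratio between a random generator and a regular one. The sketch below is a toy illustration, not the paper's actual models: the repetition-biased "regular" process and the `p_repeat` parameter are assumptions.

```python
import math

def randomness_evidence(seq, p_repeat=0.8):
    """Log evidence ratio favoring a random over a 'regular' generator.

    Random process: each symbol is an independent fair coin flip.
    Regular process (an illustrative stand-in): each symbol repeats the
    previous one with probability p_repeat. Positive values mean the
    sequence is better evidence for the random process.
    """
    log_random = len(seq) * math.log(0.5)
    log_regular = math.log(0.5)  # first symbol is a coin flip either way
    for prev, cur in zip(seq, seq[1:]):
        log_regular += math.log(p_repeat if cur == prev else 1 - p_repeat)
    return log_random - log_regular
```

On this measure an alternating sequence counts as evidence for randomness while a constant run counts against it, even though both have identical likelihood under the fair-coin model, which is the paper's core point.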
Representing Stimulus Similarity
, 2002
Abstract

Cited by 2 (2 self)
Table of contents: Declaration; Acknowledgements. 1 Prelude: The Very Idea of Representation; Types of Similarity; Is Similarity Indeterminate?; The Role of Similarity in Cognition; Summary & General Discussion. 2 Theories of Similarity: Similarity Data Sets; Spatial Representation; Featural Representation; Tree Representation; Network Representation; Alignment-Based Similarity Models; Transformational Similarity Models; Summary & General Discussion. 3 On Representational Complexity: Approaches to Model Selection; Choosing an Additive Clustering Representation; Choosing an Additive Tree Representation; Choosing a Spatial Representation; Summary & General Discussion. 4 Featural Representation: A Menagerie of Featural Models; Clustering Models; Geometric Complexity Criteria; Algorithms for Fitting Featural Models; Monte Carlo Study I: Do the Algorithms Work?; Representations of Kinship Terms; Monte Carlo Study II: Complexity; Experiment I: Faces; Experiment II: Countries …
Testing a Bayesian Measure of Representativeness Using a Large Image Database
Abstract

Cited by 1 (1 self)
How do people determine which elements of a set are most representative of that set? We extend an existing Bayesian measure of representativeness, which indicates the representativeness of a sample from a distribution, to define a measure of the representativeness of an item to a set. We show that this measure is formally related to a machine learning method known as Bayesian Sets. Building on this connection, we derive an analytic expression for the representativeness of objects described by a sparse vector of binary features. We then apply this measure to a large database of images, using it to determine which images are the most representative members of different sets. Comparing the resulting predictions to human judgments of representativeness provides a test of this measure with naturalistic stimuli, and illustrates how databases that are more commonly used in computer vision and machine learning can be used to evaluate psychological theories.
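For independent Beta-Bernoulli features, the Bayesian Sets score the abstract refers to reduces to a function linear in the item's feature vector, so every item can be scored with one matrix-vector product. A sketch under that assumption follows; the `2 * mean` hyperparameter setting is a common empirical choice, assumed here rather than taken from the paper.

```python
import numpy as np

def bayesian_sets_scores(X, query_idx, alpha=None, beta=None):
    """Score each row of X for membership in the set given by query_idx.

    X: (n_items, n_features) binary matrix. Returns the log of the
    ratio p(x | query set) / p(x) under independent Beta-Bernoulli
    features, which is linear in x.
    """
    n, d = X.shape
    if alpha is None:
        alpha = X.mean(axis=0) * 2 + 1e-6  # assumed empirical setting
    if beta is None:
        beta = (1 - X.mean(axis=0)) * 2 + 1e-6
    Q = X[query_idx]                 # the query set's feature vectors
    N = Q.shape[0]
    s = Q.sum(axis=0)                # per-feature counts in the query set
    a_t, b_t = alpha + s, beta + N - s
    const = np.sum(np.log(alpha + beta) - np.log(alpha + beta + N))
    w = np.log(a_t / alpha) - np.log(b_t / beta)  # per-feature weight
    return const + np.sum(np.log(b_t / beta)) + X @ w
```

Because the score is a single dot product per item, it scales to the large image databases the abstract describes.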
Burn-in, bias, and the rationality of anchoring
Abstract
Bayesian inference provides a unifying framework for learning, reasoning, and decision making. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind’s inference algorithm(s). We characterize the optimal time-accuracy tradeoff mathematically in terms of the number of iterations and the resulting bias as functions of time cost, error cost, and the difficulty of the inference problem. We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e. during the period known as “burn-in”. Therefore the strategy that is optimal subject to the mind’s bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model’s quantitative predictions match published data on anchoring in numerical estimation tasks. In conclusion, resource-rationality (the optimal use of finite computational resources) naturally leads to a biased mind.
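The burn-in effect the abstract describes is easy to see in a minimal Metropolis-Hastings sampler: a chain started at an "anchor" and stopped early yields estimates biased toward that anchor. The sketch below is an illustration of the general mechanism, not the paper's model; the proposal width and estimator are assumptions.

```python
import math
import random

def mh_estimate(log_post, start, n_iter, prop_sd=0.5):
    """Metropolis-Hastings with a Gaussian random-walk proposal.

    Returns the mean of all samples as the point estimate; with small
    n_iter the chain is still in burn-in, so the estimate is pulled
    toward `start` (the anchor).
    """
    x = start
    samples = []
    for _ in range(n_iter):
        prop = x + random.gauss(0.0, prop_sd)
        # accept with probability min(1, p(prop)/p(x))
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)
    return sum(samples) / len(samples)
```

With a standard normal posterior and `start=10.0`, a 20-iteration run stays far closer to the anchor than a long run, which is the anchoring bias the paper derives as resource-rational.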
Formal and Empirical Methods in Philosophy of Science
Abstract
This essay addresses the methodology of philosophy of science and illustrates how formal and empirical methods can be fruitfully combined. Special emphasis is given to the application of experimental methods to confirmation theory and to recent work on the conjunction fallacy, a key topic in the rationality debate arising from research in cognitive psychology. Several other issues can be studied in this way. In the concluding section, a brief outline is provided of three further examples.
Presuppositions, provisos, and probability
Abstract
Theories of presupposition in the tradition associated with Karttunen, Stalnaker and Heim relate presupposition satisfaction to the content of conversational participants’ epistemic states, usually modeled as sets of worlds. However, converging evidence from recent work on modality and from other areas of cognitive science suggests that epistemic states are better thought of as having the richer structure of probability distributions. I describe an account of semantic and pragmatic presupposition which combines core ideas from dynamic semantic treatments with a probabilistic model of information states and their dynamics in conversation, and argue that it predicts the core data of the proviso problem (Geurts 1996) without invoking ad hoc mechanisms as conditional strengthening accounts typically do. The frequently cited intuition that (ir)relevance is crucial follows without stipulation, and I present new cases which suggest that irrelevance is too weak to predict all cases of unconditional presuppositions, problematizing strengthening accounts which rely on it. The proposed theory is able to account for this new data and also for semi-conditional presuppositions, a sticking point for previous theories of presupposition projection. I argue that this perspective also gives us a reasonable line on several related issues, including the divergence between presupposed conditionals and conditional presuppositions, instances of the proviso problem in counterfactuals, and the contextual variation in the difficulty of accommodation.
Beliefs About What Types of Mechanisms Produce Random Sequences (DOI: 10.1002/bdm.596)
, 2008
Abstract
Although many researchers use Wagenaar’s framework for understanding the factors that people use to determine whether a process is random, the framework has never undergone empirical scrutiny. This paper uses Wagenaar’s framework as a starting point and examines the three properties of his framework—independence of events, fixed alternatives, and equiprobability. We find strong evidence to suggest that independence of events is indeed used as a cue toward randomness. Equiprobability has an effect on randomness judgments. However, it appears to work only in a limited role. Fixedness of alternatives is a complex construct that consists of multiple subconcepts. We find that each of these subconcepts influences randomness judgments, but that they exert forces in different directions. Stability of outcome ratios increases randomness judgments, while knowledge of outcome ratios decreases randomness judgments. Future directions for development of a functional framework for understanding perceptions of randomness are suggested. Copyright © 2008 John Wiley & Sons, Ltd. Keywords: randomness; equiprobability; fixed alternatives; independence of events
Machine Teaching for Bayesian Learners in the Exponential Family
Abstract
What if there is a teacher who knows the learning goal and wants to design good training data for a machine learner? We propose an optimal teaching framework aimed at learners who employ Bayesian models. Our framework is expressed as an optimization problem over teaching examples that balance the future loss of the learner and the effort of the teacher. This optimization problem is in general hard. In the case where the learner employs conjugate exponential family models, we present an approximate algorithm for finding the optimal teaching set. Our algorithm optimizes the aggregate sufficient statistics, then unpacks them into actual teaching examples. We give several examples to illustrate our framework.
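In the conjugate case this abstract describes, "optimizing aggregate sufficient statistics and unpacking them" can be made concrete with a Beta-Bernoulli learner: a teaching set is fully summarized by (n, k), the number of examples and the number of ones. The sketch below is a toy instance; the quadratic-error-plus-linear-effort objective is an assumed stand-in for the paper's loss, not its actual formulation.

```python
def teach_bernoulli(a, b, target_mean, effort=0.1, max_n=50):
    """Pick a teaching set for a Bernoulli learner with a Beta(a, b) prior.

    Searches over the aggregate sufficient statistics (n examples,
    k ones), trading posterior error against teaching effort, then
    unpacks (n, k) into an explicit list of examples.
    """
    best = None
    for n in range(max_n + 1):
        for k in range(n + 1):
            post_mean = (a + k) / (a + b + n)  # learner's posterior mean
            loss = (post_mean - target_mean) ** 2 + effort * n
            if best is None or loss < best[0]:
                best = (loss, n, k)
    _, n, k = best
    return [1] * k + [0] * (n - k)  # the unpacked teaching examples
```

For example, with a uniform Beta(1, 1) prior and a target mean of 0.75, a cheap-effort search selects two positive examples, since (1+2)/(2+2) hits the target exactly with minimal teaching effort.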