Results 1–10 of 15
Perceptual multistability as Markov Chain Monte Carlo inference
Advances in Neural Information Processing Systems 22, 2009
Cited by 6 (3 self)
Abstract:
While many perceptual and cognitive phenomena are well described in terms of Bayesian inference, the necessary computations are intractable at the scale of real-world tasks, and it remains unclear how the human mind approximates Bayesian computations algorithmically. We explore the proposal that for some tasks, humans use a form of Markov Chain Monte Carlo to approximate the posterior distribution over hidden variables. As a case study, we show how several phenomena of perceptual multistability can be explained as MCMC inference in simple graphical models for low-level vision.
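To make the proposal concrete, here is a minimal illustrative sketch (ours, not the paper's model): a Metropolis sampler exploring a bimodal posterior over a one-dimensional hidden variable, where the two modes stand in for two rival percepts. The chain spends runs of time near one mode before stochastically switching to the other, the sampling analogue of perceptual alternation.

```python
import math
import random

# Two equally plausible "percepts" = two modes of a toy posterior.
def log_posterior(x):
    p = 0.5 * math.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * math.exp(-0.5 * (x + 2.0) ** 2)
    return math.log(p)

def metropolis(n_steps, step_sd=2.0, seed=0):
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_sd)
        # Accept with probability min(1, p(proposal) / p(x)).
        if rng.random() < min(1.0, math.exp(log_posterior(proposal) - log_posterior(x))):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis(20000)
# Mode switches play the role of perceptual alternations.
switches = sum(1 for a, b in zip(chain, chain[1:]) if (a > 0) != (b > 0))
frac_right = sum(1 for x in chain if x > 0) / len(chain)
```

Because the two modes carry equal posterior mass, a well-mixing chain should split its time roughly evenly between them while still alternating in discrete runs.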
The Neural Costs of Optimal Control
Cited by 2 (2 self)
Abstract:
Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.
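The "lower bound on the log expected utility" in this conjecture follows from Jensen's inequality. The numerical check below (a toy discrete example of ours, not the paper's population-code model) shows that any approximating distribution q yields a valid lower bound, and that the bound becomes tight when q(s) is proportional to p(s)U(s).

```python
import math

# Toy discrete example: prior over three world states and the utility of one action.
p = [0.5, 0.3, 0.2]
U = [1.0, 4.0, 9.0]

log_EU = math.log(sum(pi * ui for pi, ui in zip(p, U)))

# Jensen: log sum_s p(s)U(s) = log E_q[p(s)U(s)/q(s)] >= E_q[log(p(s)U(s)/q(s))].
def lower_bound(q):
    return sum(qi * math.log(pi * ui / qi) for qi, pi, ui in zip(q, p, U) if qi > 0)

bound_uniform = lower_bound([1 / 3] * 3)

# The bound is tight when q(s) is proportional to p(s) * U(s).
Z = sum(pi * ui for pi, ui in zip(p, U))
bound_opt = lower_bound([pi * ui / Z for pi, ui in zip(p, U)])
```

Maximizing the bound over q thus concentrates representational resources on states where probability and utility are jointly large, which is the intuition behind the resource-constrained population code.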
Is perceptual acuity asymmetric in isolated word recognition? Evidence from an ideal-observer reverse-engineering approach
In Proceedings of the …, 2010
Cited by 1 (1 self)
Abstract:
An asymmetrical optimal viewing position (OVP) effect in isolated word recognition has been well documented, such that recognition speed and accuracy are highest when the point of fixation within the word is slightly to the left of center. However, there remains disagreement as to the source of the asymmetry in the OVP effect. One leading explanation is that perceptual acuity in isolated word recognition is asymmetric, falling off more rapidly to the left than to the right. An alternative explanation is that of lexical constraint: perceptual acuity may be symmetric, but the distributional statistics of the lexicon are such that letters near the beginning of a word are, on average, of greater value in discriminating word identity than letters near the end. On both accounts, a left-of-center fixation point optimizes the efficient accrual of
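The lexical-constraint idea can be illustrated with a toy computation (our illustration, using a hypothetical mini-lexicon): the entropy of the letter distribution at each position measures how informative that position is about word identity, and in many lexicons early positions carry more information than late ones.

```python
import math
from collections import Counter

# Hypothetical mini-lexicon of five-letter words.
lexicon = ["stone", "stove", "stoke", "plane", "plate", "place", "grape", "grade"]

def position_entropy(words, i):
    # Shannon entropy (bits) of the letter distribution at position i.
    counts = Counter(w[i] for w in words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

entropies = [position_entropy(lexicon, i) for i in range(5)]
```

In this toy lexicon every word ends in 'e', so the final position carries no information at all, while the first letter already discriminates three word families; a fixation policy that weights positions by informativeness would therefore land left of center even under symmetric acuity.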
Preschoolers sample from probability distributions
Cited by 1 (1 self)
Abstract:
Researchers in both educational and developmental psychology have suggested that children are not particularly adept hypothesis testers, and that their behavior can often appear irrational. However, a growing body of research also suggests that people do engage in rational inference on a variety of tasks. Recently, researchers have begun testing the idea that reasoners may be sampling hypotheses from an internal probability distribution when making inferences. If children are reasoning in this way, this might help to explain some seemingly irrational behavior seen in previous experiments. Forty 4-year-olds were tested on a probabilistic inference task that required them to make repeated guesses about which of two types of blocks had been randomly sampled from a population. The results suggest that children can sample from a probability distribution, as evidenced by the fact that, as a group, they engaged in probability matching and that the dependency between successive guesses decreased over time.
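A sampling reasoner of the kind hypothesized here can be simulated in a few lines (a schematic model of ours, not the experimental protocol): the agent guesses each option with probability equal to its current posterior, which produces probability matching rather than always picking the more likely option.

```python
import random

def posterior_red(n_red, n_blue):
    # Posterior predictive that the next block is red, under a uniform
    # prior over the population's composition (Laplace's rule of succession).
    return (n_red + 1) / (n_red + n_blue + 2)

def sampling_reasoner(n_trials, true_p_red=0.75, seed=1):
    rng = random.Random(seed)
    n_red = n_blue = 0
    guesses = []
    for _ in range(n_trials):
        # Guess "red" with probability equal to the current posterior,
        # i.e. sample a hypothesis rather than maximize over hypotheses.
        guesses.append(rng.random() < posterior_red(n_red, n_blue))
        # Then observe an actual draw from the population.
        if rng.random() < true_p_red:
            n_red += 1
        else:
            n_blue += 1
    return guesses

guesses = sampling_reasoner(2000)
frac_red = sum(guesses) / len(guesses)
```

A maximizing agent would guess "red" on essentially every trial once the posterior exceeds one half; the sampler instead converges on guessing "red" about 75% of the time, matching the population proportion.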
Don’t Stop ‘Til You Get Enough: Adaptive Information Sampling in a Visuomotor Estimation Task
Cited by 1 (1 self)
Abstract:
We investigated how subjects sample information in order to improve performance in a visuomotor estimation task. Subjects were rewarded for touching a hidden circular target based on visual cues to the target's location. The cues were 'dots' drawn from a Gaussian distribution centered on the middle of the target. Subjects could sample as many cues as they wished, but the potential reward for hitting the target decreased by a fixed amount for each additional cue requested. The subjects' objective was to balance the benefits of increased information against the costs incurred in acquiring it. We compared human performance to the ideal and found that subjects sampled more cues than dictated by the optimal stopping rule that maximizes expected gain. We contrast our results with recent reports in the literature that subjects typically undersample.
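The optimal stopping rule that subjects are compared against can be sketched as follows (our simplified version with made-up parameters, not the paper's exact task): with n independent isotropic 2-D Gaussian cues of standard deviation sigma, the radial error of the cue mean is Rayleigh distributed with scale sigma/sqrt(n), giving a closed-form hit probability to trade off against the per-cue cost.

```python
import math

def p_hit(n, sigma=1.0, radius=0.5):
    # P(mean of n isotropic 2-D Gaussian cues lands within `radius` of
    # the target centre); the mean's radial error is Rayleigh-distributed
    # with scale sigma / sqrt(n).
    return 1.0 - math.exp(-(radius ** 2) * n / (2.0 * sigma ** 2))

def expected_gain(n, reward=100.0, cost_per_cue=2.0):
    # Reward shrinks linearly with each extra cue requested.
    return (reward - cost_per_cue * n) * p_hit(n)

# Optimal stopping: request the n that maximizes expected gain.
best_n = max(range(1, 50), key=expected_gain)
```

Both undersampling (stopping before best_n) and oversampling (as reported here) cost expected gain; the comparison to ideal is just the gap between a subject's chosen number of cues and best_n.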
How many kinds of reasoning? Inference, probability, and natural language semantics
In Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012
Cited by 1 (1 self)
Abstract:
… & Heit, 2009) has suggested that differences between inductive and deductive reasoning cannot be explained by probabilistic theories, and instead support two-process accounts of reasoning. We provide a probabilistic model that predicts the observed nonlinearities and makes quantitative predictions about responses as a function of argument strength. Predictions were tested using a novel experimental paradigm that elicits the previously reported response patterns with a minimal manipulation, changing only one word between conditions. We also found a good fit with the quantitative model predictions, indicating that a probabilistic theory of reasoning can account in a clear and parsimonious way for qualitative and quantitative data previously argued to falsify such theories. We also relate our model to recent work in linguistics, arguing that careful attention to the semantics of the language used to pose reasoning problems will sharpen the questions asked in the psychology of reasoning.
Select and Sample — A Model of Efficient Neural Inference and Learning
Cited by 1 (0 self)
Abstract:
An increasing number of experimental studies indicate that perception encodes a posterior probability distribution over possible causes of sensory stimuli, which is used to act close to optimally in the environment. One outstanding difficulty with this hypothesis is that the exact posterior will in general be too complex to be represented directly, and thus neurons will have to represent an approximation of this distribution. Two influential proposals for efficient posterior representation by neural populations are that (1) neural activity represents samples of the underlying distribution, or (2) it represents the parameters of a variational approximation of the posterior. We show that these approaches can be combined into an inference scheme that retains the advantages of both: it is able to represent multiple modes and arbitrary correlations, a feature of sampling methods, and it reduces the represented space to regions of high probability mass, a strength of variational approximations. Neurally, the combined method can be interpreted as a feedforward preselection of the relevant state space, followed by a neural dynamics
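The two-stage scheme can be caricatured in a few lines (our toy sketch; the paper's actual model and state space differ): a cheap preselection keeps only a handful of high-probability latent states, and exact sampling then runs on that reduced space.

```python
import math
import random

def select_and_sample(log_post, states, keep=5, n_samples=1000, seed=0):
    rng = random.Random(seed)
    # Step 1 (select): keep the `keep` highest-scoring states. (Here we
    # score with log_post itself; in a real model this would be a cheap
    # feedforward / variational approximation.)
    selected = sorted(states, key=log_post, reverse=True)[:keep]
    # Step 2 (sample): draw posterior samples restricted to the selected
    # region, renormalizing within it.
    weights = [math.exp(log_post(s)) for s in selected]
    return rng.choices(selected, weights=weights, k=n_samples)

def log_post(x):
    return -0.5 * (x - 3) ** 2  # unimodal toy posterior over integers

samples = select_and_sample(log_post, list(range(-10, 11)))
mean = sum(samples) / len(samples)
```

The samples can still capture multiple modes and correlations within the selected region, while the preselection bounds the cost of sampling by shrinking the space that must be explored.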
Perception, Action and Utility: The Tangled Skein, 2011
Abstract:
Normative theories of learning and decision-making are motivated by a computational-level analysis of the task facing an animal: what should the animal do to maximize future reward? However, much of the recent excitement in this field originates in how the animal arrives at its decisions and reward predictions—algorithmic questions about which the computational-level analysis is silent.
Perception, Action, and Utility: The Tangled Skein
Abstract:
Statistical decision theory seems to offer a clear framework for the integration of perception and action. In particular, it defines the problem of maximizing the utility of one's decisions in terms of two subtasks: inferring the likely state of the world, and tracking the utility that would result from different candidate actions in different states. This computational-level description underpins more process-level research in neuroscience about the brain's dynamic mechanisms for, on the one hand, inferring states and, on the other hand, learning action values. However, a number of different strands of recent work on this more algorithmic level have cast doubt on the basic shape of the decision-theoretic formulation, specifically the clean separation between states' probabilities and utilities. We consider the complex interrelationship between perception, action, and utility implied by these accounts. Normative theories of learning and decision making are motivated by a computational-level analysis of the task facing an organism: What should
Multistability and Perceptual Inference
Communicated by Nando de Freitas
Abstract:
Ambiguous images present a challenge to the visual system: How can uncertainty about the causes of visual inputs be represented when there are multiple equally plausible causes? A Bayesian ideal observer should represent uncertainty in the form of a posterior probability distribution over causes. However, in many real-world situations, computing this distribution is intractable and requires some form of approximation. We argue that the visual system approximates the posterior over underlying causes with a set of samples and that this approximation strategy produces perceptual multistability—stochastic alternation between percepts in consciousness. Under our analysis, multistability arises from a dynamic sample-generating process that explores the posterior through stochastic diffusion, implementing a rational form of approximate Bayesian inference known as Markov chain Monte Carlo (MCMC). We examine in detail the most extensively studied form of multistability, binocular rivalry, showing how a variety of experimental phenomena—gamma-like stochastic switching, patchy percepts, fusion, and traveling waves—can be understood in terms of MCMC sampling over simple graphical models of the underlying perceptual tasks. We conjecture that the stochastic nature of spiking neurons may lend itself to implementing sample-based posterior approximations in the brain.
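As a cartoon of the patchy-percept and switching phenomena (our toy sketch, not the paper's graphical models): Gibbs sampling on a small chain of binary patch variables, where +1 and -1 encode left-eye and right-eye dominance and neighbouring patches prefer to agree, produces transient mixed "patchy" states and stochastic global alternations.

```python
import math
import random

def gibbs_rivalry(n_patches=8, coupling=1.0, n_sweeps=5000, seed=0):
    rng = random.Random(seed)
    s = [1] * n_patches            # +1 = left-eye percept, -1 = right-eye
    net_dominance = []
    for _ in range(n_sweeps):
        for i in range(n_patches):
            # Gibbs update: conditional of patch i given its chain neighbours.
            field = coupling * sum(s[j] for j in (i - 1, i + 1) if 0 <= j < n_patches)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
        net_dominance.append(sum(s))   # net dominance across patches
    return net_dominance

m = gibbs_rivalry()
# Sign changes of the net dominance = global perceptual switches.
switches = sum(1 for a, b in zip(m, m[1:]) if (a > 0) != (b > 0))
```

Raising the coupling makes switches rarer and percepts more coherent; weakening it yields more fragmented, patchy states, echoing the qualitative phenomena the abstract describes.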