Results 1–9 of 9
Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons (2011). PLoS Comput Biol 7: e1002211
Abstract

Cited by 37 (4 self)
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical …
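The sampling interpretation described above can be illustrated in a much reduced form. The sketch below uses plain Gibbs sampling over binary units of a small Boltzmann distribution; note that the paper explicitly replaces Gibbs sampling with non-reversible chains to match spiking dynamics, and the network size, weights `W`, and biases `b` here are arbitrary illustration values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-unit Boltzmann distribution p(z) ∝ exp(z·W·z/2 + b·z).
n = 5
W = rng.normal(0.0, 0.5, (n, n))
W = (W + W.T) / 2.0                 # symmetric weights
np.fill_diagonal(W, 0.0)            # no self-connections
b = rng.normal(0.0, 0.5, n)

def gibbs_sample(W, b, steps=20000):
    """Sample p(z) by resampling each binary unit from its conditional:
    a unit 'fires' with sigmoidal probability of its net input."""
    z = rng.integers(0, 2, len(b))
    samples = np.empty((steps, len(b)), dtype=int)
    for t in range(steps):
        for k in range(len(b)):
            u = W[k] @ z + b[k]                        # membrane-potential analogue
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u))
        samples[t] = z
    return samples

samples = gibbs_sample(W, b)
marginals = samples[5000:].mean(axis=0)   # marginal firing rates after burn-in
```

Long-run firing rates (the marginals) then approximate the target distribution, which is the sense in which stochastic activity can implement inference.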
Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons
, 2011
Abstract

Cited by 8 (2 self)
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
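"Explaining away" in a Bayesian network with converging arrows can be made concrete with the classic two-cause alarm example, here inferred by simple rejection sampling. The network, its probabilities, and the noisy-OR likelihood below are a standard textbook illustration, not the paper's model:

```python
import random

random.seed(1)

# Two independent causes (burglary, earthquake) converge on one effect (alarm).
def sample_world():
    b = random.random() < 0.1                       # P(burglary) = 0.1
    e = random.random() < 0.1                       # P(earthquake) = 0.1
    # Noisy-OR likelihood with a small leak probability.
    p_alarm = 1.0 - 0.95 * (1.0 - 0.9 * b) * (1.0 - 0.9 * e)
    a = random.random() < p_alarm
    return b, e, a

def posterior_burglary(earthquake_observed, n=200000):
    """Estimate P(burglary | alarm, earthquake state) by rejection sampling."""
    hits = total = 0
    for _ in range(n):
        b, e, a = sample_world()
        if a and e == earthquake_observed:
            total += 1
            hits += b
    return hits / total

p_no_quake = posterior_burglary(False)   # alarm observed, no earthquake
p_quake = posterior_burglary(True)       # alarm and earthquake observed
# Observing the earthquake "explains away" the alarm, lowering P(burglary).
```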
The Neural Costs of Optimal Control
Abstract

Cited by 5 (2 self)
Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.
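The objective conjectured above, a lower bound on the log expected utility, follows from Jensen's inequality: log E_p[U] ≥ E_q[log U] − KL(q‖p) for any approximating distribution q. A minimal numerical check, with made-up state probabilities and utilities:

```python
import numpy as np

# Hypothetical three-state world: true distribution p, approximation q,
# and the utility U of acting optimally in each state (all values invented).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.35, 0.25])
U = np.array([1.0, 4.0, 9.0])

log_expected_utility = np.log(p @ U)          # log E_p[U]
kl_q_p = q @ np.log(q / p)                    # KL(q || p) >= 0
lower_bound = q @ np.log(U) - kl_q_p          # E_q[log U] - KL(q || p)
# The bound is tight when q matches the utility-weighted posterior.
```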
Perception, Action and Utility: The Tangled Skein
, 2011
Abstract

Cited by 4 (0 self)
Normative theories of learning and decision-making are motivated by a computational-level analysis of the task facing an animal: what should the animal do to maximize future reward? However, much of the recent excitement in this field originates in how the animal arrives at its decisions and reward predictions—algorithmic questions about which the computational-level analysis is silent.
Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability
Abstract

Cited by 1 (1 self)
It has been argued that perceptual multistability reflects probabilistic inference performed by the brain when sensory input is ambiguous. Alternatively, more traditional explanations of multistability refer to low-level mechanisms such as neuronal adaptation. We employ a Deep Boltzmann Machine (DBM) model of cortical processing to demonstrate that these two different approaches can be combined in the same framework. Based on recent developments in machine learning, we show how neuronal adaptation can be understood as a mechanism that improves probabilistic, sampling-based inference. Using the ambiguous Necker cube image, we analyze the perceptual switching exhibited by the model. We also examine the influence of spatial attention, and explore how binocular rivalry can be modeled with the same approach. Our work joins earlier studies in demonstrating how the principles underlying DBMs relate to cortical processing, and offers novel perspectives on the neural implementation of approximate probabilistic inference in the brain.
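How adaptation can drive perceptual switches in sampling-based inference can be sketched with a two-state energy model, far simpler than the paper's DBM: a fatigue variable (a hypothetical stand-in for neuronal adaptation) accumulates for the currently perceived interpretation, raising its energy until a Metropolis-style sampler switches. The barrier depth, gain, and decay constants below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(steps=50000, depth=2.0, gain=0.02, decay=0.995):
    """Sample between two interpretations of an ambiguous image (e.g. the
    Necker cube's two depth orderings) with adaptation of the active state."""
    state = 0
    fatigue = np.zeros(2)
    trace = np.empty(steps, dtype=int)
    for t in range(steps):
        other = 1 - state
        # Energy cost of switching: barrier minus net accumulated fatigue.
        dE = depth + fatigue[other] - fatigue[state]
        if rng.random() < min(1.0, np.exp(-dE)):
            state = other
        fatigue[state] += gain       # adaptation builds on the active state
        fatigue *= decay             # and slowly decays everywhere
        trace[t] = state
    return trace

trace = simulate()
switches = int(np.count_nonzero(np.diff(trace)))   # perceptual reversals
```

Without the fatigue term the sampler would dwell in one well indefinitely at large `depth`; adaptation is what produces the alternation.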
Perception, Action, and Utility: The Tangled Skein
Abstract
Statistical decision theory seems to offer a clear framework for the integration of perception and action. In particular, it defines the problem of maximizing the utility of one's decisions in terms of two subtasks: inferring the likely state of the world, and tracking the utility that would result from different candidate actions in different states. This computational-level description underpins more process-level research in neuroscience about the brain's dynamic mechanisms for, on the one hand, inferring states and, on the other hand, learning action values. However, a number of different strands of recent work on this more algorithmic level have cast doubt on the basic shape of the decision-theoretic formulation, specifically the clean separation between states' probabilities and utilities. We consider the complex interrelationship between perception, action, and utility implied by these accounts. Normative theories of learning and decision making are motivated by a computational-level analysis of the task facing an organism: What should …
Predictive context influences perceptual selection during …
, 2011
On the Nature and Origin of Intuitive Theories: Learning, Physics and Psychology
, 2015
Abstract
This thesis develops formal computational models of intuitive theories, in particular intuitive physics and intuitive psychology, which form the basis of commonsense reasoning. The overarching formal framework is that of hierarchical Bayesian models, which see the mind as having domain-specific hypotheses about how the world works. The work first extends models of intuitive psychology to include higher-level social utilities, arguing against a pure 'classifier' view. Second, the work extends models of intuitive physics by introducing an ontological hierarchy of physics concepts, and examining how well people can reason about novel dynamic displays. I then examine the question of learning intuitive theories in general, arguing that an algorithmic approach based on stochastic search can address several puzzles of learning, including the 'chicken and egg' problem of concept learning. Finally, I argue the need for a joint theory-space for reasoning about intuitive physics and intuitive psychology, and provide such a simplified space in the form of a generative model for a novel domain.
Theory Learning as Stochastic Search in a Language of Thought
Abstract
We present an algorithmic model for the development of children’s intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. We contrast our approach with connectionist and other emergentist approaches to modeling cognitive development: while their subsymbolic representations provide a smooth error surface that supports efficient gradient-based learning, our symbolic representations are better suited to capturing children’s intuitive theories but give rise to a harder learning problem, which can only be solved by exploratory search. Our algorithm attempts to discover the theory that best explains a set of observed data by performing stochastic search at two levels of abstraction: an outer loop in the space of theories, and an inner loop in the space of explanations or models generated by each theory given a particular dataset. We show that this stochastic search is capable of learning appropriate theories in several everyday domains, and discuss its dynamics in the context of empirical studies of children’s learning.
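The two-level stochastic search can be caricatured with a toy outer loop over theories, here subsets of candidate logical laws scored by a fit-minus-complexity objective. The law space, scores, and acceptance rule are invented for illustration, and the inner loop over explanations is collapsed into the score function:

```python
import random

random.seed(3)

CANDIDATE_LAWS = ["A->B", "B->C", "C->A", "A->C"]   # hypothetical law space
TRUE_LAWS = {"A->B", "B->C"}                        # laws behind the "data"

def score(theory):
    """Fit to the data-generating laws minus a description-length penalty
    (stands in for the inner loop that scores explanations given a theory)."""
    fit = len(theory & TRUE_LAWS) - len(theory - TRUE_LAWS)
    return fit - 0.1 * len(theory)

def stochastic_search(steps=2000):
    theory, best = set(), set()
    best_score = score(best)
    for _ in range(steps):
        law = random.choice(CANDIDATE_LAWS)
        proposal = theory ^ {law}        # propose toggling one law in or out
        # Accept improvements, plus occasional downhill moves for exploration.
        if score(proposal) >= score(theory) or random.random() < 0.1:
            theory = proposal
        if score(theory) > best_score:
            best, best_score = set(theory), score(theory)
    return best

best_theory = stochastic_search()
```

The occasional acceptance of worse theories is what lets the search escape local optima, the role the abstract assigns to exploratory (rather than gradient-based) search over symbolic representations.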