Results 1 - 6 of 6
Buesing L, Bill J, Nessler B, Maass W (2011) Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Comput Biol 7: e1002211
Abstract

Cited by 4 (2 self)
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical …
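As a point of contrast for the non-reversible chains this abstract describes, the standard Gibbs sampling it mentions can be sketched in a few lines for a toy two-unit Boltzmann distribution. This is a minimal non-spiking analogue, not the paper's model; the biases, weight, and sample count below are illustrative choices.

```python
import math
import random

random.seed(0)

# Toy Boltzmann distribution over two binary units:
# p(z1, z2) ∝ exp(b1*z1 + b2*z2 + w*z1*z2)  (parameters are made up)
b = [0.5, -0.3]
w = 1.2

def gibbs_sweep(z):
    """Resample each unit given the other (reversible Gibbs sampling;
    the paper's spiking-network chain is non-reversible, unlike this)."""
    for i in (0, 1):
        j = 1 - i
        u = b[i] + w * z[j]                 # net input to unit i
        p_on = 1.0 / (1.0 + math.exp(-u))   # logistic conditional
        z[i] = 1 if random.random() < p_on else 0
    return z

# Run the chain and estimate p(z1=1, z2=1) from samples.
z, count, n = [0, 0], 0, 50000
for _ in range(n):
    z = gibbs_sweep(z)
    count += z[0] * z[1]

# Exact value by brute-force normalization, for comparison.
states = [(a, c) for a in (0, 1) for c in (0, 1)]
Z = sum(math.exp(b[0]*a + b[1]*c + w*a*c) for a, c in states)
exact = math.exp(b[0] + b[1] + w) / Z
print(round(count / n, 2), round(exact, 2))
```

Over many sweeps the empirical frequency of each joint state approaches the target probability, which is the sense in which network dynamics can "implement" inference by sampling.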
The Neural Costs of Optimal Control
Abstract

Cited by 2 (2 self)
Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.
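The paper's specific variational bound is not given in the abstract; a generic Jensen-style lower bound on the log expected utility conveys the flavor of the idea. All probabilities and utilities below are made-up illustrative values, and q stands in for a resource-limited approximate representation of p.

```python
import math

# Hypothetical discrete world states with true probabilities p
# and utilities U (illustrative numbers only).
p = [0.2, 0.5, 0.3]
U = [1.0, 4.0, 2.0]

# An approximate representation q of p, e.g. a constrained code.
q = [0.3, 0.4, 0.3]

# Exact log expected utility under the true distribution.
log_EU = math.log(sum(pi * ui for pi, ui in zip(p, U)))

# Jensen's inequality (log is concave) gives the lower bound
#   log E_p[U] >= sum_x q(x) * log(p(x) * U(x) / q(x)),
# valid whenever q(x) > 0 wherever p(x)*U(x) > 0.
bound = sum(qi * math.log(pi * ui / qi) for qi, pi, ui in zip(q, p, U))

print(round(log_EU, 3), round(bound, 3))
```

The gap between the two quantities shrinks as q better matches the utility-weighted posterior, which is why maximizing such a bound over q trades representational accuracy against the cost of the code.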
Perception, Action and Utility: The Tangled Skein
, 2011
Abstract
Normative theories of learning and decision-making are motivated by a computational-level analysis of the task facing an animal: what should the animal do to maximize future reward? However, much of the recent excitement in this field originates in how the animal arrives at its decisions and reward predictions—algorithmic questions about which the computational-level analysis is silent.
Perception, Action, and Utility: The Tangled Skein
Abstract
Statistical decision theory seems to offer a clear framework for the integration of perception and action. In particular, it defines the problem of maximizing the utility of one’s decisions in terms of two subtasks: inferring the likely state of the world, and tracking the utility that would result from different candidate actions in different states. This computational-level description underpins more process-level research in neuroscience about the brain’s dynamic mechanisms for, on the one hand, inferring states and, on the other hand, learning action values. However, a number of different strands of recent work on this more algorithmic level have cast doubt on the basic shape of the decision-theoretic formulation, specifically the clean separation between states’ probabilities and utilities. We consider the complex interrelationship between perception, action, and utility implied by these accounts. Normative theories of learning and decision making are motivated by a computational-level analysis of the task facing an organism: What should …
Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability
Abstract
It has been argued that perceptual multistability reflects probabilistic inference performed by the brain when sensory input is ambiguous. Alternatively, more traditional explanations of multistability refer to low-level mechanisms such as neuronal adaptation. We employ a Deep Boltzmann Machine (DBM) model of cortical processing to demonstrate that these two different approaches can be combined in the same framework. Based on recent developments in machine learning, we show how neuronal adaptation can be understood as a mechanism that improves probabilistic, sampling-based inference. Using the ambiguous Necker cube image, we analyze the perceptual switching exhibited by the model. We also examine the influence of spatial attention, and explore how binocular rivalry can be modeled with the same approach. Our work joins earlier studies in demonstrating how the principles underlying DBMs relate to cortical processing, and offers novel perspectives on the neural implementation of approximate probabilistic inference in the brain.
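The interaction of sampling and adaptation this abstract describes can be caricatured without a full DBM: below, two mutually exclusive percepts of a biased stimulus are sampled, and an adaptation variable penalizes whichever percept has recently dominated. Everything here (the two-percept setup, the bias, decay, and gain values) is an illustrative assumption, not the paper's architecture.

```python
import math
import random

# Toy caricature of sampling-based bistability: two mutually exclusive
# percepts of a stimulus whose evidence slightly favors percept 0.
BIAS = [1.5, 0.0]        # illustrative stimulus evidence per percept
DECAY, RATE = 0.9, 0.1   # adaptation dynamics (illustrative values)

def count_switches(adapt_gain, n=5000, seed=0):
    """Sample percepts for n steps; return how often the percept flips.

    adapt_gain scales how strongly adaptation discounts the evidence
    for a recently active percept (0 disables adaptation entirely).
    """
    rng = random.Random(seed)
    adapt = [0.0, 0.0]
    cur, switches = 0, 0
    for _ in range(n):
        # Evidence for each percept, discounted by its adaptation level.
        e = [BIAS[i] - adapt_gain * adapt[i] for i in (0, 1)]
        p0 = 1.0 / (1.0 + math.exp(-(e[0] - e[1])))  # two-way softmax
        new = 0 if rng.random() < p0 else 1
        # Adaptation builds for the active percept and decays otherwise.
        for i in (0, 1):
            adapt[i] = DECAY * adapt[i] + (RATE if i == new else 0.0)
        switches += (new != cur)
        cur = new
    return switches

# With adaptation, the favored percept loses its dominance sooner, so
# the sampler alternates more than a memoryless sampler with same bias.
print(count_switches(adapt_gain=2.0), count_switches(adapt_gain=0.0))
```

The qualitative point matches the abstract's claim: adaptation reshapes the sampler's trajectory, shortening dominance periods and producing the alternation seen in bistable perception, while the underlying inference remains sampling-based.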