Results 1–10 of 78
Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells
 J. Neurophysiol
, 1998
"... such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity ..."
Abstract

Cited by 77 (6 self)
such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population and, second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods. Two main goals for reconstruction are approached in this paper. The first goal is technical and is exemplified by the population vector method applied to motor cortical activities during various reaching tasks (Georgopoulos et al. 1986, 1989; Schwartz 1994) and the template matching method applied to disparity selective cells in the visual cortex (Lehky and Sejnowski 1990) and hippocampal place cells during rapid learning of place fields in a novel environment (Wilson and McNaughton 1993). In these examples, reconstruction extracts information from noisy neuronal population activity and transforms it to a
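The probabilistic (Bayesian) reconstruction this entry describes can be sketched in a few lines for place cells. Everything below is an illustrative assumption rather than the paper's setup: Gaussian tuning curves on a 1-D track, independent Poisson spike counts, and a flat prior over position.

```python
import numpy as np

# Illustrative decoder: 1-D track, Gaussian place fields, Poisson spiking.
positions = np.linspace(0.0, 1.0, 100)        # candidate locations x
centers = np.linspace(0.0, 1.0, 20)           # assumed place-field centers
width, peak, tau = 0.05, 20.0, 0.5            # field width, peak rate (Hz), window (s)
tuning = peak * np.exp(-((positions[None, :] - centers[:, None]) ** 2) / (2 * width ** 2))

def decode(counts):
    """Posterior P(x | n) over position from spike counts n_i (flat prior)."""
    # log P(n | x) for independent Poisson neurons, dropping the n_i! terms
    log_like = (counts[:, None] * np.log(tuning * tau + 1e-12) - tuning * tau).sum(axis=0)
    post = np.exp(log_like - log_like.max())
    return post / post.sum()

rng = np.random.default_rng(0)
true_x = 0.3
rates = peak * np.exp(-((true_x - centers) ** 2) / (2 * width ** 2))
post = decode(rng.poisson(rates * tau))       # posterior over the track
x_hat = positions[post.argmax()]              # MAP position estimate
```

The same posterior also yields the information-theoretic quantities the abstract mentions; its entropy, for example, measures how tightly the population constrains position.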
Bayesian computation in recurrent neural circuits
 Neural Computation
, 2004
"... A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implem ..."
Abstract

Cited by 59 (4 self)
A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework introduced in the paper posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
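The computation attributed to the network here, Bayesian inference for a hidden Markov model, reduces to the standard forward filtering recursion. The two-state model below (sticky transitions, observations favoring one motion direction) is an invented toy, not the paper's network.

```python
import numpy as np

def forward_filter(prior, transition, likelihoods):
    """Filtered posteriors P(state_t | obs_1..t) for a discrete HMM."""
    belief = prior.copy()
    out = []
    for like in likelihoods:                   # like[s] = P(obs_t | state s)
        belief = like * (transition.T @ belief)  # predict, then weight by evidence
        belief /= belief.sum()                 # renormalize to a posterior
        out.append(belief.copy())
    return np.array(out)

prior = np.array([0.5, 0.5])                   # two motion directions
T = np.array([[0.9, 0.1],                      # sticky state transitions
              [0.1, 0.9]])
obs = [np.array([0.8, 0.2])] * 5               # evidence favoring direction 0
post = forward_filter(prior, T, obs)           # belief in direction 0 grows over time
```

In the paper's interpretation, firing rates would track the logarithm of these posteriors rather than the posteriors themselves.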
Statistically Efficient Estimation Using Population Coding
, 1998
"... Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible ..."
Abstract

Cited by 57 (9 self)
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
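The population vector method criticized here is itself only a few lines. This sketch assumes cosine-tuned neurons with evenly spaced preferred directions, an idealized noise-free setup chosen so that the estimator recovers the encoded angle exactly.

```python
import numpy as np

def population_vector(preferred, rates):
    """Direction estimate: vector sum of preferred directions weighted by rate."""
    x = (rates * np.cos(preferred)).sum()
    y = (rates * np.sin(preferred)).sum()
    return np.arctan2(y, x)

preferred = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)  # 16 neurons
theta = 1.0                                         # encoded direction (rad)
rates = np.maximum(np.cos(preferred - theta), 0.0)  # rectified cosine tuning
estimate = population_vector(preferred, rates)      # equals theta in this noise-free case
```

With Poisson noise added to `rates`, the variance of this estimator generally exceeds the Cramér-Rao bound, which is the inefficiency the abstract refers to.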
Map-based navigation in mobile robots. I. A review of localization strategies
, 2003
"... For a robot, an animal, and even for man, to be able to use an internal representation of the spatial layout of its environment to position itself is a very complex task, which raises numerous issues of perception, categorization and motor control that must all be solved in an integrated manner to p ..."
Abstract

Cited by 33 (11 self)
For a robot, an animal, and even for man, to be able to use an internal representation of the spatial layout of its environment to position itself is a very complex task, which raises numerous issues of perception, categorization and motor control that must all be solved in an integrated manner to promote survival. This point is illustrated here, within the framework of a review of localization strategies in mobile robots. The allothetic and idiothetic sensors that may be used by these robots to build internal representations of their environment, and the maps in which these representations may be instantiated, are first described. Then map-based navigation systems are categorized according to a 3-level hierarchy of localization strategies, which respectively call upon direct position inference, single-hypothesis tracking, and multiple-hypothesis tracking. The advantages and drawbacks of these strategies, notably with respect to the limitations of the sensors on which they rely, are discussed throughout the text.
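The multiple-hypothesis tracking level of this hierarchy can be sketched as a grid-based Bayes filter that keeps a full belief distribution over discrete cells. The motion kernel and landmark likelihood below are invented toy values.

```python
import numpy as np

def bayes_filter_step(belief, motion_kernel, likelihood):
    """One predict/correct cycle of a histogram (grid) Bayes filter."""
    predicted = np.convolve(belief, motion_kernel, mode="same")  # motion blurs belief
    posterior = predicted * likelihood                           # sensor reweights it
    return posterior / posterior.sum()

belief = np.full(10, 0.1)            # uniform prior over 10 cells
motion = np.array([0.1, 0.8, 0.1])   # noisy "mostly stay put" motion model
like = np.ones(10)
like[3] = 5.0                        # landmark observation favors cell 3
belief = bayes_filter_step(belief, motion, like)
```

In these terms, single-hypothesis tracking corresponds to collapsing the belief to its argmax after every step, while direct position inference drops the motion prediction entirely.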
Doubly distributional population codes: Simultaneous representation of uncertainty and multiplicity
, 2003
"... Perceptual inference fundamentally involves uncertainty, arising from noise in sensation and the ill-posed nature of many perceptual problems. Accurate perception requires that this uncertainty be correctly represented, manipulated, and learned about. The choices made by subjects in various psychoph ..."
Abstract

Cited by 28 (7 self)
Perceptual inference fundamentally involves uncertainty, arising from noise in sensation and the ill-posed nature of many perceptual problems. Accurate perception requires that this uncertainty be correctly represented, manipulated, and learned about. The choices made by subjects in various psychophysical experiments suggest that they do indeed take such uncertainty into account when making perceptual inferences, posing the question as to how uncertainty is represented in the activities of neuronal populations. Most theoretical investigations of population coding have ignored this issue altogether; the few existing proposals that address it do so in such a way that it is fatally conflated with another facet of perceptual problems that also needs correct handling, namely multiplicity (that is, the simultaneous presence of multiple distinct stimuli). We present and validate a more powerful proposal for the way that population activity may encode uncertainty, both distinctly from, and simultaneously with, multiplicity.
Inference, attention, and decision in a Bayesian neural architecture
 Advances in Neural Information Processing Systems 17
, 2005
"... We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an ex ..."
Abstract

Cited by 25 (4 self)
We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed, which implements Bayesian integration of noisy sensory input and top-down attentional priors, leading to sound perceptual discrimination. The model offers an explicit explanation for the experimentally observed modulation that prior information in one stimulus feature (location) can have on an independent feature (orientation). The network’s intermediate levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation of cortical and neuromodulatory representations of uncertainty.
Channel smoothing: Efficient robust smoothing of low-level signal features
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 2006
"... In this paper, we present a new and efficient method to implement robust smoothing of low-level signal features: B-spline channel smoothing. This method consists of three steps: encoding of the signal features into channels, averaging of the channels, and decoding of the channels. We show that line ..."
Abstract

Cited by 19 (10 self)
In this paper, we present a new and efficient method to implement robust smoothing of low-level signal features: B-spline channel smoothing. This method consists of three steps: encoding of the signal features into channels, averaging of the channels, and decoding of the channels. We show that linear smoothing of channels is equivalent to robust smoothing of the signal features if we make use of quadratic B-splines to generate the channels. The linear decoding from B-spline channels allows the derivation of a robust error norm, which is very similar to Tukey’s biweight error norm. We compare channel smoothing with three other robust smoothing techniques: nonlinear diffusion, bilateral filtering, and mean-shift filtering, both theoretically and on a 2D orientation-data smoothing task. Channel smoothing is found to be superior in four respects: It has a lower computational complexity, it is easy to implement, it chooses the global minimum error instead of the nearest local minimum, and it can also be used on nonlinear spaces, such as orientation space.
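The encode/average/decode pipeline can be illustrated with the quadratic B-spline kernel the abstract names. The unit grid spacing and decoding by normalized first moment below are a simplification of the paper's linear decoding, shown here only because centered quadratic B-splines sum to one and reproduce linear functions exactly.

```python
import numpy as np

def b2(t):
    """Quadratic B-spline kernel (support |t| < 1.5)."""
    t = np.abs(t)
    return np.where(t < 0.5, 0.75 - t ** 2,
           np.where(t < 1.5, 0.5 * (1.5 - t) ** 2, 0.0))

def encode(x, n_channels):
    """Channel vector of a scalar x on the integer grid 0..n_channels-1."""
    return b2(x - np.arange(n_channels))

def decode(channels):
    """Normalized first moment; exact for a single value away from the borders."""
    k = np.arange(len(channels))
    return (k * channels).sum() / channels.sum()

c = encode(4.3, 10)      # three nonzero channels, around index 4
x = decode(c)            # recovers 4.3
```

Robust smoothing then amounts to averaging such channel vectors over a neighborhood before decoding, so that outlier values contribute to distant channels and drop out of the local decoding.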
Fast Population Coding
 Neural Computation
, 2007
"... Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical ..."
Abstract

Cited by 19 (4 self)
Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the capabilities of populations of neurons to implement computations in the face of uncertainty. However, one major facet of uncertainty has received comparatively little attention: time. In a dynamic, rapidly changing world, data are only temporarily relevant. Here, we analyze the computational consequences of encoding stimulus trajectories in populations of neurons. For the most obvious, simple, instantaneous encoder, the correlations induced by natural, smooth stimuli engender a decoder that requires access to information that is nonlocal both in time and across neurons. This formally amounts to a ruinous representation. We show that there is an alternative encoder that is computationally and representationally powerful in which each spike contributes independent information; it is independently decodable, in other words. We suggest this as an appropriate foundation for understanding time-varying population codes. Furthermore, we show how adaptation to
Low and Medium Level Vision Using Channel Representations
 Linköping University, Sweden
, 2004
"... “Don’t confuse the moon with the finger that points at it.” (Zen proverb) This thesis introduces and explores a new type of representation for low and medium level vision operations called channel representation. The channel representation is a more general way to represent information than e.g. ..."
Abstract

Cited by 18 (4 self)
“Don’t confuse the moon with the finger that points at it.” (Zen proverb) This thesis introduces and explores a new type of representation for low and medium level vision operations called channel representation. The channel representation is a more general way to represent information than e.g. as numerical values, since it allows incorporation of uncertainty, and simultaneous representation of several hypotheses. More importantly, it also allows the representation of “no information” when no statement can be given. A channel representation of a scalar value is a vector of channel values, which are generated by passing the original scalar value through a set of kernel functions. The resultant representation is sparse and monopolar. The word sparse signifies that information is not necessarily
Hierarchical Bayesian inference in networks of spiking neurons
 Advances in Neural Information Processing Systems 17
, 2005
"... There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this p ..."
Abstract

Cited by 18 (0 self)
There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4.
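The key mechanism here, belief propagation carried out in the log domain, can be sketched as a filtering step on log probabilities, with log-sum-exp standing in for the summation that the membrane dynamics are proposed to approximate. The two-state model is again a toy, not the paper's network.

```python
import numpy as np

def logsumexp(a, axis=None):
    """Numerically stable log of a sum of exponentials."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def log_belief_update(log_belief, log_T, log_like):
    """One filtering step carried out entirely in log probabilities."""
    pred = logsumexp(log_belief[:, None] + log_T, axis=0)  # log sum_s b(s) T(s, s')
    post = log_like + pred
    return post - logsumexp(post)                          # normalize in log space

log_b = np.log(np.array([0.5, 0.5]))           # uniform belief over two states
log_T = np.log(np.array([[0.9, 0.1],
                         [0.1, 0.9]]))         # log transition matrix
log_like = np.log(np.array([0.8, 0.2]))        # evidence for state 0
log_b = log_belief_update(log_b, log_T, log_like)
```

In the paper's reading, a quantity like `log_b` would be carried by membrane potentials, with `exp(log_b)` reflected in spiking probabilities.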