Results 11–20 of 92
Neural coding and decoding: communication channels and quantization
 Network: Computation in Neural Systems
, 2001
Abstract

Cited by 36 (8 self)
We present a novel analytical approach for studying neural encoding. As a
first step we model a neural sensory system as a communication channel.
Using the method of typical sequences in this context, we show that a
coding scheme is an almost bijective relation between equivalence classes of
stimulus/response pairs. The analysis allows a quantitative determination of the
type of information encoded in neural activity patterns and, at the same time,
identification of the code with which that information is represented. Due to the
high dimensionality of the sets involved, such a relation is extremely difficult
to quantify. To circumvent this problem, and to use whatever limited data set is
available most efficiently, we use another technique from information theory—
quantization. We quantize the neural responses to a reproduction set of small
finite size. Among many possible quantizations, we choose one which preserves
as much of the informativeness of the original stimulus/response relation as
possible, through the use of an information-based distortion function. This
method allows us to study coarse but highly informative approximations of a
coding scheme model, and then to refine them automatically when more data
become available.
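The information-preserving quantization this abstract describes can be illustrated with a toy computation. The sketch below is not the authors' algorithm: it exhaustively searches every assignment of four discrete response patterns to two reproduction classes and keeps the one preserving the most mutual information with the stimulus. The joint distribution `pxy` is invented for illustration; exhaustive search is feasible only at toy scale.

```python
import itertools
import numpy as np

def mutual_info(pxy):
    """Mutual information (bits) of a joint distribution given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

def best_quantization(pxy, k):
    """Search all assignments of response columns to k reproduction classes,
    keeping the one that preserves the most information about the stimulus."""
    best, best_i = None, -1.0
    for labels in itertools.product(range(k), repeat=pxy.shape[1]):
        q = np.zeros((pxy.shape[0], k))
        for col, lab in enumerate(labels):
            q[:, lab] += pxy[:, col]   # merge responses assigned to one class
        i = mutual_info(q)
        if i > best_i:
            best, best_i = labels, i
    return best, best_i

# Toy joint p(stimulus, response): 2 stimuli, 4 response patterns.
pxy = np.array([[0.30, 0.10, 0.05, 0.05],
                [0.05, 0.05, 0.10, 0.30]])
labels, info = best_quantization(pxy, 2)
```

By the data processing inequality, `info` can never exceed `mutual_info(pxy)`; the best 2-class quantization here groups the stimulus-0-favoring responses together.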
Synergy, Redundancy, and Independence in Population Codes
 The Journal of Neuroscience
, 2003
Abstract

Cited by 29 (0 self)
A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3) information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we distinguish questions about how information is encoded by a population of neurons from how that information can be decoded. Although information theory is natural and powerful for questions of encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of the deviations of estimated stimuli from actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as for encoding.
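The three measures the abstract distinguishes can be computed directly for discrete distributions. The sketch below is my own toy code, not the paper's, evaluated on a XOR-like example in which each cell alone carries no stimulus information yet the pair carries one full bit, so the synergy equals 1 bit.

```python
import numpy as np

def mi(p):
    """Mutual information (bits) between the two axes of a 2-D joint array."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px * py)[nz])).sum())

def population_measures(p):
    """p[s, r1, r2]: joint distribution over a stimulus and two responses.
    Returns I(R1;R2), I(R1;R2|S), and synergy = I((R1,R2);S) - I(R1;S) - I(R2;S)."""
    ps = p.sum(axis=(1, 2))
    i_between = mi(p.sum(axis=0))                      # activity independence
    i_cond = sum(ps[s] * mi(p[s] / ps[s])              # conditional independence
                 for s in range(p.shape[0]) if ps[s] > 0)
    synergy = (mi(p.reshape(p.shape[0], -1))           # information independence
               - mi(p.sum(axis=2)) - mi(p.sum(axis=1)))
    return i_between, i_cond, synergy

# XOR example: s = r1 XOR r2, with independent fair-coin responses.
p = np.zeros((2, 2, 2))
for r1 in (0, 1):
    for r2 in (0, 1):
        p[r1 ^ r2, r1, r2] = 0.25
i_between, i_cond, synergy = population_measures(p)
```

This example also shows why at least two measures are needed: the cells are marginally independent (`i_between` = 0) yet strongly correlated given the stimulus (`i_cond` = 1 bit).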
Decoding Neuronal Firing And Modeling Neural Networks
 Quart. Rev. Biophys
, 1994
Abstract

Cited by 25 (4 self)
Introduction Biological neural networks are large systems of complex elements interacting through a complex array of connections. Individual neurons express a large number of active conductances (Connors et al., 1982; Adams & Gavin, 1986; Llinás, 1988; McCormick, 1990; Hille, 1992) and exhibit a wide variety of dynamic behaviors on time scales ranging from milliseconds to many minutes (Llinás, 1988; Harris-Warrick & Marder, 1991; Churchland & Sejnowski, 1992; Turrigiano et al., 1994). Neurons in cortical circuits are typically coupled to thousands of other neurons (Stevens, 1989) and very little is known about the strengths of these synapses (although see Rosenmund et al., 1993; Hessler et al., 1993; Smetters & Nelson, 1993). The complex firing patterns of large neuronal populations are difficult to describe, let alone understand. There is little point in accurately modeling each membrane potential in a large neural ...
Multi-Dimensional Encoding Strategy of Spiking Neurons
 Neural Computation
, 2000
Abstract

Cited by 21 (5 self)
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of (i) narrow tuning in the dimension to be encoded to increase the single-neuron Fisher information, and (ii) broad tuning in all other dimensions to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated which yield a criterion for the function of a neural population based on the measured tuning curves.
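The trade-off this abstract describes, where narrower tuning raises single-neuron Fisher information while excessively narrow tuning destroys receptive-field overlap, can be reproduced with a one-dimensional toy population. The Gaussian tuning curves, Poisson noise model, and all parameter values below are my own illustrative assumptions, not the paper's model.

```python
import numpy as np

def population_fisher(x, centers, sigma, r_max=10.0):
    """Fisher information about x from independent Poisson neurons with
    Gaussian tuning f_i(x) = r_max * exp(-(x - c_i)^2 / (2 sigma^2)).
    For Poisson spiking, J_i(x) = f_i'(x)^2 / f_i(x)  (unit time window)."""
    d = centers - x
    return float((r_max * d ** 2 / sigma ** 4
                  * np.exp(-d ** 2 / (2 * sigma ** 2))).sum())

centers = np.linspace(-10.0, 10.0, 21)  # preferred stimuli, spacing 1

# Very broad tuning spreads precision too thinly ...
j_broad = population_fisher(0.0, centers, sigma=4.0)
j_medium = population_fisher(0.0, centers, sigma=0.5)
# ... while extremely narrow tuning fails between receptive fields.
j_narrow_gap = population_fisher(0.5, centers, sigma=0.1)
j_medium_gap = population_fisher(0.5, centers, sigma=1.0)
```

Narrowing the tuning from sigma = 4 to 0.5 greatly increases the information at a covered stimulus, but at sigma = 0.1 the information collapses midway between preferred stimuli, illustrating the optimal intermediate width.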
Visual Motion Analysis for Pursuit Eye Movements in Area MT of Macaque Monkeys
 Journal of Neuroscience
, 1999
Abstract

Cited by 19 (2 self)
this paper have been published previously (Movshon et al., 1990; Lisberger and Movshon, 1991, 1994; Lisberger et al., 1995)
The Use of a Bayesian Neural Network Model for Classification Tasks
, 1997
Abstract

Cited by 19 (1 self)
This thesis deals with a Bayesian neural network model. The focus is on how to use the model for automatic classification, i.e. on how to train the neural network to classify objects from some domain, given a database of labeled examples from the domain. The original Bayesian neural network is a one-layer network implementing a naive Bayesian classifier. It is based on the assumption that different attributes of the objects appear independently of each other. This work has been aimed at extending the original Bayesian neural network model, mainly focusing on three different aspects. First the model is extended to a multilayer network, to relax the independence requirement. This is done by introducing a hidden layer of complex columns, groups of units which take input from the same set of input attributes. Two different types of complex column structures in the hidden layer are studied and compared. An information theoretic measure is used to decide which input attributes to consider toget...
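The one-layer starting point of the thesis, a naive Bayesian classifier whose "weights" are log-probabilities summed by a linear unit, can be sketched in a few lines. The toy dataset and smoothing constant below are illustrative assumptions; the complex-column extension is not shown.

```python
import numpy as np

def train_naive_bayes(X, y, n_values, n_classes, alpha=1.0):
    """One-layer naive-Bayes 'network': weights[a][v, c] = log P(attr a = v | c),
    with Laplace smoothing alpha. Assumes integer-coded attributes."""
    log_prior = (np.log(np.bincount(y, minlength=n_classes) + alpha)
                 - np.log(len(y) + alpha * n_classes))
    weights = []
    for a in range(X.shape[1]):
        counts = np.full((n_values, n_classes), alpha)
        for v, c in zip(X[:, a], y):
            counts[v, c] += 1.0
        weights.append(np.log(counts / counts.sum(axis=0, keepdims=True)))
    return log_prior, weights

def classify(x, log_prior, weights):
    """Summing log-weights over the active inputs is the one-layer network's
    linear activation; argmax implements the Bayes decision."""
    act = log_prior.copy()
    for a, v in enumerate(x):
        act = act + weights[a][v]
    return int(np.argmax(act))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # toy labeled examples
y = np.array([0, 0, 1, 1])                      # class equals attribute 0
log_prior, weights = train_naive_bayes(X, y, n_values=2, n_classes=2)
```

Because the model multiplies per-attribute likelihoods, it cannot represent interactions between attributes, which is exactly the limitation the hidden layer of complex columns is introduced to relax.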
Traveling waves and the processing of weakly tuned inputs in a cortical network module
 J. Comput. Neurosci
, 1997
Abstract

Cited by 19 (1 self)
Abstract. Recent studies have shown that local cortical feedback can have an important effect on the response of neurons in primary visual cortex to the orientation of visual stimuli. In this work, we study the role of the cortical feedback in shaping the spatiotemporal patterns of activity in cortex. Two questions are addressed: one, what are the limitations on the ability of cortical neurons to lock their activity to rotating oriented stimuli within a single receptive field? Two, can the local architecture of visual cortex lead to the generation of spontaneous traveling pulses of activity? We study these issues analytically by a population-dynamic model of a hypercolumn in visual cortex. The order parameter that describes the macroscopic behavior of the network is the time-dependent population vector of the network. We first study the network dynamics under the influence of a weakly tuned input that slowly rotates within the receptive field. We show that if the cortical interactions have strong spatial modulation, the network generates a sharply tuned activity profile that propagates across the hypercolumn in a path that is completely locked to the stimulus rotation. The resultant rotating population vector maintains a constant angular lag relative to the stimulus, the magnitude of which grows with the stimulus rotation frequency. Beyond a critical frequency the population vector does not lock to the stimulus but executes a quasiperiodic motion with an average frequency that is smaller than that of the stimulus. In the second part we consider the stable intrinsic state of the cortex under the influence of isotropic stimulation. We show that if the local inhibitory feedback is sufficiently strong, the network does not settle into a ...
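The population vector used as the order parameter above can be computed directly from an activity profile on the ring. The rectified-cosine profile and neuron count below are illustrative stand-ins for the model's actual tuned activity, not the paper's equations.

```python
import numpy as np

def population_vector(pref_angles, rates):
    """Rate-weighted sum of unit vectors at the neurons' preferred angles;
    returns the vector's magnitude and direction."""
    z = (rates * np.exp(1j * pref_angles)).sum()
    return float(np.abs(z)), float(np.angle(z))

n = 64
pref_angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
stim_angle = 1.2
# Tuned activity profile centered on the stimulus (illustrative choice):
rates = np.maximum(np.cos(pref_angles - stim_angle), 0.0)
mag, decoded_angle = population_vector(pref_angles, rates)
```

With a symmetric profile the decoded direction matches the stimulus angle; in the paper's rotating-stimulus regime the same readout would trail the stimulus by the angular lag described above.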
Population Coding with Correlation and an Unfaithful Model
 Neural Computation
, 2001
Abstract

Cited by 17 (1 self)
The present study investigates a population decoding paradigm in which the maximum likelihood inference is based on an unfaithful decoding model (UMLI). This is usually the case for neural population decoding because the encoding process of the brain is not exactly known, or because a simplified decoding model is preferred for saving computational cost. We consider an unfaithful decoding model which neglects the pairwise correlation between neuronal activities, and prove that UMLI is asymptotically efficient when the neuronal correlation is uniform or of limited range. The performance of UMLI is compared with that of the maximum likelihood inference based on a faithful model and that of the center of mass decoding method. It turns out that UMLI has the advantages of remarkably decreasing the computational complexity while maintaining a high level of decoding accuracy at the same time. Moreover, UMLI can be implemented by a biologically feasible recurrent network (Pouget et al., ...
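A minimal version of the unfaithful-decoding idea: generate responses carrying a correlated (shared) fluctuation, then decode by maximum likelihood under a model that ignores the correlation. The Gaussian tuning curves, the independent-Gaussian likelihood, and all numbers below are my own illustrative choices, not the paper's encoding model.

```python
import numpy as np

def tuning(stim, centers, sigma=1.0, r_max=20.0):
    """Assumed Gaussian tuning curves for a 1-D stimulus."""
    return r_max * np.exp(-(stim - centers) ** 2 / (2 * sigma ** 2))

def umli_decode(r, centers, grid):
    """Maximum-likelihood estimate under an 'unfaithful' model that treats
    the neurons as independent Gaussian, ignoring their correlation.
    The log-likelihood then reduces to a squared-error fit over the grid."""
    errors = [((r - tuning(s, centers)) ** 2).sum() for s in grid]
    return float(grid[int(np.argmin(errors))])

centers = np.linspace(-5.0, 5.0, 21)
true_s = 0.7
# A perfectly correlated common fluctuation added to every neuron's response:
r = tuning(true_s, centers) + 0.5
s_hat = umli_decode(r, centers, np.linspace(-5.0, 5.0, 201))
```

Even though the shared fluctuation violates the independence assumption, the location estimate is essentially unaffected, which is the flavor of the robustness result the paper proves.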
Cognitive Navigation Based on Nonuniform Gabor Space Sampling, Unsupervised Growing Networks, and Reinforcement Learning
, 2004
Abstract

Cited by 15 (3 self)
We study spatial learning and navigation for autonomous agents. A state space representation is constructed by unsupervised Hebbian learning during exploration. As a result of learning, a representation of the continuous two-dimensional (2D) manifold in the high-dimensional input space is found. The representation consists of a population of localized overlapping place fields covering the 2D space densely and uniformly. This space coding is comparable to the representation provided by hippocampal place cells in rats. Place fields are learned by extracting spatiotemporal properties of the environment from sensory inputs. The visual scene is modeled using the responses of modified Gabor filters placed at the nodes of a sparse log-polar graph. Visual sensory aliasing is eliminated by taking into account self-motion signals via path integration. This solves the hidden state problem and provides a suitable representation for applying reinforcement learning in continuous space for action selection. A temporal-difference prediction scheme is used to learn sensorimotor mappings to perform goal-oriented navigation. Population vector coding is employed to interpret ensemble neural activity. The model is validated on a mobile Khepera miniature robot.
Neural representation of probabilistic information
 Neural Computation
, 2003
Abstract

Cited by 14 (0 self)
It has been proposed that populations of neurons process information in terms of probability density functions (PDFs) of analog variables. Such analog variables range, for example, from target luminance and depth on the sensory interface to eye position and joint angles on the motor output side. The requirement that analog variables must be processed leads inevitably to a probabilistic description, while the limited precision and lifetime of the neuronal processing units leads naturally to a population representation of information. We show how a time-dependent probability density ρ(x; t) over variable x, residing in a specified function space of dimension D, may be decoded from the neuronal activities in a population as a linear combination of certain decoding functions φi(x), with coefficients given by the N firing rates ai(t) (generally with D << N). We show how the neuronal encoding process may be described by projecting a set of complementary encoding functions φ̂i(x) on the probability density ρ(x; t), and passing the result through a rectifying nonlinear activation function. We show how both encoders φ̂i(x) and decoders φi(x) may be determined by minimizing cost functions that quantify the inaccuracy of the representation. Expressing a given computation in terms of manipulation and transformation of probabilities, we show how this representation leads to a neural circuit that can carry out the required computation within a consistent Bayesian framework, with the synaptic weights being explicitly generated in terms of encoders, decoders, conditional probabilities, and priors.
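The linear decoding step, ρ(x; t) ≈ Σi ai(t) φi(x), can be illustrated numerically. The Gaussian decoding functions and the least-squares choice of coefficients below are illustrative assumptions; the paper's rectifying encoding nonlinearity and cost-function optimization of the basis are omitted.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]

# Assumed decoding functions phi_i(x): Gaussian bumps on a grid of centers.
centers = np.linspace(0.0, 1.0, 11)
phi = np.exp(-(x[None, :] - centers[:, None]) ** 2 / (2 * 0.05 ** 2))

# Target probability density rho(x) to be represented by the population.
rho = np.exp(-(x - 0.3) ** 2 / (2 * 0.1 ** 2))
rho /= rho.sum() * dx  # normalize to unit mass on the grid

# 'Firing rates' a_i chosen by least squares so that a @ phi approximates rho.
a, *_ = np.linalg.lstsq(phi.T, rho, rcond=None)
rho_hat = a @ phi
err = np.abs(rho - rho_hat).sum() * dx  # integrated absolute error
```

With 11 basis functions (D much smaller than the grid size), the reconstruction is already close, which is the point of representing a density by a modest number of rate coefficients.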