Results 1–10 of 17
Networks of Spiking Neurons: The Third Generation of Neural Network Models
, 1996
"... The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powe ..."
Abstract

Cited by 191 (14 self)
The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e. threshold gates) and on sigmoidal gates, respectively. In particular, it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete, biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units in a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and to relevant results from neurobiology.
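One function of this kind that a single spiking neuron handles naturally is coincidence detection: membrane potential leaks away between inputs, so only near-simultaneous spikes sum above threshold. A minimal leaky integrate-and-fire sketch (not the paper's construction; the function name and all constants are illustrative):

```python
import math

def lif_coincidence(spike_times, tau=5.0, w=0.6, theta=1.0, dt=0.1, t_max=50.0):
    """Leaky integrate-and-fire neuron driven by input spikes.

    Each input spike adds weight w to the membrane potential v, which
    decays with time constant tau. Returns the first firing time, or
    None if the threshold theta is never reached.
    """
    v, t, i = 0.0, 0.0, 0
    spikes = sorted(spike_times)
    while t <= t_max:
        v *= math.exp(-dt / tau)                 # leaky decay
        while i < len(spikes) and spikes[i] <= t:
            v += w                               # postsynaptic potential jump
            i += 1
        if v >= theta:
            return t                             # output spike
        t += dt
    return None

# Coincident inputs cross threshold; widely separated ones do not.
print(lif_coincidence([10.0, 10.5]))   # fires shortly after the second spike
print(lif_coincidence([10.0, 40.0]))  # None: the first EPSP decays away
```

A single unit thus computes a nonlinearity of the input timing pattern; emulating the same behaviour with rate-coded sigmoidal units requires a whole subnetwork.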
Impact of Correlated Inputs on the Output of the Integrate-and-Fire Model
, 1999
"... For the integrateandfire model with or without reversal potentials, we consider how correlated inputs affect the variability of cellular output. For both models the variability of efferent spike trains measured by coefficient of variation of the interspike interval (abbreviated to CV in the remain ..."
Abstract

Cited by 35 (10 self)
For the integrate-and-fire model with or without reversal potentials, we consider how correlated inputs affect the variability of cellular output. For both models the variability of efferent spike trains, measured by the coefficient of variation of the interspike interval (abbreviated to CV in the remainder of the paper), is a nondecreasing function of input correlation. When the correlation coefficient is greater than 0.09, the CV of the integrate-and-fire model without reversal potentials is always above 0.5, no matter how strong the inhibitory inputs. When the correlation coefficient is greater than 0.05, the CV of the integrate-and-fire model with reversal potentials is always above 0.5, independent of the strength of the inhibitory inputs. Under a given condition on correlation coefficients we find that correlated Poisson processes can be decomposed into independent Poisson processes. We also develop a novel method to estimate the distribution density of the first passage time of the integ...
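The dependence of output variability on input correlation can be explored with a toy simulation. The sketch below builds correlated Poisson trains from a shared "mother" process (one reading of the decomposition mentioned in the abstract), feeds them to a perfect integrator that fires after a fixed number of input spikes, and measures the CV of the output intervals; every detail here is illustrative rather than the paper's model:

```python
import random
import statistics

def correlated_poisson(n_trains, rate, corr, t_max, rng):
    """Correlated Poisson spike trains built from a shared 'mother' train.

    Each train is the union of a private Poisson train of rate
    (1 - corr) * rate and one shared Poisson train of rate corr * rate,
    giving a pairwise spike-count correlation of roughly corr.
    """
    def poisson(r):
        out, t = [], 0.0
        while r > 0:
            t += rng.expovariate(r)
            if t > t_max:
                break
            out.append(t)
        return out

    shared = poisson(corr * rate)
    return [sorted(shared + poisson((1 - corr) * rate)) for _ in range(n_trains)]

def cv_of_if_output(trains, theta=20):
    """Perfect integrate-and-fire readout: fire on every theta-th input
    spike; return the CV of the output interspike intervals."""
    merged = sorted(t for train in trains for t in train)
    fire_times = merged[theta - 1::theta]
    isis = [b - a for a, b in zip(fire_times, fire_times[1:])]
    return statistics.stdev(isis) / statistics.mean(isis)

rng = random.Random(0)
# CV rises well above the independent-input baseline of ~1/sqrt(theta).
print(cv_of_if_output(correlated_poisson(20, 10.0, 0.3, 200.0, rng)))
```

With `corr = 0`, output intervals are Gamma-distributed and the CV sits near 1/√theta; shared spikes arrive simultaneously in all trains and make the output markedly more irregular, in line with the monotone dependence the abstract describes.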
Self-Organization of Spiking Neurons Using Action Potential Timing
, 1998
"... We propose a mechanism for unsupervised learning in networks of spiking neurons which is based on the timing of single firing events. Our results show that a topology preserving behaviour quite similar to that of Kohonen's selforganizing map can be achieved using temporal coding. In contrast t ..."
Abstract

Cited by 18 (0 self)
We propose a mechanism for unsupervised learning in networks of spiking neurons which is based on the timing of single firing events. Our results show that a topology-preserving behaviour quite similar to that of Kohonen's self-organizing map can be achieved using temporal coding. In contrast to previous approaches, which use rate coding, the winner among competing neurons can be determined quickly and locally. Our model is a further step towards a more realistic description of unsupervised learning in biological neural systems. Furthermore, it may provide a basis for fast implementations in pulsed VLSI. Keywords: self-organizing map, spiking neurons, temporal coding, unsupervised learning. I. Introduction In the area of modelling information processing in biological neural systems, there is an ongoing debate about which essentials have to be taken into account (see e.g. [1], [2], [3], [4]). Discrete models, such as threshold gates or McCulloch-Pitts neurons, are undoubtedly very simplis...
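The claim that the winner can be determined quickly and locally rests on first-spike timing: if each unit's firing latency grows with the mismatch between its weight vector and the input, the best-matching unit simply fires first, and no global comparison of activations is needed. A toy sketch (the distance-to-latency mapping and all names are assumptions, not the paper's model):

```python
def first_spike_winner(weights, x):
    """Temporal-coding winner-take-all sketch.

    Each unit's firing time is taken to grow with the Euclidean distance
    between its weight vector and the input, so the best-matching unit
    fires earliest. Returns (winner index, its firing time).
    """
    fire_times = [
        sum((wi - xi) ** 2 for wi, xi in zip(w, x)) ** 0.5  # distance -> latency
        for w in weights
    ]
    winner = min(range(len(weights)), key=fire_times.__getitem__)
    return winner, fire_times[winner]

weights = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.9]]
print(first_spike_winner(weights, [0.9, 1.0]))  # unit 1 fires first
```

In hardware or in a biological circuit, "earliest spike wins" can be detected by lateral inhibition from the first spike, which is what makes the scheme local.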
On computation with pulses
 Information and Computation
, 1999
"... We explore the computational power of formal models for computation with pulses. Such models are motivated by realistic models for biological neurons, and by related new types of VLSI (\pulse stream VLSI"). In preceding work it was shown that the computational power of formal models for com ..."
Abstract

Cited by 16 (1 self)
We explore the computational power of formal models for computation with pulses. Such models are motivated by realistic models for biological neurons, and by related new types of VLSI ("pulse stream VLSI"). In preceding work it was shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a computational unit have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand, several implementations of pulse stream VLSI employ pulses that are approximately piecewise constant (i.e. step functions). In this article we investigate the relevance of the shape of pulses in formal models for computation with pulses. It turns out that the computational power drops significantly if one replaces pulses with linearly rising or decreasing initial segments by piecewise constant pulses. We provide an exact characterization of the latter model in terms of a weak version of a random access machine (RAM). We also compare the language recognition capability of a recurrent version of this model with that of deterministic finite automata and Turing machines.
An Efficient Implementation of Sigmoidal Neural Nets in Temporal Coding with Noisy Spiking Neurons
, 1995
"... We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing ..."
Abstract

Cited by 12 (5 self)
We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence we can show that networks of noisy spiking neurons are "universal approximators" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organiza...
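The basic coding idea, stripped of all neural machinery, is that an analog value is represented by how early a spike occurs relative to a reference time; a weighted sum can then be read off the firing times and the gate's output re-emitted as a spike time. A hedged sketch of that round trip (function names and constants are illustrative, not the paper's construction):

```python
import math

T_REF = 10.0  # reference spike time of a synchronized pool (illustrative)
C = 5.0       # coding scale: time advance per unit of analog value (illustrative)

def encode(x):
    """Temporal coding sketch: larger analog values fire earlier."""
    return T_REF - C * x

def decode(t_spike):
    return (T_REF - t_spike) / C

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def sigmoidal_gate_in_time(ws, spike_times, bias=0.0):
    """Read analog inputs off their firing times, apply the gate,
    and re-emit the result as a firing time of the output pool."""
    s = bias + sum(w * decode(t) for w, t in zip(ws, spike_times))
    return encode(sigmoid(s))

times = [encode(x) for x in [0.2, 0.7, 0.5]]
t_out = sigmoidal_gate_in_time([0.5, -0.3, 1.0], times)
print(t_out)  # a firing time between T_REF - C and T_REF
```

Because each layer's output is again a firing time, layers compose, and one synchronous volley per layer suffices, which is what makes the simulation fast compared with rate coding.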
Computing and Learning with Spiking Neurons  Theory and Simulations
, 1998
"... There is strong evidence that biological neurons encode through their firing information not only in the firing rate but also in the timing of single spikes. This thesis explores various ways for computing and learning with networks of simplified spiking neurons (essentially of the leaky integratea ..."
Abstract

Cited by 8 (0 self)
There is strong evidence that biological neurons encode information through their firing not only in the firing rate but also in the timing of single spikes. This thesis explores various ways of computing and learning with networks of simplified spiking neurons (essentially of the leaky integrate-and-fire type) using temporal coding. We present both supervised and unsupervised learning rules, which are Hebbian in the sense that the strength of a synapse is modified if a pre- and a postsynaptic spike arrive at the synapse within a certain learning window. Recent neurobiological findings have confirmed such a dependency and have shown that the sign and strength of the change depend on the timing of the two spikes. On the basis of these principles we show how methods originally designed for artificial neural networks, like competitive learning, self-organizing behavior and radial basis functions, can be realized within this context. We also address the question whether these results still...
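The learning-window dependency described here, with the sign and size of the change set by the relative timing of the two spikes, is commonly modelled as an exponential window. A sketch under that assumption (the constants are illustrative, not the thesis's values):

```python
import math

def stdp_dw(dt, a_plus=0.05, a_minus=0.04, tau=20.0):
    """Spike-timing-dependent weight change for one pre/post spike pair.

    dt = t_post - t_pre. A pre-before-post pairing (dt > 0) strengthens
    the synapse, post-before-pre weakens it, and the effect decays
    exponentially outside a window of width ~tau.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    return -a_minus * math.exp(dt / tau)      # depression

print(stdp_dw(5.0))    # positive: pre fired just before post
print(stdp_dw(-5.0))   # negative: post fired just before pre
```

Under such a window, synapses that consistently predict the postsynaptic spike grow, which is the mechanism the competitive and self-organizing schemes in the thesis build on.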
The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials
, 1996
"... Recently one has started to investigate the computational power of spiking neurons (also called "integrate and fire neurons"). These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neu ..."
Abstract

Cited by 4 (0 self)
Recently one has started to investigate the computational power of spiking neurons (also called "integrate-and-fire neurons"). These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular, they have the ability to communicate and manipulate analog variables in spatiotemporal coding, i.e. encoded in the time points when specific neurons "fire" (and thus send a "spike" to other neurons). These preceding results have motivated the question of which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are "accidental" aspects of their realization in biological "wetware". Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatiotemporal c...
Perceptive, nonlinear speech processing and spiking neural networks
 in G. Chollet et al. (Eds.) Nonlinear Speech Modeling, LNAI 3445
"... Abstract. Source separation and speech recognition are very difficult in the context of noisy and corrupted speech. Most conventional techniques need huge databases to estimate speech (or noise) density probabilities to perform separation or recognition. We discuss the potential of perceptive speech ..."
Abstract

Cited by 3 (1 self)
Abstract. Source separation and speech recognition are very difficult in the context of noisy and corrupted speech. Most conventional techniques need huge databases to estimate speech (or noise) probability densities to perform separation or recognition. We discuss the potential of perceptive speech analysis and processing in combination with biologically plausible neural network processors. We illustrate the potential of such nonlinear processing of speech on two applications. The first is a source separation system inspired by the Auditory Scene Analysis paradigm, and the second is a crude spoken digit recogniser. We present preliminary results and discuss them.
Self-Organization in Networks of Spiking Neurons
"... Traditionally artificial neural network design was based on average firing rate model of a biological neuron. In this paper we briefly review approaches based on single action potentials. Then we describe a selforganizing neural network based on a spiking neuron model. We show how this network of l ..."
Abstract

Cited by 1 (0 self)
Traditionally, artificial neural network design was based on the average firing rate model of a biological neuron. In this paper we briefly review approaches based on single action potentials. Then we describe a self-organizing neural network based on a spiking neuron model. We show how this network of laterally connected spiking neurons self-organizes into a topological map in response to external stimulation. Two unsupervised, activity-dependent, instantaneous learning rules for adjusting the synaptic efficacies are introduced and analyzed in the paper. The first rule is based on the postsynaptic contribution of the presynaptic neuron, while the second, called the temporal correlation rule, is based only on the time difference between the pre- and postsynaptic neuron spikes. Interestingly, applying either of the two learning rules results in a well-segregated map, where the afferent connections from non-active input neurons die off and the lateral connections gradually restrict themselves to one of the regions of the formed map. We also draw an analogy between our temporal correlation learning rule and Kohonen's SOM learning rule.
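The temporal correlation rule, as described, depends only on the pre/post spike-time difference. A toy reading of it, in which near-coincident spikes strengthen a synapse and distant ones let it decay (the functional form and constants are assumptions for illustration, not the paper's rule):

```python
import math

def temporal_correlation_update(w, dt, eta=0.1, tau=10.0):
    """Weight update driven only by the pre/post spike-time difference dt:
    near-coincident spikes strengthen the synapse, distant ones weaken it.
    """
    return w + eta * (math.exp(-abs(dt) / tau) - 0.5)

print(temporal_correlation_update(0.5, dt=1.0))   # near-coincident: weight grows
print(temporal_correlation_update(0.5, dt=50.0))  # distant: weight shrinks
```

Iterated over many stimuli, an update of this shape makes afferents from inactive inputs decay toward zero while co-active pathways consolidate, which is consistent with the segregated maps the abstract reports.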