Results 1–6 of 6
Lower Bounds for the Computational Power of Networks of Spiking Neurons
Neural Computation, 1995
Cited by 68 (16 self)
We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase differences between spike trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machine with real-valued inputs. We also show that relatively weak basic assumptions about the response and threshold functions of the spiking neurons are sufficient in order to employ them for such computations.
On the computational complexity of networks of spiking neurons
Advances in Neural Information Processing Systems, 1995
Cited by 24 (10 self)
We investigate the computational power of a formal model for networks of spiking neurons. It is shown that simple operations on phase differences between spike trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machine with real-valued inputs. We also show that relatively weak basic assumptions about the response and threshold functions of the spiking neurons are sufficient in order to employ them for such computations. Furthermore, we prove upper bounds for the computational power of networks of spiking neurons with arbitrary piecewise linear response and threshold functions, and show that with regard to real-time simulations they are computationally equivalent to a certain type of random access machine, and to recurrent analog neural nets with piecewise linear activation functions. In addition, we give corresponding results for networks of spiking neurons with limited timing precision, and we prove upper and lower bounds for the VC dimension and pseudo-dimension of networks of spiking neurons.
On the Relevance of the Shape of Postsynaptic Potentials for the Computational Power of Spiking Neurons
Proc. of the International Conference on Artificial Neural Networks (ICANN), 1995
Cited by 7 (6 self)
The firing of a neuron in a biological neural system causes in certain other neurons excitatory postsynaptic potential changes (EPSPs) that are not "rectangular", but have the form of a smooth hill. We prove in this article, for a formal model of a network of spiking neurons, that the rising and declining segments of these EPSPs are in fact essential for the computational power of the model.

1 Introduction

Apparently all computations in biological neural systems are realized through sequences of firings of neurons as a result of incoming postsynaptic potentials; see e.g. (Kandel et al., 1991). Each firing of a neuron in a biological neural system causes excitatory or inhibitory postsynaptic potentials (EPSPs and IPSPs, respectively) in those other neurons to which it is connected by synapses. A neuron fires if the sum of its incoming postsynaptic potentials becomes larger than its current threshold (which depends on the time of its last previous firing). Recently one has also ...
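The firing rule summarized in this abstract (a neuron fires once the sum of its incoming postsynaptic potentials exceeds its current threshold) can be illustrated with a minimal simulation. The alpha-shaped EPSP, the weights, the threshold value, and the refractory gap below are illustrative assumptions of this sketch, not parameters taken from the paper, which works with piecewise linear response functions.

```python
import math

def epsp(t, tau=5.0):
    """Smooth 'hill'-shaped EPSP (alpha function): rises, peaks at t=tau,
    then decays. Illustrative shape only."""
    if t < 0:
        return 0.0
    return (t / tau) * math.exp(1 - t / tau)

def simulate(spike_times, weights, threshold=0.8, dt=0.1, t_end=30.0):
    """Return the times at which the postsynaptic neuron fires.

    spike_times: list of lists; spike_times[i] holds firing times of input i.
    weights: synaptic weight of each input (negative weights model IPSPs).
    """
    fires = []
    last_fire = -math.inf
    t = 0.0
    while t <= t_end:
        # Sum the incoming postsynaptic potentials at time t.
        u = sum(w * epsp(t - s)
                for w, times in zip(weights, spike_times)
                for s in times)
        # Fire when the potential exceeds the threshold, with a simple
        # refractory gap standing in for a time-dependent threshold.
        if u > threshold and t - last_fire > 2.0:
            fires.append(round(t, 1))
            last_fire = t
        t += dt
    return fires
```

For example, a single input spike at time 0 with weight 1.0 drives the potential above the threshold while the EPSP is still on its rising segment, whereas a weight of 0.5 never reaches it; the exact crossing time depends on where the rising segment intersects the threshold, which is the kind of phase information the paper's constructions exploit.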
Analog computations on networks of spiking neurons (Extended Abstract)
1996
Cited by 7 (5 self)
We characterize the class of functions with real-valued input and output which can be computed by networks of spiking neurons with piecewise linear response and threshold functions and unlimited timing precision. We show that this class coincides with the class of functions computable by recurrent analog neural nets with piecewise linear activation functions, and with the class of functions computable on a certain type of random access machine (NRAM) which we introduce in this article. This result is proven via constructive real-time simulations. Hence it provides in particular a convenient method for constructing networks of spiking neurons that compute a given real-valued function f: it now suffices to write a program for computing f on an NRAM; that program can be "automatically" transformed into an equivalent network of spiking neurons (by our simulation result). Finally, one learns from the results of this paper that certain very simple piecewise linear response and threshold functions for spiking neurons are universal, in the sense that neurons with these particular response and threshold functions can simulate networks of spiking neurons with arbitrary piecewise linear response and threshold functions. The results of this paper also show that certain very simple piecewise linear activation functions are in a corresponding sense universal for recurrent analog neural nets.
The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials
1996
Cited by 4 (0 self)
Recently one has started to investigate the computational power of spiking neurons (also called "integrate-and-fire neurons"). These are neuron models that are substantially more realistic from the biological point of view than those traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular, they have the ability to communicate and manipulate analog variables in spatiotemporal coding, i.e. encoded in the time points when specific neurons "fire" (and thus send a "spike" to other neurons). These preceding results have motivated the question of which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are "accidental" aspects of their realization in biological "wetware". Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatiotemporal c...
[Title garbled in source]
1995
Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry, and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems.

1 Introduction

The main theme of this paper is to present efficient algorithms that solve the problem of computing the maximum bichromatic discrepancy for axis-oriented rectangles. This problem arises naturally in different areas of computer science, such as computational learning theory, computational geometry, and computer graphics ([Ma], [DG]), and has applications in all these areas. In computational learning theory, the problem of agnostic PAC-learning with simple geometric hypotheses can be reduced to the problem of computing the maximum bichromatic discrepancy for simple geometric ra...
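The quantity in question, taking the maximum bichromatic discrepancy of a rectangle R to be the absolute difference between the number of red and blue points inside R, can be computed by exhaustive search for illustration. The sketch below (function name and point representation are this sketch's own choices, not the paper's) enumerates the candidate rectangles spanned by input coordinates; the paper's algorithms are far more efficient than this brute force.

```python
def max_bichromatic_discrepancy(red, blue):
    """Maximum over axis-aligned rectangles R of |#red in R - #blue in R|.

    Brute force: an optimal rectangle may be assumed to have its sides
    through input coordinates, giving O(n^4) candidates with O(n)
    counting each (O(n^5) total). Purely illustrative.
    """
    pts = red + blue
    xs = sorted({p[0] for p in pts})
    ys = sorted({p[1] for p in pts})

    def count(points, x1, x2, y1, y2):
        # Number of points falling inside the closed rectangle.
        return sum(x1 <= x <= x2 and y1 <= y <= y2 for x, y in points)

    best = 0
    for x1 in xs:
        for x2 in (x for x in xs if x >= x1):
            for y1 in ys:
                for y2 in (y for y in ys if y >= y1):
                    d = abs(count(red, x1, x2, y1, y2)
                            - count(blue, x1, x2, y1, y2))
                    best = max(best, d)
    return best
```

For instance, with red points (0,0), (1,0), (0,1) and a single blue point (2,2), the rectangle [0,1]x[0,1] captures all three red points and no blue point, so the maximum discrepancy is 3.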