Results 1 – 8 of 8
Networks of Spiking Neurons: The Third Generation of Neural Network Models
 Neural Networks
, 1997
Abstract

Cited by 138 (12 self)
The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e., threshold gates) and sigmoidal gates, respectively. In particular, it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology. 1 Definitions and Motivations If one classifies neural network models according to their computational units, one can distinguish three different generations. The first generation i...
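The generational distinction the abstract draws can be made concrete with a minimal sketch of the first two unit types; the function names and parameter values below are illustrative, not taken from the paper:

```python
import math

def threshold_gate(inputs, weights, theta):
    """First-generation (McCulloch-Pitts) unit: binary threshold output."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= theta else 0

def sigmoidal_gate(inputs, weights, bias):
    """Second-generation unit: smooth real-valued output in (0, 1)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# A threshold gate computing AND of two binary inputs:
print(threshold_gate([1, 1], [1, 1], theta=2))  # -> 1
print(threshold_gate([1, 0], [1, 1], theta=2))  # -> 0
```

Third-generation (spiking) units replace these static input-output maps with output *timing*, which is what the comparison in the paper is about.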
Fast Sigmoidal Networks via Spiking Neurons
 Neural Computation
, 1997
Abstract

Cited by 52 (8 self)
We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence we can show that networks of noisy spiking neurons are "universal approximators" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organiza...
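The temporal-coding idea (an earlier spike encodes a larger analog value) can be sketched as a simple encode/decode round trip; the time window and scaling constant below are illustrative assumptions, not values from the paper:

```python
T_MAX = 10.0  # length of the coding time window (ms); illustrative value
SCALE = 10.0  # ms of timing advance per unit of analog value; illustrative value

def encode(x):
    """Encode an analog value x in [0, 1] as a firing time: larger x fires earlier."""
    return T_MAX - SCALE * x

def decode(t):
    """Recover the analog value from the observed firing time."""
    return (T_MAX - t) / SCALE

# Round trip: a single spike time carries the analog value exactly.
for x in (0.0, 0.25, 1.0):
    assert abs(decode(encode(x)) - x) < 1e-12
print(encode(1.0), encode(0.0))  # -> 0.0 10.0 (the largest value fires first)
```

The speed advantage claimed in the abstract comes from this scheme needing only one spike per value, rather than a firing-rate estimate averaged over many spikes.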
On computation with pulses
 Information and Computation
, 1999
Abstract

Cited by 14 (0 self)
We explore the computational power of formal models for computation with pulses. Such models are motivated by realistic models for biological neurons, and by related new types of VLSI ("pulse stream VLSI"). In preceding work it was shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a computational unit have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand, several implementations of pulse stream VLSI employ pulses that are approximately piecewise constant (i.e. step functions). In this article we investigate the relevance of the shape of pulses in formal models for computation with pulses. It turns out that the computational power drops significantly if one replaces pulses with linearly rising or decreasing initial segments by piecewise constant pulses. We provide an exact characterization of the latter model in terms of a weak version of a random access machine (RAM). We also compare the language recognition capability of a recurrent version of this model with that of deterministic finite automata and Turing machines.
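Why pulse shape matters can be illustrated with a toy sketch: with linearly rising/falling pulses, the summed potential sampled at a fixed time varies continuously with the timing offset between two pulses, whereas with piecewise-constant pulses it takes only finitely many values. The pulse shapes and sample points below are illustrative, not the paper's formal model:

```python
def linear_pulse(t):
    """Pulse with linearly rising and falling segments (triangular, length 2)."""
    if 0 <= t < 1:
        return t
    if 1 <= t < 2:
        return 2 - t
    return 0.0

def step_pulse(t):
    """Piecewise-constant pulse of the same length."""
    return 1.0 if 0 <= t < 2 else 0.0

def summed_potential(pulse, delay, at=1.0):
    """Sum of two identical pulses, one delayed, sampled at time `at`."""
    return pulse(at) + pulse(at - delay)

# Linear pulses: the sampled sum varies continuously with the delay ...
vals_linear = {round(summed_potential(linear_pulse, d), 3) for d in (0.0, 0.25, 0.5, 0.75)}
# ... step pulses: the sampled sum is insensitive to (small) delays.
vals_step = {round(summed_potential(step_pulse, d), 3) for d in (0.0, 0.25, 0.5, 0.75)}
print(len(vals_linear), len(vals_step))  # -> 4 1
```

The graded sum in the linear case is what lets a unit compute with continuous timing differences; the step case discards exactly that information, matching the drop in power the abstract reports.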
An Efficient Implementation of Sigmoidal Neural Nets in Temporal Coding with Noisy Spiking Neurons
, 1995
Abstract

Cited by 11 (4 self)
We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence we can show that networks of noisy spiking neurons are "universal approximators" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organiza...
Analog CMOS Velocity Sensors
 In Electronic Imagin'97
, 1997
Abstract

Cited by 5 (0 self)
A family of analog CMOS velocity sensors is described which measures the velocity of a moving edge by computing its time of travel between adjacent pixels. These sensors are compact, largely invariant to illumination over a wide range, sensitive to edges with very low contrast, and responsive to several orders of magnitude of velocity. Two successful one-dimensional velocity sensors are described in detail; preliminary data on a new two-dimensional velocity sensor is shown. Work in progress to extend these sensors to processing of two-dimensional optical flow is outlined. Keywords: Analog VLSI, CMOS, velocity sensors, motion sensors, optical flow 1. INTRODUCTION Motion is a key component of the visual scene; it is used very effectively by biological organisms from insects up to primates. Motion can be an essential clue to segmenting an object from the background and is very important in determining self-motion. There are clear uses for motion information in robotics, automotive navig...
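The time-of-travel principle reduces to v = d / Δt for two adjacent pixels; a sketch with an illustrative pixel spacing (not a value from the paper):

```python
def edge_velocity(t1, t2, pixel_spacing_um=20.0):
    """Velocity of a moving edge from its detection times at two adjacent pixels.

    t1, t2: times (s) at which the edge crosses pixel 1 and pixel 2.
    pixel_spacing_um: distance between pixel centers; illustrative value.
    Returns signed velocity in um/s (the sign encodes direction of travel).
    """
    dt = t2 - t1
    if dt == 0:
        raise ValueError("edge detected simultaneously; velocity unresolved")
    return pixel_spacing_um / dt

# An edge covering 20 um in 2 ms moves at 10,000 um/s (1 cm/s):
print(edge_velocity(0.000, 0.002))  # -> 10000.0
```

Note the reciprocal relationship: slow edges give long, easily measured intervals, which is one reason such sensors can span several orders of magnitude of velocity.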
The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials
, 1996
Abstract

Cited by 3 (0 self)
Recently one has started to investigate the computational power of spiking neurons (also called "integrate and fire neurons"). These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular they have the ability to communicate and manipulate analog variables in spatiotemporal coding, i.e. encoded in the time points when specific neurons "fire" (and thus send a "spike" to other neurons). These preceding results have motivated the question of which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are "accidental" aspects of their realization in biological "wetware". Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatiotemporal c...
Computing the Maximum Bichromatic Discrepancy, with Applications to Computer Graphics and Machine Learning
, 1995
"... Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, ..."
Abstract
Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems. 1 Introduction The main theme of this paper is to present efficient algorithms that solve the problem of computing the maximum bichromatic discrepancy for axis-oriented rectangles. This problem arises naturally in different areas of computer science, such as computational learning theory, computational geometry and computer graphics ([Ma], [DG]), and has applications in all these areas. In computational learning theory, the problem of agnostic PAC-learning with simple geometric hypotheses can be reduced to the problem of computing the maximum bichromatic discrepancy for simple geometric ra...
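For orientation, the quantity being optimized can be stated as a brute-force sketch: maximize |#red − #blue| over axis-aligned rectangles whose corner coordinates come from the points themselves. This O(n^5) enumeration is only a reference implementation; the paper's contribution is much faster algorithms:

```python
def inside(p, x1, x2, y1, y2):
    """Point-in-rectangle test for the closed rectangle [x1,x2] x [y1,y2]."""
    return x1 <= p[0] <= x2 and y1 <= p[1] <= y2

def max_discrepancy(red, blue):
    """Max bichromatic discrepancy over axis-aligned rectangles (brute force).

    red, blue: lists of (x, y) points. Only rectangles with corner
    coordinates drawn from the input points need to be considered.
    """
    pts = red + blue
    xs = sorted({p[0] for p in pts})
    ys = sorted({p[1] for p in pts})
    best = 0
    for i, x1 in enumerate(xs):
        for x2 in xs[i:]:
            for j, y1 in enumerate(ys):
                for y2 in ys[j:]:
                    r = sum(inside(p, x1, x2, y1, y2) for p in red)
                    b = sum(inside(p, x1, x2, y1, y2) for p in blue)
                    best = max(best, abs(r - b))
    return best

red = [(0, 0), (2, 0), (1, 1)]
blue = [(0, 2), (2, 2)]
print(max_discrepancy(red, blue))  # -> 3 (rectangle [0,2] x [0,1] holds 3 red, 0 blue)
```

Restricting candidate corners to input coordinates is sound because sliding a rectangle edge between points never changes which points it contains.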
Analogue Architectures for Vision: Cellular Neural Networks and Neuromorphic Circuits
, 1999
"... Vision machines based on actual computational methods require the development of simple lowlevel feature detectors. The lowlevel feature detectors measure local image properties as scale, orientation, and velocity. Analog VLSI devices that mimic some functionality of biological systems appear to b ..."
Abstract
Vision machines based on current computational methods require the development of simple low-level feature detectors. These low-level feature detectors measure local image properties such as scale, orientation, and velocity. Analog VLSI devices that mimic some functionality of biological systems appear to be robust, low-power, and fast enough to solve vision problems in real time.