Results 1 – 10 of 48
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Abstract

Cited by 25 (8 self)
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to ...
Twenty Six Research Topics About Spiking Neural P Systems
In [19], 2006
Abstract

Cited by 16 (1 self)
To continue the tradition of the previous brainstorming weeks on membrane computing, I am collecting here a series of open problems and research topics, not about membrane computing in general, but about one of the directions of research which was intensively investigated in the last year: spiking neural P systems. In general, one mentions issues which look of a broader nature, but also some precise problems are formulated. As usual with such lists of problems, the selection is subjective, by no means exhaustive. Of course, choosing only problems related to spiking neural P systems does not mean that there are no longer enough problems waiting to be solved in the general framework of membrane computing – on the contrary (e.g., separate lists can refer to computational complexity issues, to dynamical systems approaches, etc.), but such problems tend to become rather specialized and technical at the present stage of the development of membrane computing. Instead, the membrane computing models with a neural inspiration are at the beginning of a systematic exploration, and, as claimed below, this area of research looks very promising.
Spiking Neural Networks, an Introduction
, 2003
Abstract

Cited by 14 (5 self)
This paper gives an introduction to spiking neural networks, some biological background, and will present two models of spiking neurons that employ pulse coding. Networks of spiking neurons are more powerful than their non-spiking predecessors as they can encode temporal information in their signals, but therefore also need different and biologically more plausible rules for synaptic plasticity.
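The pulse-coding idea this abstract refers to can be illustrated with a minimal leaky integrate-and-fire neuron. The sketch below is a generic textbook formulation with illustrative constants; it is not one of the two models the paper itself presents.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a common spiking
# model. All constants (threshold, time constant) are illustrative.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0,
                 tau=10.0, dt=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward
        # rest and is driven by the input current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(t)
            v = v_rest             # reset after the spike
    return spikes

# A constant supra-threshold input produces a train of precisely
# timed spikes, i.e. information can be carried by spike times.
print(simulate_lif([0.2] * 50))
```

With zero input the neuron stays silent, so the output spike times (not just a rate) carry the temporal information the abstract alludes to.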
An Overview Of The Computational Power Of Recurrent Neural Networks
Proceedings of the 9th Finnish AI Conference STeP 2000 – Millennium of AI, Espoo, Finland (Vol. 3: "AI of Tomorrow": Symposium on Theory), Finnish AI Society
, 2000
Abstract

Cited by 12 (3 self)
INTRODUCTION The two main streams of neural networks research consider neural networks either as a powerful family of nonlinear statistical models, to be used in for example pattern recognition applications [6], or as formal models to help develop a computational understanding of the brain [10]. Historically, the brain theory interest was primary [32], but with the advances in computer technology, the application potential of the statistical modeling techniques has shifted the balance. The study of neural networks as general computational devices does not strictly follow this division of interests: rather, it provides a general framework outlining the limitations and possibilities affecting both research domains. The prime historic example here is obviously Minsky's and Papert's 1969 study of the computational limitations of single-layer perceptrons [34], which was a major influence in turning interest away from neural network learning to symbolic AI techniques for more
F.: Comparison of supervised learning methods for spike time coding in spiking neural networks
 and Computer Science, University of
, 2006
Abstract

Cited by 11 (2 self)
In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods? In order to answer this question, we discuss various approaches to the learning task considered. We briefly describe the particular learning algorithms and report the results of experiments. Finally, we discuss the properties, assumptions and limitations of each method. We complete this review with a comprehensive list of pointers to the literature.
Hebbian Spike-Timing Dependent Self-Organization in Pulsed Neural Networks
 In Proceedings of World Congress on Neuroinformatics
, 2001
Abstract

Cited by 10 (4 self)
We present a mechanism of unsupervised competitive learning and development of topology preserving self-organizing maps of spiking neurons. The information encoding is based on the precise timing of single spike events. The work provides a competitive learning algorithm that is based on the relative timing of the pre- and postsynaptic spikes, local synapse competitions within a single neuron and global competition via lateral connections. Furthermore, we present part of the experimental work on the capability of the suggested mechanism to perform topology preserving mapping and competitive learning. The results show that our model covers the main characteristic behaviour of the standard SOM but uses a computationally more powerful timing-dependent spike encoding.
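The relative-timing rule this abstract describes belongs to the family of pair-based spike-timing-dependent plasticity (STDP) updates. The sketch below is a generic such rule with illustrative time constants and learning rates; it is not the authors' exact algorithm.

```python
import math

# Pair-based STDP: the weight grows when the presynaptic spike
# precedes the postsynaptic one (causal pairing) and shrinks when
# the order is reversed. All constants are illustrative.

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt / tau_plus)
    else:         # post before pre: depression
        return -a_minus * math.exp(dt / tau_minus)

# Causal pairing strengthens the synapse; anti-causal weakens it.
print(stdp_dw(t_pre=10.0, t_post=15.0))
print(stdp_dw(t_pre=15.0, t_post=10.0))
```

The exponential windows make the update largest for nearly coincident spikes, which is what drives the local synapse competition the abstract mentions.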
Spiking Neural P Systems. Recent Results, Research Topics
"... Summary. After a quick introduction of spiking neural P systems (a class of P systems inspired from the way neurons communicate by means of spikes, electrical impulses of identical shape), and presentation of typical results (in general equivalence with Turing machines as number computing devices, b ..."
Abstract

Cited by 7 (0 self)
Summary. After a quick introduction of spiking neural P systems (a class of P systems inspired from the way neurons communicate by means of spikes, electrical impulses of identical shape), and presentation of typical results (in general, equivalence with Turing machines as number computing devices, but also other issues, such as the possibility of handling strings or infinite sequences), we present a long list of open problems and research topics in this area, also mentioning recent attempts to address some of them. The bibliography completes the information offered to the reader interested in this research area.
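The firing behaviour of a spiking neural P system neuron, which holds a number of spikes and applies a rule when its contents match the rule's condition, can be illustrated with a toy interpreter. The rules and encoding below are invented for illustration and are not taken from the survey.

```python
# Toy spiking neural P system neuron. A rule is encoded as a tuple
# (required, consumed, emitted): it fires when the neuron holds
# exactly `required` spikes, removes `consumed` of them, and emits
# `emitted` spikes to the neuron's neighbours.

def step(spikes, rules):
    """Apply the first enabled rule; return (remaining, emitted)."""
    for required, consumed, emitted in rules:
        if spikes == required:
            return spikes - consumed, emitted
    return spikes, 0  # no rule enabled: the neuron stays idle

# Neuron with rules  a^2/a -> a  (keep one spike, emit one)
# and  a -> a  (consume the last spike, emit one).
rules = [(2, 1, 1), (1, 1, 1)]
spikes, out = step(2, rules)   # fires a^2/a -> a
print(spikes, out)             # -> 1 1
spikes, out = step(spikes, rules)
print(spikes, out)             # -> 0 1
```

Real SN P systems also allow regular-expression firing conditions and delays; this sketch only shows the spike-counting core of the model.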
Complexity of Learning for Networks of Spiking Neurons with Nonlinear Synaptic Interactions
Proc. of the Tenth Annual Conference on Computational Learning Theory, ACM
, 2001
Abstract

Cited by 6 (2 self)
We study model networks of spiking neurons where synaptic inputs interact in terms of nonlinear functions. These nonlinearities are used to represent the spatial grouping of synapses on the dendrites and to model the computations performed at local branches. We analyze the complexity of learning in these networks in terms of the VC dimension and the pseudo dimension. Polynomial upper bounds on these dimensions are derived for various types of synaptic nonlinearities.
Poutré. SpikeProp: backpropagation for networks of spiking neurons
 Proc. ESANN’2000
, 2000
Abstract

Cited by 5 (2 self)
Abstract. For a network of spiking neurons with reasonable postsynaptic potentials, we derive a supervised learning rule akin to traditional error-backpropagation, SpikeProp, and show how to overcome the discontinuities introduced by thresholding. Using this learning algorithm, we demonstrate how networks of spiking neurons with biologically plausible time-constants can perform complex nonlinear classification in fast temporal coding just as well as rate-coded networks. When comparing the (implicit) number of neurons required for the respective encodings, it is empirically demonstrated that temporal coding potentially requires significantly fewer neurons.
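The key move in overcoming the thresholding discontinuity, linearizing the membrane potential around the firing time so that the spike time becomes differentiable with respect to the weights, can be sketched as follows. The spike-response kernel and all constants are illustrative assumptions, not the paper's exact setup.

```python
import math

def eps(s, tau=7.0):
    """Spike-response kernel: PSP evoked s ms after an input spike."""
    return (s / tau) * math.exp(1 - s / tau) if s > 0 else 0.0

def d_eps(s, h=1e-6):
    """Numerical time-derivative of the kernel."""
    return (eps(s + h) - eps(s - h)) / (2 * h)

def spike_time_gradient(t_spike, input_times, weights):
    """d t_spike / d w_i for each input, under the SpikeProp-style
    linearization: dt/dw = -eps(t_spike - t_i) / (du/dt at t_spike)."""
    du_dt = sum(w * d_eps(t_spike - t_i)
                for w, t_i in zip(weights, input_times))
    return [-eps(t_spike - t_i) / du_dt for t_i in input_times]

# One input spike at t = 0, one weight; the neuron fires at t = 3
# (on the rising flank of the PSP). Increasing the weight makes the
# threshold crossing happen earlier, so the gradient is negative.
g = spike_time_gradient(3.0, [0.0], [1.0])
print(g)
```

Gradient descent on the target spike times then proceeds exactly as in classical backpropagation, using these spike-time derivatives in place of the usual activation derivatives.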