Results 1–10 of 27
An experimental unification of reservoir computing methods
, 2007
Cited by 38 (7 self)
Abstract: Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results that compare all three implementations, and draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure of reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure depends on the dynamics of the reservoir in response to the inputs, and in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all the types of Reservoir Computing and allows the easy simulation of a wide range of reservoir topologies on a number of benchmarks.
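An input-dependent Lyapunov measure of the kind described in this abstract can be estimated numerically by following two nearby trajectories of the driven reservoir and measuring the average perturbation growth per step (the Benettin renormalization method). The sketch below is illustrative only, assuming a standard tanh reservoir; it is not the Reservoir Computing Toolbox's actual code:

```python
import numpy as np

def largest_lyapunov(W, w_in, u_seq, eps=1e-8, warmup=100):
    """Estimate the largest Lyapunov exponent of a driven tanh reservoir
    by tracking the growth of a small perturbation, renormalized each step."""
    n = W.shape[0]
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    d0 = rng.standard_normal(n)
    x_p = x + eps * d0 / np.linalg.norm(d0)  # perturbed twin trajectory
    logs = []
    for t, u in enumerate(u_seq):
        x = np.tanh(W @ x + w_in * u)
        x_p = np.tanh(W @ x_p + w_in * u)
        d = np.linalg.norm(x_p - x)
        if d == 0:
            d = eps  # perturbation collapsed below precision; avoid log(0)
        if t >= warmup:
            logs.append(np.log(d / eps))  # per-step growth of the perturbation
        # renormalize the perturbation back to size eps
        x_p = x + (eps / d) * (x_p - x)
    return float(np.mean(logs))
```

A negative estimate indicates contractive (echo-state-like) dynamics; values near zero correspond to the "edge of stability" regime often associated with good reservoir performance.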
An overview of reservoir computing: theory, applications and implementations
 Proceedings of the 15th European Symposium on Artificial Neural Networks
, 2007
Cited by 18 (4 self)
Abstract: Training recurrent neural networks is hard. Recently, however, it has been discovered that it is possible to simply construct a random recurrent topology and only train a single linear readout layer. State-of-the-art performance can easily be achieved with this setup, called Reservoir Computing. The idea can even be broadened by stating that any high-dimensional, driven dynamic system, operated in the correct dynamic regime, can be used as a temporal ‘kernel’ which makes it possible to solve complex tasks using just linear post-processing techniques. This tutorial gives an overview of current research on the theory, applications and implementations of Reservoir Computing.
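The recipe summarized in this abstract — a fixed random recurrent network plus a single trained linear readout — fits in a few lines of code. The sketch below is a minimal echo state network on a toy delayed-recall task, assuming a tanh reservoir and a ridge-regression readout; the sizes and scalings are illustrative, not any paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(42)
n_res = 200

# Fixed random reservoir, rescaled to spectral radius 0.9 (echo state property)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n_res) * 0.5  # random, untrained input weights

def run_reservoir(u_seq):
    """Drive the fixed random reservoir and collect its states."""
    x = np.zeros(n_res)
    states = np.empty((len(u_seq), n_res))
    for t, u in enumerate(u_seq):
        x = np.tanh(W @ x + w_in * u)
        states[t] = x
    return states

# Toy task: reproduce the input delayed by 3 steps (short-term memory)
u = rng.uniform(-1, 1, 1200)
delay, washout = 3, 100
states = run_reservoir(u)
X, y = states[washout:], u[washout - delay: -delay]

# Only the linear readout is trained, here with ridge regression
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

nrmse = np.sqrt(np.mean((X @ w_out - y) ** 2)) / np.std(y)
```

The reservoir itself is never touched by training; all task-specific learning is the single linear solve for `w_out`, which is what makes the approach so cheap compared to backpropagation through time.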
Generative modeling of autonomous robots and their environments using reservoir computing
 Neural Processing Letters
, 2007
Cited by 8 (3 self)
Abstract: Autonomous mobile robots form an important research topic in the field of robotics due to their near-term applicability in the real world as domestic service robots. These robots must be designed in an efficient way using training sequences. They need to be aware of their position in the environment and also need to create models of it for deliberative planning. These tasks have to be performed using a limited number of sensors with low accuracy, as well as with a restricted amount of computational power. In this contribution we show that the recently emerged paradigm of Reservoir Computing (RC) is very well suited to solving all of the above-mentioned problems, namely learning by example, robot localization, and map and path generation. Reservoir Computing is a technique which enables a system to learn any time-invariant filter of the input by training a simple linear regressor that acts on the states of a high-dimensional but random dynamic system excited by the inputs. In addition, RC is a simple technique featuring ease of training and low computational and memory demands.
Keywords: reservoir computing, generative modeling, map learning, T-maze task, road sign problem, path generation
Reservoir Computing with Stochastic Bitstream Neurons
 In Proceedings of the 16th Annual ProRISC Workshop
, 2005
Cited by 7 (5 self)
Abstract: Reservoir Computing (RC) [6], [5], [9] is a computational framework with powerful properties and several interesting advantages compared to conventional techniques for pattern recognition. It consists essentially of two parts: a recurrently connected network of simple interacting nodes (the reservoir), and a readout function that observes the reservoir and computes the actual output of the system. The choice of the nodes that form the reservoir is very broad: spiking neurons [6], threshold logic gates [7] and sigmoidal neurons [5], [9] have been used. For this article, we will use analogue neurons to build an RC system on a Field Programmable Gate Array (FPGA), which is a chip that can be reconfigured. A traditional neuron calculates a weighted sum of its inputs, which is then fed through a nonlinearity (such as a threshold or sigmoid function). This is not hardware-efficient due to the extensive use of multiplications. In [2], a type of neuron is introduced that communicates using stochastic bitstreams instead of fixed-point values. This drastically simplifies the hardware implementation of arithmetic operations such as addition, multiplication and the nonlinearity. We have built an implementation of RC on FPGA using these stochastic neurons.
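The hardware simplification mentioned in this abstract comes from stochastic arithmetic: a value p in [0, 1] is encoded as a random bitstream whose bits are 1 with probability p, so multiplying two encoded values reduces to a single bitwise AND of two independent streams (a single AND gate in hardware). A minimal software sketch of the encoding, illustrative only and not the paper's FPGA design:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(p, n_bits=100_000):
    """Encode a probability p in [0, 1] as a stochastic bitstream:
    each bit is independently 1 with probability p."""
    return rng.random(n_bits) < p

def decode(stream):
    """Recover the encoded value as the fraction of 1-bits."""
    return stream.mean()

a, b = encode(0.6), encode(0.5)
# Multiplication of the encoded values is just a bitwise AND of the streams,
# since P(bit_a AND bit_b) = P(bit_a) * P(bit_b) for independent streams
product = decode(a & b)  # ≈ 0.6 * 0.5 = 0.3
```

The trade-off is precision versus stream length: the decoding error shrinks only as the inverse square root of the number of bits, which is why such designs favor long streams or many parallel ones.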
Improving reservoirs using Intrinsic Plasticity
, 2007
Cited by 7 (1 self)
Abstract: The benefits of using Intrinsic Plasticity (IP), an unsupervised, local, biologically inspired adaptation rule that tunes the probability density of a neuron’s output towards an exponential distribution – thereby realizing an information maximization – have already been demonstrated. In this work, we extend the ideas of this adaptation method to a more commonly used nonlinearity and a Gaussian output distribution. After deriving the learning rules, we show the effects of the bounded output of the transfer function on the moments of the actual output distribution. This allows us to show that the rule converges to the expected distributions, even in random recurrent networks. The IP rule is evaluated in a Reservoir Computing setting, a temporal processing technique which uses random, untrained recurrent networks as excitable media, where the network’s state is fed to a linear regressor that computes the desired output. We present an experimental comparison of the different IP rules on three benchmark tasks with different characteristics. Furthermore, we show that this unsupervised reservoir adaptation is able to turn networks with very constrained topologies, such as a 1D lattice – which generally shows quite unsuitable dynamic behavior – into a reservoir that can be used to solve complex tasks. We clearly demonstrate that IP is able to make Reservoir Computing more robust: the internal dynamics can autonomously tune themselves – irrespective of initial weights or input scaling – to the dynamic regime that is optimal for a given task.
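The flavor of such an adaptation rule can be illustrated with a deliberately simplified moment-matching proxy: each tanh neuron gets a trainable gain `a` and bias `b`, updated online so that the empirical mean and variance of its output drift towards Gaussian targets. This is NOT the paper's derived KL-gradient rule — only a hypothetical stand-in that shows the same local, unsupervised, output-distribution-shaping behavior:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 0.0, 0.04   # target output mean and variance
eta = 0.001              # adaptation rate
a, b = 1.0, 0.0          # per-neuron gain and bias, adapted online

ys = []
for x in rng.standard_normal(50_000) * 2.0:  # strongly scaled net input
    y = np.tanh(a * x + b)
    # Simplified moment-matching proxy (not the derived IP gradient rule):
    b -= eta * (y - mu)                      # pull the output mean toward mu
    a -= eta * a * ((y - mu) ** 2 - sigma2)  # pull the output variance toward sigma2
    ys.append(y)

tail = np.array(ys[-10_000:])  # output distribution after adaptation
```

Even with a badly scaled input (standard deviation 2, which initially saturates the tanh), the gain shrinks until the output statistics match the targets — the same self-tuning effect, irrespective of input scaling, that the abstract reports for the actual IP rule.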
The unified Reservoir Computing concept and its digital hardware implementations
 In Proceedings of the 2006 EPFL LATSIS Symposium
, 2006
Cited by 4 (0 self)
Abstract: … that computes many random nonlinear combinations of the current and past inputs, with a finite memory. The memoryless readout function is then able to linearly combine this information to compute the actual output. The three methods cited above all take a different approach towards the idea of using an RNN as reservoir. The LSM offers different types of reservoirs, built from very simple nodes like threshold logic gates up to complex and biologically realistic Leaky Integrate & Fire neurons. Echo State Networks are built from sigmoidal neurons and are more directed towards practical applicability. BPDC originates from a very different approach: a mathematically derived training rule for RNNs was found to also lead to the concept of a reservoir, because when applying the rule it appears that only the connections of the output neurons are adjusted. The combination of our research with results from other groups seems to indicate that the …
Corresponding author: david.verstraeten@ugent.be
Experiments with reservoir computing on the road sign problem
 In: Brazilian Congress on Neural Networks (CBRN), 2007
Cited by 3 (3 self)
Abstract: The road sign problem is tackled in this work with Reservoir Computing (RC) networks. These networks are made of a fixed recurrent neural network where only a readout layer is trained. In the road sign problem, an agent has to decide at some point in time which action to take given relevant information gathered in the past. We show that RC can handle simple and complex T-maze tasks (which are a subdomain of the road sign problem).
Keywords: Reservoir Computing, road sign problem, T-maze, long-term memory
Computing with Spiking Neuron Networks
Cited by 2 (0 self)
Abstract: Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions between neurons, taking into account the time of spike firing. SNNs exceed the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic, event-driven processing, they open up new horizons for developing models with an exponential capacity for memorizing and a strong ability for fast adaptation. Today, the main challenge is to discover efficient learning rules that might take advantage of the specific features of SNNs while keeping the nice properties (general-purpose, easy-to-use, available simulators, etc.) of traditional connectionist models. This chapter relates the history of the “spiking neuron” in Section 1 and summarizes the most currently used models of neurons and synaptic plasticity in Section 2. The computational power of SNNs is addressed in Section 3, and the problem of learning in networks of spiking neurons is tackled in Section 4, with insights into the tracks currently explored for solving it. Finally, Section 5 discusses application domains and implementation issues, and proposes several simulation frameworks.
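As an illustration of the class of neuron models such a chapter covers, here is a minimal leaky integrate-and-fire (LIF) neuron in discrete time — a textbook sketch with illustrative parameters, not the chapter's own formulation:

```python
import numpy as np

def simulate_lif(i_input, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates the input current, and emits a spike (then resets) at threshold."""
    v = v_rest
    spikes = []
    for i_t in i_input:
        v += dt / tau * (v_rest - v) + dt * i_t  # leak + input integration
        if v >= v_thresh:
            spikes.append(1)   # spike: the information is in the timing
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold current produces regular spiking
spikes = simulate_lif(np.full(200, 0.1))
```

Unlike a sigmoidal unit, the output here is a sequence of discrete spike events, and it is the timing of those events — not a continuous activation value — that carries the information, which is exactly the modeling choice the abstract highlights.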
Benchmarking Reservoir Computing on Time-Independent Classification Tasks
Cited by 1 (1 self)
Abstract: This paper presents an extensive evaluation of reservoir computing for the case of classification problems that do not depend on time. We discuss how it is possible to adapt the reservoir approach to learning for the case of static classification problems. Then we present a set of experiments against KPLS, MLP with an entropic cost function, and LS-SVM, showing that this approach is quite competitive and has the advantage of having only one parameter to be chosen.