A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, 1989
Abstract

Cited by 438 (4 self)
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms are shown to allow networks having recurrent connections to learn complex tasks requiring the retention of information over time periods having either fixed or indefinite length.

1 Introduction

A major problem in connectionist theory is to develop learning algorithms that can tap the full computational power of neural networks. Much progress has been made with feedforward networks, and attention has recently turned to developing algorithms for networks with recurrent connections, wh...
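The algorithm described in this abstract is what is now known as real-time recurrent learning (RTRL). Its "nonlocal and computationally expensive" character comes from carrying a full sensitivity tensor ∂y_k/∂w_ij forward in time, which costs O(n⁴) per step for n units. The following NumPy sketch illustrates one exact online gradient step under illustrative assumptions (tanh units, squared error on all units, a tiny network); the variable names and sizes are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2                                # fully recurrent units, external inputs
lr = 0.1
W = rng.normal(0.0, 0.5, (n, n + m + 1))   # weights over [y; x; bias]
P = np.zeros((n, n, n + m + 1))            # sensitivities P[k,i,j] = dy_k / dW[i,j]
y = np.zeros(n)

def step(x, target, W, P, y):
    """One RTRL update: forward step plus the exact gradient of the
    instantaneous squared error, computed online (O(n^4) per step)."""
    z = np.concatenate([y, x, [1.0]])      # concatenated state, input, bias
    s = W @ z
    y_new = np.tanh(s)
    fprime = 1.0 - y_new ** 2
    # Sensitivity recursion:
    # P'[k,i,j] = f'(s_k) * ( sum_l W[k,l] P[l,i,j] + delta_{ki} z_j )
    P_new = np.einsum('kl,lij->kij', W[:, :n], P)
    P_new[np.arange(n), np.arange(n), :] += z
    P_new *= fprime[:, None, None]
    e = target - y_new                      # teacher error on each unit
    grad = np.einsum('k,kij->ij', e, P_new) # exact dE/dW direction (ascent on -E)
    return W + lr * grad, P_new, y_new
```

Because the sensitivities are carried along with the network state, the update can be applied at every time step while the network runs, with no fixed training interval, exactly the trade-off the abstract describes.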
Putnamizing the Liquid State
NACAP 2009 (extended abstract)
Abstract
Echo state networks, liquid state machines, context reverberation networks – these all perform what has come to be known as “reservoir computing.” This approach to neural computation has been getting much attention lately. I wish to show that not only is it of scientific and technological interest, but of philosophical interest as well. It is not so much that reservoir computing raises new philosophical problems, but that it casts a quarter-century-old debate about how physical systems implement computations (arising from arguments made by Putnam and Searle) in a vivid new context. I set the stage by doing a quick summary of reservoir computing. I then turn to Hilary Putnam’s “Theorem” in an appendix to his Representation and Reality, and follow it as it is recast in the subsequent work of Chalmers, Scheutz and Joslin. I then consider the dynamical systems used in reservoir computing as a kind of prototype for the dynamics referred to by these philosophers. This leads to the general notion of a dynamical system interpreting another dynamical system. In reservoir computing we see the high-dimensional dynamics of physical systems (even literally buckets of water) harnessed to create context for inputs to a connectionist network. This construction proceeds in a way analogous to Putnam’s construction in the proof of his theorem. Perhaps a rock cannot simulate any finite state automaton, but, in some sense, a “reservoir” can!
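The “quick summary of reservoir computing” the abstract promises fits in a few lines of code: a large, fixed, random recurrent network (the reservoir) is driven by the input to create high-dimensional context, and only a linear readout is trained, typically by ridge regression. The NumPy sketch below shows this on a toy delay-recall task; the reservoir size, scaling constants, and task are illustrative choices, not taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                     # reservoir size
Win = rng.uniform(-0.5, 0.5, (N, 1))        # fixed random input weights
W = rng.normal(0.0, 1.0, (N, N))            # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect its states."""
    x = np.zeros(N)
    X = np.empty((len(u), N))
    for t, ut in enumerate(u):
        x = np.tanh(Win[:, 0] * ut + W @ x)
        X[t] = x
    return X

# Toy task: reproduce the input delayed by one step.
T = 1000
u = rng.uniform(-1.0, 1.0, T)
y = np.roll(u, 1)
y[0] = 0.0
X = run_reservoir(u)
washout = 100                               # discard the initial transient
A, b = X[washout:], y[washout:]
ridge = 1e-6
# Only this linear readout is trained (ridge regression); the reservoir stays fixed.
Wout = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ b)
pred = X @ Wout
mse = np.mean((pred[washout:] - y[washout:]) ** 2)
```

The reservoir itself is never trained, which is why, in principle, any sufficiently rich dynamical system (even a bucket of water) can stand in for it; only the readout interprets its state.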