Results 1–10 of 10
Dynamical Stability Conditions for Recurrent Neural Networks with Unsaturating Piecewise Linear Transfer Functions
 NEURAL COMPUTATION
, 2001
Cited by 26 (3 self)
Abstract
We establish two conditions which ensure the nondivergence of additive recurrent networks with unsaturating piecewise linear transfer functions, also called linear threshold or semilinear transfer functions. As was recently shown by Hahnloser et al. (2000), networks of this type can be efficiently built in silicon and exhibit the coexistence of digital selection and analogue amplification in a single circuit. To obtain this behaviour, the network must be multistable and nondivergent, and our conditions allow one to determine the regimes where this can be achieved with maximal recurrent amplification. The first condition can be applied to nonsymmetric networks and has a simple interpretation: the strength of local inhibition must match the sum over excitatory weights converging onto a neuron. The second condition is restricted to symmetric networks, but can also take into account the stabilizing effect of nonlocal inhibitory interactions. We demonstrate the application of the conditions on a simple example and on the orientation-selectivity model of Ben-Yishai et al. (1995). We show that the conditions can be used to identify in their model regions of maximal orientation-selective amplification and symmetry breaking.
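The first condition described in the abstract (local inhibition matching the summed excitatory weights onto each neuron) lends itself to a direct numerical check. The following is a minimal sketch of one reading of that condition; the weight matrix, inhibition values, and the function name `check_nondivergence` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def check_nondivergence(W, local_inhibition):
    """Illustrative check: for every neuron, local inhibition should at
    least match the summed excitatory (positive) recurrent weights
    converging onto it. Row i of W holds the weights onto neuron i."""
    excitatory_in = np.clip(W, 0.0, None).sum(axis=1)
    return bool(np.all(local_inhibition >= excitatory_in))

# Hypothetical 3-neuron network: weak excitation, stronger local inhibition
W = np.array([[0.0, 0.3, 0.2],
              [0.1, 0.0, 0.4],
              [0.2, 0.2, 0.0]])
local_inhibition = np.array([0.6, 0.6, 0.6])
print(check_nondivergence(W, local_inhibition))  # True: 0.6 covers each row sum
```

A check like this only screens candidate weight matrices; the paper's actual conditions are stated for the full linear-threshold dynamics and should be consulted for the precise inequalities.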
A tighter bound for the echo state property
 IEEE Trans. Neural Networks
, 2006
Cited by 17 (0 self)
Abstract
This letter gives a brief explanation of echo state networks and provides a rigorous bound for guaranteeing the asymptotic stability of these networks. The stability bounds presented here could aid in the design of echo state networks for control applications where stability is required.
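For context on what such a bound tightens: the classic sufficient condition for the echo state property bounds the largest singular value of the reservoir matrix below 1, while the spectral radius below 1 is only necessary, and tighter bounds live in the gap between the two. A minimal sketch computing both quantities (the reservoir matrix here is a randomly generated assumption, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(50, 50))  # hypothetical reservoir weights

spectral_radius = np.max(np.abs(np.linalg.eigvals(W)))  # < 1 is necessary
largest_sv = np.linalg.norm(W, 2)                       # < 1 is the classic sufficient bound

# rho(W) <= sigma_max(W) always; tighter sufficient conditions (e.g. taking
# the infimum of sigma_max(D W D^-1) over diagonal scalings D) narrow this gap.
print(spectral_radius <= largest_sv)  # True
```

The diagonal-scaling formulation mentioned in the comment is one common route to tighter bounds; see the cited letter for the bound it actually proves.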
Recurrent Learning Of Input-Output Stable Behaviour In Function Space: A Case Study With The Roessler Attractor
 In Proc. ICANN 99
, 1999
Cited by 8 (5 self)
Abstract
We analyse the stability of the input-output behaviour of a recurrent network. It is trained to implement an operator implicitly given by the chaotic dynamics of the Roessler attractor. Two of the attractor's coordinate functions are used as network input and the third defines the reference output. Using recently developed methods we show that the trained network is input-output stable and compute its input-output gain. Further, we define a stable region in weight space in which the weights can vary freely without affecting input-output stability. We show that this region is large enough to allow stability-preserving online adaptation, which enables the network to cope with parameter drift in the referenced attractor dynamics.

1 Introduction

In recent years there has been increasing interest in using neural networks in the fields of control and engineering. As far as feedforward networks are concerned, which can be incrementally adapted to implement static input-output maps, this int...
Stability of backpropagation-decorrelation efficient O(N) recurrent learning
 In Proceedings of ESANN’05
, 2005
Cited by 5 (0 self)
Abstract
Abstract. We provide a stability analysis based on nonlinear feedback theory for the recently introduced backpropagation-decorrelation (BPDC) recurrent learning algorithm. For one output neuron, BPDC adapts only the output weights of a possibly large network and can therefore learn in O(N). We derive a simple sufficient stability inequality which can easily be evaluated and monitored online to ensure that the recurrent network remains stable while adapting. As a by-product we show that BPDC is highly competitive on the recently introduced CATS benchmark data [1].
Maximisation of Stability Ranges for Recurrent Neural Networks Subject to On-Line Adaptation
, 1999
Cited by 3 (2 self)
Abstract
We present conditions for the absolute stability of recurrent neural networks with time-varying weights, based on the Popov theorem from nonlinear feedback system theory. We show how to maximise the stability bounds by deriving a convex optimisation problem subject to linear matrix inequality constraints, which can be solved efficiently by interior point methods with standard software.

1 Introduction

One of the most exciting properties of recurrent neural networks (RNN) is their ability to model the time behaviour of arbitrary dynamical systems [6]. With a number of schemes available which incrementally adapt a network using time-dependent error signals [13], recurrent networks can solve identification and adaptive control tasks in larger systems [8,14]. In such applications the proper functioning of the control system crucially depends on the dynamical behaviour of the network. Thus one of the most investigated issues in RNN theory is stability, especially the existence and uni...
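The Popov-based LMI conditions summarised above require a dedicated semidefinite solver to optimise over. As a much simpler numpy-only sketch of the underlying idea, the snippet below checks a fixed candidate Lyapunov certificate P against the basic stability LMI A^T P + P A < 0 for a linearised system; the matrices and the choice P = I are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Hypothetical linearised network matrix A = -I + W (illustrative weights)
W = np.array([[0.2, -0.5],
              [0.4,  0.1]])
A = -np.eye(2) + W

# Candidate Lyapunov certificate P = I: negative definiteness of
# A^T P + P A certifies asymptotic stability of x' = A x.
P = np.eye(2)
M = A.T @ P + P @ A
print(np.max(np.linalg.eigvalsh(M)) < 0)  # True: M is negative definite here
```

In the paper's setting P is not fixed but is itself a decision variable, and the feasibility problem is solved with interior point methods; this sketch only verifies one candidate.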
Local Input-Output Stability of Recurrent Networks with Time-Varying Weights
, 2000
Abstract
We present local conditions for the input-output stability of recurrent neural networks with time-varying parameters, introduced for instance by noise or online adaptation. The conditions guarantee that a network implements a proper mapping from time-varying input to time-varying output functions, using a local equilibrium as the point of operation. We show how to calculate the necessary bounds on the allowed inputs to keep the network in the stable range, and apply the method to an example of learning an input-output map implied by the chaotic Roessler attractor.
Distributed by:
Abstract
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission of the author.

Preface

The work presented in this thesis has been carried out at the Division of Mechanics at Linköpings Universitet, with partial financial support from the Swedish Research Council (VR). I would like to thank my supervisor Prof. Anders Klarbring, without whom there would have been no thesis. My co-supervisors Prof. Matts Karlsson and Prof. Petter Krus should also be acknowledged. Thanks to everyone at the Division of Mechanics for good company. Special thanks to Dr. Jonas Stålhand, for your company, but also for introducing me to the field of mechanics. I wish to express my gratitude to my parents, Johan and Ulla, and my brothers, Andreas and Fredrik, as well as to my other relatives, and my friends outside the world of mechanics. Last but not least, I thank my lovely wife Eva and my beautiful sons, August and Eric.