Results 1–9 of 9
A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
, 1989
Abstract

Cited by 413 (4 self)
The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms are shown to allow networks having recurrent connections to learn complex tasks requiring the retention of information over time periods having either fixed or indefinite length.

1 Introduction. A major problem in connectionist theory is to develop learning algorithms that can tap the full computational power of neural networks. Much progress has been made with feedforward networks, and attention has recently turned to developing algorithms for networks with recurrent connections, wh...
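The gradient-following algorithm summarized above is what is now usually called real-time recurrent learning (RTRL). As a minimal sketch (not the authors' code; the network size, tanh nonlinearity, and learning rate are illustrative assumptions), one online update for a fully recurrent network with n units and m external inputs can be written as:

```python
import numpy as np

def rtrl_step(W, y, x, p, target, lr=0.05):
    """One step of real-time recurrent learning (sketch).
    W: (n, n+m) weights; y: (n,) unit activations; x: (m,) external input;
    p: (n, n, n+m) sensitivities dy_k/dW_ij; target: (n,) desired output."""
    n = y.size
    z = np.concatenate([y, x])          # recurrent + external inputs
    s = W @ z                           # net input to each unit
    y_new = np.tanh(s)
    fprime = 1.0 - y_new**2             # tanh derivative
    # sensitivity recursion: p'[k,i,j] = f'(s_k) * (sum_l W[k,l] p[l,i,j] + d_{ki} z_j)
    p_new = np.einsum('kl,lij->kij', W[:, :n], p)
    p_new[np.arange(n), np.arange(n), :] += z    # Kronecker-delta term
    p_new *= fprime[:, None, None]
    # online weight update from the instantaneous squared error
    e = y_new - target
    grad = np.einsum('k,kij->ij', e, p_new)
    return W - lr * grad, y_new, p_new
```

The sensitivity tensor p has n·n·(n+m) entries and is updated at every time step, which is precisely the nonlocal, computationally expensive bookkeeping the abstract mentions.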
Learning compatibility coefficients for relaxation labeling processes
 IEEE Trans. Pattern Anal. Machine Intell.
, 1994
Abstract

Cited by 39 (5 self)
Relaxation labeling processes have been widely used in many different domains, including image processing, pattern recognition, and artificial intelligence. They are iterative procedures that aim at reducing local ambiguities and achieving global consistency through a parallel exploitation of contextual information, which is quantitatively expressed in terms of a set of “compatibility coefficients.” The problem of determining compatibility coefficients has received considerable attention in the past, and many heuristic, statistics-based methods have been suggested. In this paper, we propose a rather different viewpoint on this problem: we derive the coefficients by attempting to optimize the performance of the relaxation algorithm over a sample of training data. No statistical interpretation is given: the compatibility coefficients are simply treated as real numbers for which performance is optimal. Experimental results over a novel application of relaxation are given, which prove the effectiveness of the proposed approach. Index Terms—Compatibility coefficients, constraint satisfaction, gradient projection, learning, neural networks, nonlinear
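For concreteness, one iteration of a classical relaxation labeling update (in the Rosenfeld–Hummel–Zucker style; the learned coefficients the paper derives would simply be supplied as the array R) might look as follows. This is a sketch with all shapes and names assumed, not the paper's implementation:

```python
import numpy as np

def relaxation_step(P, R):
    """One classical relaxation labeling iteration (sketch).
    P: (n, L) label probabilities for n objects over L labels;
    R: (n, n, L, L) compatibility coefficients r_ij(lam, mu), plain reals."""
    # support q_i(lam) = sum_j sum_mu r_ij(lam, mu) p_j(mu)
    Q = np.einsum('ijlm,jm->il', R, P)
    # multiplicative update followed by per-object renormalization
    P_new = np.clip(P * (1.0 + Q), 0.0, None)
    return P_new / P_new.sum(axis=1, keepdims=True)
```

Each iteration pulls every object's label distribution toward labels that are compatible with its neighbors' current distributions, which is the "parallel exploitation of contextual information" the abstract describes.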
On Planning And Exploration In Non-Discrete Environments
 Gesellschaft für Mathematik und Datenverarbeitung, D-5205 St
, 1991
Abstract

Cited by 10 (5 self)
The application of reinforcement learning to control problems has received considerable attention in the last few years [And86, Bar89, Sut84]. In general there are two principles for solving reinforcement learning problems: direct and indirect techniques, each having its advantages and disadvantages. We present a system that combines both methods [TML91, TML90]. By interaction with an unknown environment, a world model is progressively constructed using the backpropagation algorithm. For optimizing actions with respect to future reinforcement, planning is applied in two steps: an experience network proposes a plan, which is subsequently optimized by gradient descent through a chain of model networks. While operating in a goal-oriented manner due to the planning process, the experience network is trained. Its accumulating experience is fed back into the planning process in the form of initial plans, so that planning can be gradually reduced. In order to ensure complete system identif...
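The second planning step can be sketched as gradient descent on an action sequence through a chain of model networks. To keep the sketch self-contained, a known linear model stands in here for the learned (backpropagation-trained) model network, so the gradient through the chain is exact; the horizon, learning rate, and all names are illustrative assumptions:

```python
import numpy as np

def plan_actions(A, B, s0, goal, T=5, lr=0.1, iters=200):
    """Optimize a T-step action plan by gradient descent through a model
    chain (sketch).  A known linear model s_{t+1} = A s_t + B a_t stands in
    for the learned model network.  Cost: 0.5 * ||s_T - goal||^2."""
    a = np.zeros((T, B.shape[1]))          # initial plan: do nothing
    for _ in range(iters):
        # forward pass: roll the current plan through the model chain
        s = s0
        for t in range(T):
            s = A @ s + B @ a[t]
        # backward pass: propagate the final-state error back through time
        g = s - goal                       # dCost/ds_T
        for t in reversed(range(T)):
            a[t] -= lr * (B.T @ g)         # gradient step on action a_t
            g = A.T @ g                    # push the error one model step back
    return a
```

In the paper's system the initial plan would come from the experience network rather than from zeros, which is exactly how accumulated experience reduces the amount of planning needed.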
A General Feed-Forward Algorithm for Gradient Descent in Connectionist Networks
, 1990
Abstract

Cited by 6 (4 self)
An extended feed-forward algorithm for recurrent connectionist networks is presented. This algorithm, which works locally in time, is derived both for discrete-in-time networks and for continuous networks. Several standard gradient descent algorithms for connectionist networks (e.g. [48], [30], [28], [15], [34]), especially the backpropagation algorithm [36], are mathematically derived as special cases of this general algorithm. The learning algorithm presented in this paper is a superset of gradient descent learning algorithms for multilayer networks, recurrent networks and time-delay networks that allows any combination of their components. In addition, the paper presents feed-forward approximation procedures for initial activations and external input values. The former is used for optimizing starting values of the so-called context nodes; the latter turned out to be very useful for finding spurious input attractors of a trained connectionist network. Finally, we compare tim...
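As a concrete instance of the special cases mentioned, a single backpropagation step for a one-hidden-layer network can be sketched as follows (illustrative shapes, activation, and learning rate; not the paper's notation):

```python
import numpy as np

def backprop_step(W1, W2, x, target, lr=0.1):
    """One backpropagation step (sketch): tanh hidden layer, linear output,
    squared error -- the classical special case of the general algorithm."""
    h = np.tanh(W1 @ x)                            # hidden activations
    y = W2 @ h                                     # linear output
    e = y - target                                 # output error
    gW2 = np.outer(e, h)                           # dE/dW2
    gW1 = np.outer((W2.T @ e) * (1.0 - h**2), x)   # chain rule through tanh
    return W1 - lr * gW1, W2 - lr * gW2, 0.5 * float(e @ e)
```

The general algorithm of the paper recovers this update when the network has no recurrent or time-delay connections and a single training interval.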
Introduction to Neural Networks
, 1998
Abstract

Cited by 5 (0 self)
This technical report is a preliminary English translation of selected revised sections from the first part of the book Theoretical Issues of Neural Networks [75] by the first author, which represents a brief introduction to neural networks. The work does not provide a complete survey of neural network models; rather, the exposition focuses on the original motivations and on a clear technical description of several basic model types. It can be understood as an invitation to a deeper study of this field. Thus, the necessary background is prepared for those who have not yet encountered this phenomenon, so that they can appreciate the subsequent theoretical parts of the book. In addition, it can also be profitable for engineers who want to apply neural networks in their area of expertise. The introductory part does not require deep preliminary knowledge; it contains many pictures, and the mathematical formalism is reduced to the lowest degree in the first c...
A reference model approach to stability analysis of neural networks
 IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
, 2003
Abstract

Cited by 2 (0 self)
In this paper, a novel methodology called a reference model approach to stability analysis of neural networks is proposed. The core of the new approach is to study a neural network model with reference to other related models, so that different modeling approaches can be used in combination and powerfully cross-fertilized. Focusing on two representative neural network modeling approaches (the neuron state modeling approach and the local field modeling approach), we establish a rigorous theoretical basis for the feasibility and efficiency of the reference model approach. The new approach has been used to develop a series of new, generic stability theories for various neural network models. These results have been applied to several typical neural network systems, including Hopfield-type neural networks, recurrent backpropagation neural networks, BSB-type neural networks, bound-constraint optimization neural networks, and cellular neural networks. The results obtained unify, sharpen, or generalize most of the existing stability assertions, and illustrate the feasibility and power of the new method. Index Terms—Local field neural network model, reference model approach, stability analysis, static neural network model.
Dynamic Recurrent Neural Networks: a Dynamical Analysis
 IEEE Trans. on Systems, Man, and Cybernetics, Part B
, 1996
Abstract

Cited by 2 (0 self)
In this paper, we explore the dynamical features of a neural network model which presents two types of adaptive parameters: the classical weights between the units and the time constants associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. In order to achieve this, we study the effect of the statistical distribution of the weights and of the time constants on the network dynamics, and we make a statistical analysis of the neural transformation. We examine the network power spectra (to draw some conclusions about the frequency behavior of the network) and we compute the stability regions to explore the stability of the model. We show that the network is sensitive to variations of the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks). Nevertheless, our results highlight the improvements in the network dynamics d...
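The model described (classical weights plus per-neuron time constants) is a continuous-time, leaky-integrator recurrent network. A minimal forward-Euler simulation sketch, with the nonlinearity, step size, and all parameter values as illustrative assumptions:

```python
import numpy as np

def simulate_ctrnn(W, tau, I, y0, dt=0.05, steps=200):
    """Forward-Euler simulation of leaky-integrator units (sketch):
        tau_i * dy_i/dt = -y_i + tanh((W y)_i + I_i)
    W: recurrent weights; tau: per-neuron time constants; I: external input."""
    y = np.asarray(y0, dtype=float).copy()
    traj = np.empty((steps, y.size))
    for t in range(steps):
        y = y + (dt / tau) * (-y + np.tanh(W @ y + I))
        traj[t] = y
    return traj
```

Varying the distributions of W and tau in such a simulation is one direct way to observe the sensitivity to their mean values that the abstract reports.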
Apprentissage Dans Les Réseaux Récurrents Pour La Modélisation Mécanique Et Étude De Leurs Interactions Avec L'Environnement [Learning in Recurrent Networks for Mechanical Modeling, and a Study of Their Interactions with the Environment]
, 1995
Abstract

Cited by 1 (0 self)
...chapter III.3) and, finally, to outline the learning system envisaged in the context of learning physical models (chapter III.4). III.1 Acting in order to learn in connectionist networks. III.1.1 Generalities. During learning, a supervised network receives (input, output) pairs. These can come from the environment in different ways, and depending on the case, active learning takes various forms. In a number of cases, the network is directly connected to the environment: an input generator sends inputs to the network, either random ones or ones that systematically sweep the environment, and the environment then returns an output, which is the desired output for the network. Acting in order to learn consists of having the connectionist system itself generate the actions, in such a way as to facilitate learning (see figure III.1). The connectionist system then acts directly on the environment. The netw...
Artificial Neural Networks in Hydrology. I: Preliminary Concepts
Abstract
In this two-part series, the writers investigate the role of artificial neural networks (ANNs) in hydrology. ANNs are gaining popularity, as is evidenced by the increasing number of papers on this topic appearing in hydrology journals, especially over the last decade. In terms of hydrologic applications, this modeling tool is still in its nascent stages. The practicing hydrologic community is just becoming aware of the potential of ANNs as an alternative modeling tool. This paper is intended to serve as an introduction to ANNs for hydrologists. Apart from descriptions of various aspects of ANNs and some guidelines on their usage, this paper offers a brief comparison of the nature of ANNs and other modeling philosophies in hydrology. A discussion of the strengths and limitations of ANNs brings out the similarities they have with other modeling approaches, such as the physical model.