Results 1 – 10 of 23
Neural networks for control
 in Essays on Control: Perspectives in the Theory and its Applications (H.L. Trentelman and
, 1993
"... This paper starts by placing neural net techniques in a general nonlinear control framework. After that, several basic theoretical results on networks are surveyed. 1 ..."
Abstract

Cited by 26 (8 self)
This paper starts by placing neural net techniques in a general nonlinear control framework. After that, several basic theoretical results on networks are surveyed.
Uniqueness Of Weights For Neural Networks
 in Artificial Neural Networks with Applications in Speech and Vision
, 1993
"... Introduction In most applications dealing with learning and pattern recognition, neural nets are employed as models whose parameters, or "weights," must be fit to training data. Gradient descent and other algorithms are used in order to minimize an error functional, which penalizes mismatches betwe ..."
Abstract

Cited by 23 (8 self)
Introduction. In most applications dealing with learning and pattern recognition, neural nets are employed as models whose parameters, or "weights," must be fit to training data. Gradient descent and other algorithms are used in order to minimize an error functional, which penalizes mismatches between the desired outputs and those that a candidate net with a fixed architecture and varying weights produces. There are many numerical issues that arise naturally when using such a design approach, in particular: (i) the possibility of local minima which are not globally optimal, and (ii) the possibility of multiple global minimizers. The first question has been dealt with by many different authors, see for instance [5, 13, 14], and will not be reviewed here. Regarding point (ii), observe that there are obvious transformations that leave the behavior of a network invariant, such as interchanges of all incoming and outgoing weights between two neurons, that is, the relabeling of neu...
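The weight symmetries mentioned in the abstract can be checked numerically. The sketch below is illustrative and not taken from the paper: it uses a single-hidden-layer tanh network with randomly chosen weights, and verifies two classical invariances, relabeling hidden neurons (permuting rows of the input weights together with the matching columns of the output weights) and, since tanh is odd, flipping the signs of a hidden unit's incoming and outgoing weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-hidden-layer net: f(x) = C @ tanh(A @ x).
n_in, n_hidden, n_out = 3, 5, 2
A = rng.normal(size=(n_hidden, n_in))   # input-to-hidden weights
C = rng.normal(size=(n_out, n_hidden))  # hidden-to-output weights

def net(A, C, x):
    return C @ np.tanh(A @ x)

x = rng.normal(size=n_in)

# Symmetry 1: relabeling hidden neurons -- permute the rows of A and,
# consistently, the columns of C.
perm = rng.permutation(n_hidden)
A_p, C_p = A[perm, :], C[:, perm]

# Symmetry 2: tanh is odd, so tanh(-z) = -tanh(z); flipping the sign of a
# hidden unit's incoming and outgoing weights leaves the map unchanged.
s = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
A_s, C_s = s[:, None] * A, C * s[None, :]

assert np.allclose(net(A, C, x), net(A_p, C_p, x))
assert np.allclose(net(A, C, x), net(A_s, C_s, x))
print("permutation and sign-flip symmetries verified")
```

Such transformations produce distinct weight vectors realizing the same input-output map, which is exactly why global minimizers of the training error cannot be unique without further normalization.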
Neural Nets As Systems Models And Controllers
, 1992
"... This paper briefly surveys some recent results relevant to the suitability of "neural nets" as models for dynamical systems as well as controllers for nonlinear plants. In particular, it touches upon questions of approximation, identifiability, construction of feedback laws, classification and inter ..."
Abstract

Cited by 21 (8 self)
This paper briefly surveys some recent results relevant to the suitability of "neural nets" as models for dynamical systems as well as controllers for nonlinear plants. In particular, it touches upon questions of approximation, identifiability, construction of feedback laws, classification and interpolation, and computational capabilities of nets. No discussion is included of "learning" algorithms, concentrating instead on representational issues. 1. Introduction. The basic paradigm for control is that of a "plant" or physical device P interconnected with a controller C (Figure 1: Basic Paradigm). The controller uses measurements from P in order to compute signals, which are then fed back into the plant so as to attain a given regulation objective. (This description can be extended to incorporate the effect of external disturbances, the specification of desired trajectories, and so forth.) The plant P represents an existing system, and it is essential to have a mathematical model ...
State Observability In Recurrent Neural Networks
 Systems & Control Letters
, 1993
"... We obtain a characterization of observability for a class of nonlinear systems which appear in neural networks research. Keywords: Recurrent neural networks, observability. This research was supported in part by US Air Force Grant AFOSR910346, and also by an INDAM (Istituto Nazionale di Alta Matem ..."
Abstract

Cited by 13 (7 self)
We obtain a characterization of observability for a class of nonlinear systems which appear in neural networks research. Keywords: recurrent neural networks, observability. This research was supported in part by US Air Force Grant AFOSR-91-0346, and also by an INDAM (Istituto Nazionale di Alta Matematica Francesco Severi, Italy) fellowship. Rutgers Center for Systems and Control, December 1992, rev. May 1993. Francesca Albertini (also: Università di Padova, Dipartimento di Matematica, Via Belzoni 7, 35100 Padova, Italy) and Eduardo D. Sontag, Department of Mathematics, Rutgers University, New Brunswick, NJ 08903. Email: albertin@hilbert.rutgers.edu, sontag@hilbert.rutgers.edu. 1 Introduction. Systems consisting of a large number of interconnected "neurons...
Recurrent Neural Networks: Some Systems-Theoretic Aspects
 Dealing with Complexity: a Neural Network Approach
, 1997
"... This paper provides an exposition of some recent research regarding systemtheoretic aspects of continuoustime recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation pro ..."
Abstract

Cited by 11 (5 self)
This paper provides an exposition of some recent research regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation properties. Known characterizations of controllability, observability, and parameter identifiability are reviewed, as well as a result on minimality. Facts regarding the computational power of recurrent nets are also mentioned. Supported in part by US Air Force Grant AFOSR-94-0293. 1 Introduction. Recurrent nets have been introduced in control, computation, signal processing, optimization, and associative memory applications. Given matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, as well as a fixed Lipschitz scalar function $\sigma : \mathbb{R} \to \mathbb{R}$, the continuous-time recurrent network $\Sigma$ with activation function $\sigma$ and weight matrices $(A, B, C)$ is given by: $\frac{dx}{dt}(t) = \tilde{\sigma}^{(n)}(A...$
Complete Controllability of Continuous-Time Recurrent Neural Networks
 Systems and Control Letters
, 1997
"... This paper presents a characterization of controllability for the class of control systems commonly called (continuoustime) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent. 1 Introd ..."
Abstract

Cited by 9 (4 self)
This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent. 1 Introduction. This paper continues the study of system-theoretic properties of recurrent networks. Assume given a locally Lipschitz map $\sigma : \mathbb{R} \to \mathbb{R}$. By an $n$-dimensional, $m$-input (recurrent) $\sigma$-net we mean a continuous-time control system of the form
$$\dot{x}(t) = \tilde{\sigma}^{(n)}(Ax(t) + Bu(t)), \qquad (1)$$
where $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$. Here, for each map $\sigma : \mathbb{R} \to \mathbb{R}$ and each positive integer $n$, we use $\tilde{\sigma}^{(n)}$ to denote the diagonal mapping
$$\tilde{\sigma}^{(n)} : \mathbb{R}^n \to \mathbb{R}^n : \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \mapsto \begin{pmatrix} \sigma(x_1) \\ \vdots \\ \sigma(x_n) \end{pmatrix}. \qquad (2)$$
(Sometimes one includes, in addition, an observation or measurement function $y = Cx$, but this paper will not deal with observation issues.) The spaces $\mathbb{R}^m$ a...
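The dynamics in equations (1)–(2) are easy to integrate numerically. The following sketch, which uses forward Euler with randomly chosen weight matrices and an input signal of my own choosing (none of these specifics come from the paper), simulates a tanh σ-net of this form:

```python
import numpy as np

# Forward-Euler simulation of a continuous-time recurrent sigma-net
#   x'(t) = tanh(A x(t) + B u(t)),
# where tanh acts coordinate-wise (the diagonal map of eq. (2)).
# A, B, u, dt, and T below are illustrative choices, not from the paper.

n, m = 3, 2                        # state and input dimensions
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n))        # weight matrix A in R^{n x n}
B = rng.normal(size=(n, m))        # input matrix B in R^{n x m}

def simulate(x0, u, dt=1e-3, T=1.0):
    """Integrate x' = tanh(Ax + Bu(t)) from x(0) = x0 over [0, T]."""
    x = x0.copy()
    for k in range(int(T / dt)):
        x = x + dt * np.tanh(A @ x + B @ u(k * dt))
    return x

u = lambda t: np.array([np.sin(t), np.cos(t)])  # sample control input
xT = simulate(np.zeros(n), u)

# Since |tanh| <= 1, each coordinate moves at speed at most 1,
# so |x_i(T)| <= T for every i.
assert np.all(np.abs(xT) <= 1.0)
print("x(T) =", xT)
```

The bounded velocity noted in the final assertion is one reason complete controllability of such nets is a nontrivial question: the saturating activation caps how fast any state coordinate can move, regardless of how large the input is.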
Some Topics in Neural Networks and Control
, 1993
"... This report constitutes an expanded version of a presentation given by the author at the 1993 European Control Conference (short course on "Neural Nets for Control"). The first part places neurocontrol techniques in a general learning control framework. The second part of the report, which is essent ..."
Abstract

Cited by 6 (0 self)
This report constitutes an expanded version of a presentation given by the author at the 1993 European Control Conference (short course on "Neural Nets for Control"). The first part places neurocontrol techniques in a general learning control framework. The second part of the report, which is essentially independent of the first, briefly surveys several basic theoretical results regarding neural networks.
A Learning Result for Continuous-Time Recurrent Neural Networks
 Systems and Control Letters
, 1998
"... The following learning problem is considered, for continuoustime recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to ap ..."
Abstract

Cited by 4 (2 self)
The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched. 1 Introduction. This paper is concerned with systems defined by equations of the following type:
$$\dot{x}(t) = \tilde{\sigma}^{(n)}(Ax(t) + Bu(t)), \qquad y(t) = Cx(t), \qquad (1)$$
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $\tilde{\sigma}^{(n)} : \mathbb{R}^n \to \mathbb{R}^n$ is the diagonal map
$$\tilde{\sigma}^{(n)} : \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \mapsto \begin{pmatrix} \sigma(x_1) \\ \vdots \\ \sigma(x_n) \end{pmatrix}, \qquad (2)$$
and $\sigma : \mathbb{R} \to \mathbb{R}$ is a Lipsc...
Machine Learning Applied to the Control of Complex Systems
 in 3rd International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale
, 1996
"... The aim of this tutorial is to present the necessary interactions between machine learning and control theory. First we recall the basic definitions of control theory and machine learning. After a brief review of the various approaches in machine learning  distinguishing between supervised and ..."
Abstract

Cited by 4 (0 self)
The aim of this tutorial is to present the necessary interactions between machine learning and control theory. First we recall the basic definitions of control theory and machine learning. After a brief review of the various approaches in machine learning (distinguishing between supervised and unsupervised learning), we discuss the major methods used in intelligent control. Then we present another approach based on qualitative physics and rule-based incremental control, and describe its application to several problems ranging from toy problems (temperature control of an air-cooled room, position control of an inverted pendulum) to real-world problems (a mobile robot evolving within a cluttered environment). Keywords: intelligent control, rule-based control, machine learning, robotics. 1 Introduction. Robotics deals with more and more complex systems evolving in changing or unknown environments. Therefore conventional methods based on a complete knowledge of both the sys...
Homotopy Approaches For The Analysis And Solution Of Neural Network And Other Nonlinear Systems Of Equations
, 1995
"... Increasingly models, mappings, systems and algorithms used for signal processing need to be nonlinear in order to meet performance specifications in communications, computing and control systems applications. Simple computational models have been developed, including neural networks, which can effic ..."
Abstract

Cited by 3 (2 self)
Increasingly, the models, mappings, systems, and algorithms used for signal processing need to be nonlinear in order to meet performance specifications in communications, computing, and control systems applications. Simple computational models have been developed, including neural networks, which can efficiently implement a variety of nonlinear mappings through appropriate choice of model parameters. However, the design of arbitrary nonlinear mappings using these models and measured data requires both understanding how realizable (finite) systems perform if optimized given finite data, and a method for computing globally optimal system parameters. In this thesis, we use constructive homotopy methods both to geometrically explore the mapping capabilities of finite neural networks, and to rigorously develop a robust method for computing optimal solutions to systems of nonlinear equations which, like neural network equations, have an unknown number of solutions and may have solutions at infinity.