Results 11 to 20 of 159
For Neural Networks, Function Determines Form
, 1992
Abstract

Cited by 31 (14 self)
This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: assume given two nets whose neurons all have the same nonlinear activation function σ; if the two nets have equal behaviors as "black boxes," then necessarily they must have the same number of neurons and, except at most for sign reversals at each node, the same weights. Moreover, even if the activations are not a priori known to coincide, they are shown to be also essentially determined from the external measurements. Key words: neural networks, identification from input/output data, control systems.

1 Introduction
Many recent papers have explored the computational and dynamical properties of systems of interconnected "neurons." For instance, Hopfield ([7]), Cowan ([4]), and Grossberg and his school (see e.g. [3]) have all studied devices that can be modelled by sets of nonlinear dif...
Closed-form Analytic Maps in One and Two Dimensions Can Simulate Turing Machines
, 1996
Abstract

Cited by 30 (4 self)
We show closed-form analytic functions consisting of a finite number of trigonometric terms can simulate Turing machines, with exponential slowdown in one dimension or in real time in two or more. 1 A part of this author's work was done when he was visiting DIMACS at Rutgers University.

1 Introduction
Various authors have independently shown [9, 12, 4, 14, 1] that finite-dimensional piecewise-linear maps and flows can simulate Turing machines. The construction is simple: associate the digits of the x and y coordinates of a point with the left and right halves of a Turing machine's tape. Then we can shift the tape head by halving or doubling x and y, and write on the tape by adding constants to them. Thus two dimensions suffice for a map, or three for a continuous-time flow. These systems can be thought of as billiards or optical ray tracing in three dimensions, recurrent neural networks, or hybrid systems. However, piecewise-linear functions are not very realistic from a physical p...
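The tape-in-digits construction described in this abstract can be sketched concretely. This is an illustrative reconstruction, not code from the paper: it assumes a base-10 encoding with tape symbols 1..9 (0 as blank), and uses exact rational arithmetic so the digit shifts are precise.

```python
# Sketch of encoding a Turing-machine tape in two real numbers, as in the
# piecewise-linear constructions cited above. Assumptions for this sketch:
# base-10 digits, symbols 1..9, exact rationals instead of floats.
from fractions import Fraction

B = 10  # base of the digit encoding (an assumption of this sketch)

def encode(cells):
    """Encode a list of digits (nearest-to-head first) as 0.d1d2d3... in base B."""
    x = Fraction(0)
    for d in reversed(cells):
        x = (x + d) / B
    return x

def head_symbol(y):
    """The symbol under the head is the leading digit of y."""
    return int(y * B)

def move_right(x, y):
    """Shift the head right: pop the leading digit of y, push it onto x."""
    s = head_symbol(y)
    return (x + s) / B, y * B - s

def write(y, s):
    """Overwrite the symbol under the head with s by adding a constant."""
    return y + Fraction(s - head_symbol(y), B)

x = encode([3, 1])       # tape to the left of the head
y = encode([4, 1, 5])    # head cell and tape to the right
y = write(y, 9)          # write 9 under the head
x, y = move_right(x, y)  # step right: the 9 moves onto the left half
```

Each operation is an affine map on (x, y) whose coefficients depend only on the leading digit of y, which is exactly why a piecewise-linear (and, with trigonometric smoothing, closed-form analytic) map suffices.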
A Survey of Continuous-Time Computation Theory
 Advances in Algorithms, Languages, and Complexity
, 1997
Abstract

Cited by 29 (6 self)
Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.

1 Introduction
After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution," which has provided hardware designers with several new numerically based, computationally interesting models that are structurally simple enough to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45].) However, the more fundamental...
Beyond The Universal Turing Machine
, 1998
Abstract

Cited by 28 (1 self)
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
Rule Extraction from Recurrent Neural Networks: a Taxonomy and Review
 Neural Computation
, 2005
Abstract

Cited by 24 (3 self)
In this paper, the progress of this development is reviewed and analysed in detail. In order to structure the survey and to evaluate the techniques, a taxonomy specifically designed for this purpose has been developed. Moreover, important open research issues are identified which, if addressed properly, could give the field a significant push forward.
First-Order vs. Second-Order Single Layer Recurrent Neural Networks
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 1994
Abstract

Cited by 24 (4 self)
We examine the representational capabilities of first-order and second-order Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs.
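The second-order update this abstract refers to can be illustrated with a small sketch. This is not the paper's construction, only a generic second-order SLRNN step under stated assumptions: binary state and one-hot input vectors, a weight tensor W[j][i][k], and a hard limiter that fires on a positive sum. The product terms s_i * x_k are what let a single weight encode a (state, input) transition pair.

```python
# Hedged sketch: one update step of a second-order SLRNN with
# hard-limiting neurons. W, the parity example, and the one-hot input
# convention are illustrative assumptions, not taken from the paper.
def step_second_order(W, s, x):
    """s[i]: current state bits; x[k]: current input bits; W[j][i][k]: weights."""
    return [
        1 if sum(W[j][i][k] * s[i] * x[k]
                 for i in range(len(s)) for k in range(len(x))) > 0 else 0
        for j in range(len(W))
    ]

# A two-state parity recognizer over one-hot inputs x = [saw_0, saw_1]:
# state [1, 0] = even parity so far, [0, 1] = odd parity so far.
W = [
    [[1, 0], [0, 1]],  # neuron 0 fires on (even, 0) or (odd, 1)
    [[0, 1], [1, 0]],  # neuron 1 fires on (even, 1) or (odd, 0)
]
s = [1, 0]  # start in the even-parity state
for bit in [1, 0, 1, 1]:
    s = step_second_order(W, s, [1 - bit, bit])
# three 1s seen, so the net should end in the odd-parity state [0, 1]
```

A first-order net would have to express the same transition table through sums W[j][i]*s[i] + V[j][k]*x[k], which is where the extra states introduced by state-splitting come in.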
From Linear to Nonlinear: Some Complexity Comparisons
, 1995
Abstract

Cited by 23 (0 self)
1995 CDC. Keywords: complexity, controllability, nonlinear. Extended summary for the invited session entitled "Computational Complexity Issues in Control."

1. Introduction
It is obvious that many control problems are in general easier to solve for linear systems than for arbitrary, not necessarily linear, ones. An interesting and worthy area of research deals with the attempt to make mathematically precise the increases in difficulty that may arise when passing to the nonlinear case. By obtaining such precise statements, one gains an understanding of which analysis and/or design problems may be expected to be intractable. For instance, even for apparently mildly nonlinear systems it becomes impossible to check if a state ever reaches the origin. More interestingly perhaps, one can also then explain in what sense some variants of problems are easier than others for nonlinear systems. An example of this latter aspect is given by comparing the characterization of the accessibility property (being ...
Computational Complexity Of Neural Networks: A Survey
, 1994
Abstract

Cited by 23 (6 self)
We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1 [Computation by Abstract Devices]: Models of Computation (neural networks, circuits); F.1.3 [Computation by Abstract Devices]: Complexity Classes (complexity hierarchies). Key words: neural networks, computational complexity, threshold circuits, associative memory.

1. Introduction
The once-again very active field of computation by "neural" networks has opened up a wealth of fascinating research topics in the computational complexity analysis of the models considered. While much of the general appeal of the field stems not so much from new computational possibilities, but from the possibility of "learning", or synthesizing networks...
Vapnik-Chervonenkis Dimension of Recurrent Neural Networks
, 1997
Abstract

Cited by 23 (5 self)
Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k ≫ w. Ignoring multiplicative constants, the main results say roughly the following: • For architectures with activation σ = a...
On the Complexity of Training Neural Networks with Continuous Activation Functions
, 1993
Abstract

Cited by 23 (3 self)
We deal with computational issues of loading a fixed-architecture neural network with a set of positive and negative examples. This is the first result on the hardness of loading networks which do not consist of binary-threshold neurons, but rather utilize a particular continuous activation function commonly used in the neural network literature. We observe that the loading problem is solvable in polynomial time if the input dimension is constant. Otherwise, however, any possible learning algorithm based on particular fixed architectures faces severe computational barriers. Similar theorems have already been proved by Megiddo and by Blum and Rivest, for the case of binary-threshold networks only. Our theoretical results lend further justification to the use of incremental (architecture-changing) techniques for training networks rather than fixed architectures. Furthermore, they imply hardness of learnability in the probably-approximately-correct (PAC) sense as well.