Gradient calculation for dynamic recurrent neural networks: a survey
IEEE Transactions on Neural Networks, 1995
Cited by 135 (3 self)
We survey learning algorithms for recurrent neural networks with hidden units, and put the various techniques into a common framework. We discuss fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann Machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an online technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. We discuss advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continue with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks. We present some simulations, and at the end, address issues of computational complexity and learning speed.
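The survey above names backpropagation through time (BPTT) without spelling it out; a minimal sketch of the idea, for a hypothetical one-unit tanh recurrent network (all names and shapes are illustrative, not taken from the survey), is:

```python
import numpy as np

# Minimal BPTT sketch: unroll h_t = tanh(w*h_{t-1} + x_t) over the input
# sequence, then walk backwards accumulating the gradient of the final loss
# L = 0.5*(h_T - target)^2 with respect to the shared recurrent weight w.

def bptt_grad(w, inputs, target):
    hs = [0.0]                            # h_0 = 0
    for x in inputs:                      # forward pass through time
        hs.append(np.tanh(w * hs[-1] + x))
    grad = 0.0
    delta = hs[-1] - target               # dL/dh_T
    for t in range(len(inputs), 0, -1):   # backward pass through time
        pre = 1.0 - hs[t] ** 2            # tanh'(.) at step t
        grad += delta * pre * hs[t - 1]   # w's contribution at step t
        delta = delta * pre * w           # propagate error to h_{t-1}
    return grad
```

The backward loop is the whole trick: the same weight appears at every unrolled step, so its gradient is the sum of the per-step contributions.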
A Survey of Computational Complexity Results in Systems and Control
2000
Cited by 116 (21 self)
The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial-time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
The Dynamical Hypothesis in Cognitive Science
Behavioral and Brain Sciences, 1997
Cited by 109 (1 self)
The dynamical hypothesis is the claim that cognitive agents are dynamical systems. It stands opposed to the dominant computational hypothesis, the claim that cognitive agents are digital computers. This target article articulates the dynamical hypothesis and defends it as an open empirical alternative to the computational hypothesis. Carrying out these objectives requires extensive clarification of the conceptual terrain, with particular focus on the relation of dynamical systems to computers. Key words: cognition, systems, dynamical systems, computers, computational systems, computability, modeling, time. Long abstract: The heart of the dominant computational approach in cognitive science is the hypothesis that cognitive agents are digital computers; the heart of the alternative dynamical approach is the hypothesis that cognitive agents are dynamical systems. This target article attempts to articulate the dynamical hypothesis and to defend it as an empirical alternative to the compu...
Real-time neuroevolution in the NERO video game
IEEE Transactions on Evolutionary Computation, 2005
Cited by 76 (29 self)
In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the NeuroEvolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.
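The real-time replacement loop that distinguishes rtNEAT from generational evolution can be sketched as a toy, under heavy simplification: genomes here are single floats rather than NEAT network topologies, and the fitness function, population size, and replacement interval are all invented for illustration.

```python
import random

# Toy real-time evolution loop: instead of evolving in generations, one poorly
# performing agent is periodically removed and replaced by a mutated offspring
# of two fitter agents, so the population improves while it keeps "playing".

def evolve_realtime(pop_size=20, ticks=2000, replace_every=5, target=0.7, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    fitness = lambda g: -abs(g - target)          # higher is better
    for tick in range(1, ticks + 1):
        if tick % replace_every == 0:
            worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
            ranked = sorted(range(pop_size), key=lambda i: fitness(pop[i]),
                            reverse=True)
            p1, p2 = rng.sample(ranked[: pop_size // 2], 2)  # two fit parents
            child = (pop[p1] + pop[p2]) / 2 + rng.gauss(0.0, 0.02)
            pop[worst] = child                    # replace; the game never pauses
    return pop
```

The point of replacing one individual at a time is that the on-screen team is never wiped out wholesale, which is what lets evolution run while the game is being played.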
Dynamical Recognizers: Real-Time Language Recognition by Analog Computers
Theoretical Computer Science, 1996
Cited by 57 (4 self)
We consider a model of analog computation which can recognize various languages in real time. We encode an input word as a point in R^d by composing iterated maps, and then apply inequalities to the resulting point to test for membership in the language. Each class of maps and inequalities, such as quadratic functions with rational coefficients, is capable of recognizing a particular class of languages; for instance, linear and quadratic maps can have both stack-like and queue-like memories. We use methods equivalent to the Vapnik-Chervonenkis dimension to separate some of our classes from each other, e.g. linear maps are less powerful than quadratic or piecewise-linear ones, polynomials are less powerful than elementary (trigonometric and exponential) maps, and deterministic polynomials of each degree are less powerful than their nondeterministic counterparts. Comparing these dynamical classes with various discrete language classes helps illuminate how iterated maps can...
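The mechanism described above, iterating one map per input symbol and then testing an inequality, can be sketched in a few lines. The specific maps and language below are my own illustrative choice (a binary-expansion encoding in R^1, not an example from the paper): the maps prepend bits, so the final inequality x >= 1/2 holds exactly when the word ends in 'b'.

```python
# Hypothetical dynamical recognizer: each symbol applies an affine map to a
# point, and a linear inequality on the final point decides membership.

MAPS = {
    "a": lambda x: x / 2,            # prepend bit 0 to the binary expansion
    "b": lambda x: (x + 1) / 2,      # prepend bit 1
}

def recognize(word):
    x = 0.0
    for symbol in word:              # iterate the maps in real time
        x = MAPS[symbol](x)
    return x >= 0.5                  # inequality test for membership
```

Swapping in quadratic or piecewise-linear maps is what moves the recognizer between the language classes the paper separates.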
Quantum automata and quantum grammars
Theoretical Computer Science
Cited by 34 (2 self)
To study quantum computation, it might be helpful to generalize structures from language and automata theory to the quantum case. To that end, we propose quantum versions of finite-state and pushdown automata, and regular and context-free grammars. We find analogs of several classical theorems, including pumping lemmas, closure properties, rational and algebraic generating functions, and Greibach normal form. We also show that there are quantum context-free languages that are not context-free.
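A measure-once quantum finite automaton, the simplest of the quantum automata in this line of work, can be sketched numerically: the state is a unit vector, each symbol applies a unitary, and the word is accepted with probability equal to the squared amplitude on the accepting basis states. The single-symbol rotation automaton below is my own illustrative example, not one from the paper.

```python
import numpy as np

# Sketch of a measure-once quantum finite automaton over a one-letter
# alphabet: U_a rotates the state vector by pi/4, so after n copies of 'a'
# the acceptance probability is cos^2(n*pi/4).

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def accept_probability(word, unitaries, start, accepting):
    psi = start.astype(complex)
    for symbol in word:                  # unitary evolution, one step per symbol
        psi = unitaries[symbol] @ psi
    return float(sum(abs(psi[q]) ** 2 for q in accepting))

qfa = {"a": rotation(np.pi / 4)}
start = np.array([1.0, 0.0])             # begin in basis state |0>, which accepts
```

Because acceptance is probabilistic and the evolution must stay unitary, closure properties behave quite differently from the classical case, which is where the paper's pumping-lemma analogs come in.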
Computational Capabilities of Recurrent NARX Neural Networks
IEEE Trans. on Systems, Man and Cybernetics, 1997
Cited by 31 (8 self)
Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Ψ(u(t − n_u), …, u(t − 1), u(t), y(t − n_y), …, y(t − 1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Ψ is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent, and what restrictions on feedback limit computational power.
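The NARX recurrence y(t) = Ψ(u(t − n_u), …, u(t), y(t − n_y), …, y(t − 1)) can be sketched directly: feedback enters only through a window of past outputs, never through hidden state. Here Ψ is any callable standing in for the multilayer perceptron; the toy sum-and-tanh Ψ in the test is an assumption for illustration, not a trained network.

```python
import numpy as np

# Sketch of a NARX network's forward pass: the only recurrence is a sliding
# window of the network's own past outputs, concatenated with a sliding
# window of past and current inputs and fed to a static map Psi (an MLP).

def run_narx(psi, inputs, n_u, n_y):
    u_hist = np.zeros(n_u + 1)            # u(t - n_u), ..., u(t)
    y_hist = np.zeros(n_y)                # y(t - n_y), ..., y(t - 1)
    outputs = []
    for u in inputs:
        u_hist = np.append(u_hist[1:], u)             # slide input window
        y = psi(np.concatenate([u_hist, y_hist]))     # y(t) = Psi(...)
        y_hist = np.append(y_hist[1:], y)             # slide output window
        outputs.append(y)
    return outputs
```

The constructive proof in the paper amounts to showing that this limited output-feedback window, with a suitable Ψ, can still simulate the state of a fully connected recurrent network.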
Hypercomputation: computing more than the Turing machine
2002
Cited by 31 (5 self)
In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I take an extensive survey of many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure which allows comparisons of the many approaches and results. To this I add several new results and draw out some interesting consequences of hypercomputation for several different disciplines. I begin with a succinct introduction to the classical theory of computation and its place amongst some of the negative results of the 20th century. I then explain how the Church-Turing Thesis is commonly misunderstood and present new theses which better describe the possible limits on computability. Following this, I introduce ten different hypermachines (including three of my own) and discuss in some depth the manners in which they attain their power and the physical plausibility of each method. I then compare the powers of the different models using a device from recursion theory. Finally, I examine the implications of hypercomputation to mathematics, physics, computer science and philosophy. Perhaps the most important of these implications is that the negative mathematical results of Gödel, Turing and Chaitin are each dependent upon the nature of physics. This both weakens these results and provides strong links between mathematics and physics. I conclude that hypercomputation is of serious academic interest within many disciplines, opening new possibilities that were previously ignored because of long-held misconceptions about the limits of computation.
Beyond The Universal Turing Machine
1998
Cited by 31 (1 self)
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
A Weak Version of the Blum, Shub & Smale Model
1994
Cited by 29 (6 self)
We propose a weak version of the Blum-Shub-Smale model of computation over the real numbers. In this weak model only a "moderate" usage of multiplications and divisions is allowed. The class of boolean languages recognizable in polynomial time is shown to be the complexity class P/poly. The main tool is a result on the existence of small rational points in semialgebraic sets which is of independent interest. As an application, we generalize recent results of Siegelmann & Sontag on recurrent neural networks, and of Maass on feedforward nets. A preliminary version of this paper was presented at the 1993 IEEE Symposium on Foundations of Computer Science. Additional results include: (i) an efficient simulation of order-free real Turing machines by probabilistic Turing machines in the full Blum-Shub-Smale model; (ii) an efficient simulation of arithmetic circuits over the integers by boolean circuits; and (iii) the strict inclusion of the real polynomial hierarchy in weak exponentia...