Results 1 - 5 of 5
A General Framework for Adaptive Processing of Data Structures
IEEE TRANSACTIONS ON NEURAL NETWORKS, 1998
Abstract

Cited by 117 (46 self)
A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized according to relatively poor structures, like arrays or sequences. The framework described in this paper is an attempt to unify adaptive models like artificial neural nets and belief nets for the problem of processing structured information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The general framework proposed in this paper can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular, we study the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical formalism for r...
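The recursive state-space idea in this abstract can be illustrated with a minimal sketch (all names and dimensions below are my own assumptions, not the paper's): a single shared transition function computes a state vector for each node of a directed acyclic graph from the node's label and the states of its children, visiting nodes in topological order so that children are processed before parents.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE, LABEL, MAX_CHILDREN = 4, 3, 2  # illustrative sizes only

# Shared parameters, randomly initialised for illustration (a real system
# would learn them from input-output structure pairs).
W_label = rng.standard_normal((STATE, LABEL)) * 0.5
W_child = rng.standard_normal((STATE, MAX_CHILDREN * STATE)) * 0.5
b = np.zeros(STATE)

def node_state(label_onehot, child_states):
    """State of a node: squashed affine map of its label and child states."""
    kids = np.zeros(MAX_CHILDREN * STATE)  # absent children stay zero
    for i, c in enumerate(child_states):
        kids[i * STATE:(i + 1) * STATE] = c
    return np.tanh(W_label @ label_onehot + W_child @ kids + b)

def encode_dag(order, labels, children):
    """order: node names in topological order (children before parents)."""
    states = {}
    for v in order:
        kid_states = [states[c] for c in children.get(v, [])]
        states[v] = node_state(np.eye(LABEL)[labels[v]], kid_states)
    return states

# Tiny example: a three-node tree a(b, c); the root state summarises it.
states = encode_dag(["b", "c", "a"],
                    {"a": 0, "b": 1, "c": 2},
                    {"a": ["b", "c"]})
print(states["a"].shape)  # the root's fixed-size state vector
```

Because the same `node_state` function is applied at every node, the structure of the graph, not the network architecture, determines how many times it is unfolded, which is exactly how the framework generalises recurrent nets (chains) to acyclic graphs.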
Stable Encoding of Finite-State Machines in Discrete-Time Recurrent Neural Nets with Sigmoid Units
, 1998
Abstract

Cited by 14 (3 self)
In recent years, there has been a lot of interest in the use of discrete-time recurrent neural nets (DTRNN) to learn finite-state tasks, with interesting results regarding the induction of simple finite-state machines from input-output strings. Parallel work has studied the computational power of DTRNN in connection with finite-state computation. This paper describes a simple strategy to devise stable encodings of finite-state machines in computationally capable discrete-time recurrent neural architectures with sigmoid units, and gives a detailed presentation on how this strategy may be applied to encode a general class of finite-state machines in a variety of commonly used first- and second-order recurrent neural networks. Unlike previous work that either imposed some restrictions on state values, or used a detailed analysis based on fixed-point attractors, the present approach applies to any positive, bounded, strictly growing, continuous activation function, and uses simple bounding criteri...
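A minimal sketch of the kind of stable encoding this abstract describes (the details below are my own illustration, not the paper's construction): a second-order sigmoid DTRNN keeps one unit per DFA state, and a sufficiently large gain H makes the sigmoid saturate near 0 or 1, so the state vector remains an approximate one-hot vector for strings of any length.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Example DFA: parity of the number of 1s over alphabet {0, 1}.
# delta[state][symbol] -> next state; state 0 means "even so far".
delta = [[0, 1], [1, 0]]
n_states, n_symbols = 2, 2

# Second-order weights: W[j][i][k] = 1 iff delta(i, k) == j.
W = np.zeros((n_states, n_states, n_symbols))
for i in range(n_states):
    for k in range(n_symbols):
        W[delta[i][k], i, k] = 1.0

H = 20.0  # gain: large enough that sigmoid(H*(x - 0.5)) acts like a step

def run_dtrnn(string):
    x = np.eye(n_states)[0]            # start in state 0, one-hot
    for sym in string:
        u = np.eye(n_symbols)[sym]     # one-hot input symbol
        net = np.einsum('jik,i,k->j', W, x, u)
        x = sigmoid(H * (net - 0.5))   # saturating update re-sharpens x
    return int(np.argmax(x))

def run_dfa(string):
    q = 0
    for sym in string:
        q = delta[q][sym]
    return q

# The net tracks the DFA even on long strings -- no drift into instability.
for s in ([0, 1, 1, 0, 1], [1] * 50, [0] * 10 + [1] * 7):
    assert run_dtrnn(s) == run_dfa(s)
print("DTRNN matches DFA on all test strings")
```

The key point matching the abstract: because each update pushes every unit back toward 0 or 1, small numerical errors do not accumulate, which is what makes the encoding stable rather than merely approximate for short strings.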
Finite-State Computation in Analog Neural Networks: Steps Towards Biologically Plausible Models?
, 2001
Abstract

Cited by 3 (1 self)
Finite-state machines are the most pervasive models of computation, not only in theoretical computer science, but also in all of its applications to real-life problems, and constitute the best-characterized computational model. On the other hand, neural networks, proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings, have evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, i.e., biological neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and suggests possible ways to construct finite-state models in biologically plausible neural networks.
Efficient encodings of finite automata in discrete-time recurrent neural networks
Abstract

Cited by 3 (2 self)
A number of researchers have used discrete-time recurrent neural nets (DTRNN) to learn finite-state machines (FSM) from samples of input and output strings; trained DTRNN usually show FSM behaviour for strings up to a certain length, but not beyond; this is usually called instability. Some authors have shown that DTRNN may actually behave as FSM for strings of any length and have devised strategies to construct such DTRNN. In these strategies, m-state deterministic FSM are encoded and the number of state units in the DTRNN is Θ(m). This paper shows that more efficient sigmoid DTRNN encodings exist for a subclass of deterministic finite automata (DFA), namely, when the size of an equivalent nondeterministic finite automaton (NFA) is smaller, because an n-state NFA may be directly encoded in a DTRNN with Θ(n) units.
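Why Θ(n) units suffice for an n-state NFA can be illustrated with a sketch of my own (not the paper's construction): let unit j be "on" iff NFA state j is reachable after the prefix read so far. A saturated sigmoid then acts as an OR over the active predecessor units, so several units may be on at once, with no need for the exponential subset construction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Example NFA over {0, 1} accepting strings whose second-to-last symbol
# is 1: state 0 loops on both symbols, 0 -1-> 1, 1 -any-> 2 (final).
trans = {(0, 0): {0}, (0, 1): {0, 1}, (1, 0): {2}, (1, 1): {2}}
n, n_sym, final = 3, 2, 2

W = np.zeros((n, n, n_sym))          # W[j][i][k] = 1 iff j in trans[i, k]
for (i, k), dests in trans.items():
    for j in dests:
        W[j, i, k] = 1.0

H = 20.0                             # gain: saturates the sigmoid

def nfa_rnn_accepts(string):
    x = np.zeros(n); x[0] = 1.0      # only the initial state is active
    for sym in string:
        u = np.eye(n_sym)[sym]
        net = np.einsum('jik,i,k->j', W, x, u)
        x = sigmoid(H * (net - 0.5)) # unit j fires iff some active i reaches j
    return x[final] > 0.5

assert nfa_rnn_accepts([1, 1, 0])
assert nfa_rnn_accepts([0, 1, 1])
assert not nfa_rnn_accepts([1, 0, 0])
print("3-unit net simulates the 3-state NFA")
```

The smallest DFA for this language has four states (it must remember the last two symbols), while the NFA, and hence the net, needs only three units; for the classic family "the k-th-to-last symbol is 1" the gap grows exponentially, which is the source of the paper's efficiency claim.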
Asynchronous Translations with Recurrent Neural Nets
Abstract

Cited by 1 (0 self)
In recent years, many researchers have explored the relation between discrete-time recurrent neural networks (DTRNN) and finite-state machines (FSMs), either by showing their computational equivalence or by training them to perform as finite-state recognizers from examples. Most of this work has focussed on the simplest class of deterministic state machines, that is, deterministic finite automata and Mealy (or Moore) machines. The class of translations these machines can perform is very limited, mainly because these machines output symbols at the same rate as they input symbols, and therefore the input and the translation have the same length; one may call these translations synchronous. Real-life translations are more complex: word reorderings, deletions, and insertions are common in natural-language translations; or, in speech-to-phoneme conversion, the number of frames corresponding to each phoneme is different and depends on the particular speaker or word. There are, however, simple deterministic, finite-state machines (extensions of Mealy machines) that may perform these classes of "asynchronous" or "time-warped" translations. A simple DTRNN model with input and output control lines inspired by this class of machines is presented and successfully applied to simple asynchronous translation tasks, with interesting results regarding generalization. Training of these nets from input-output pairs is complicated by the fact that the time alignment between the target output sequence and the input sequence is unknown and has to be learned: we propose a new error function to tackle this problem. This approach to the induction of asynchronous translators is discussed in connection with other approaches.
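The asynchronous, "time-warped" translations this abstract contrasts with Mealy machines can be sketched with a toy extended machine of my own (the particular states and rules are illustrative only): each transition may emit zero, one, or several output symbols, so the output need not have the same length as the input.

```python
# delta[(state, symbol)] = (next_state, emitted_output_string)
# One state suffices here; the point is the per-transition output length.
delta = {
    ("q", "a"): ("q", "aa"),  # insertion: one input symbol -> two outputs
    ("q", "b"): ("q", ""),    # deletion: symbol consumed, nothing emitted
    ("q", "c"): ("q", "c"),   # the synchronous (Mealy) case, for contrast
}

def translate(string, start="q"):
    state, out = start, []
    for sym in string:
        state, emitted = delta[(state, sym)]
        out.append(emitted)
    return "".join(out)

print(translate("abca"))  # output longer than the input
print(translate("bbb"))   # output shorter than the input
```

An ordinary Mealy machine would be forced to emit exactly one symbol per input symbol; it is exactly this unknown input-output alignment that, per the abstract, complicates training a DTRNN from input-output pairs and motivates the proposed error function.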