Results 1 - 5 of 5
Recurrent Networks for Structured Data - a Unifying Approach and Its Properties
 Cognitive Systems Research
, 2002
Abstract

Cited by 8 (5 self)
We consider recurrent neural networks which deal with symbolic formulas, terms, or, generally speaking, tree-structured data. Approaches like the recursive autoassociative memory, discrete-time recurrent networks, folding networks, tensor construction, holographic reduced representations, and recursive reduced descriptions fall into this category. They share the basic dynamics of how structured data are processed: the approaches recursively encode symbolic data into a connectionistic representation, or decode symbolic data from a connectionistic representation, by means of a simple neural function. In this paper, we give an overview of the ability of neural networks with these dynamics to encode and decode tree-structured symbolic data. The correlated tasks, approximating and learning mappings where the input domain or the output domain may consist of structured symbolic data, are examined as well.
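The shared dynamics described in this abstract can be illustrated with a minimal sketch: a binary tree with symbolic leaves is encoded bottom-up into a fixed-size vector by applying one simple neural function at every internal node. All dimensions, weights, and leaf codes below are toy assumptions for illustration, not taken from the paper.

```python
import numpy as np

DIM = 4  # assumed size of the connectionistic representation
rng = np.random.default_rng(0)
W_left = rng.normal(size=(DIM, DIM))   # assumed random (untrained) weights
W_right = rng.normal(size=(DIM, DIM))
b = rng.normal(size=DIM)
leaf_codes = {"a": rng.normal(size=DIM), "b": rng.normal(size=DIM)}

def encode(tree):
    """Recursively encode a binary tree: a leaf is a symbol string,
    an internal node is a (left, right) tuple."""
    if isinstance(tree, str):  # symbolic leaf: look up its code
        return leaf_codes[tree]
    left, right = tree
    # one simple neural function combines the child representations
    return np.tanh(W_left @ encode(left) + W_right @ encode(right) + b)

# the tree ((a b) a) maps to a fixed-size vector regardless of its depth
code = encode((("a", "b"), "a"))
print(code.shape)
```

Decoding works with the same dynamics run in reverse: a simple neural function maps a vector back to the representations of the children, recursively unfolding the tree.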
Mathematical Aspects of Neural Networks
 European Symposium on Artificial Neural Networks 2003
, 2003
Abstract

Cited by 6 (4 self)
In this tutorial paper about mathematical aspects of neural networks, we will focus on two directions: on the one hand, we will motivate standard mathematical questions and well-studied theory of classical neural models used in machine learning. On the other hand, we collect some recent theoretical results (as of the beginning of 2003) in the respective areas. Thereby, we follow the dichotomy offered by the overall network structure and restrict ourselves to feedforward networks, recurrent networks, and self-organizing neural systems, respectively.
Perspectives on Learning Symbolic Data with Connectionistic Systems
 Adaptivity and Learning
Abstract

Cited by 5 (1 self)
This paper deals with the connection of symbolic and subsymbolic systems. It focuses on connectionistic systems processing symbolic data. We examine the capability of learning symbolic data with various neural architectures which constitute partially dynamic approaches: discrete-time partially recurrent neural networks as a simple and well-established model for processing sequences, and advanced generalizations like holographic reduced representation, recursive autoassociative memory, and folding networks for processing tree-structured data. The methods share the basic dynamics, but they differ in the specific training methods. We consider the following questions: What are the representational capabilities of the architectures from an algorithmic point of view? What are the representational capabilities from a statistical point of view? Are the architectures learnable in an appropriate sense? Are they efficiently learnable?
Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard
Abstract
We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture, containing only a single neuron that applies the standard (logistic) activation function to the weighted sum of n inputs, is hard to train. In particular, the problem of finding the weights of such a unit that minimize the relative quadratic training error within 1, or its average (over a training set) within 13/(31n), of its infimum proves to be NP-hard. Hence, the well-known backpropagation learning algorithm appears to be not efficient even for one neuron, which has negative consequences in constructive learning.

1 The Complexity of Neural Network Loading

Neural networks establish an important class of learning models that are widely applied in practice to solving artificial intelligence tasks [13]. The most prominent position among successful neural learning heuristics is occupied by the backprop...
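The quantity whose minimization is shown to be NP-hard can be written down directly: the quadratic training error of a single logistic unit over a training set. The sketch below is an illustration with assumed toy data, not the paper's hardness construction.

```python
import numpy as np

def sigmoid(z):
    """Standard (logistic) activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def quadratic_error(w, X, y):
    """Sum of squared errors of one sigmoid unit over a training set.
    X: (m, n) input patterns, y: (m,) targets in [0, 1], w: (n,) weights."""
    return float(np.sum((sigmoid(X @ w) - y) ** 2))

# toy training set (assumed for illustration)
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0])

# error of the zero-weight unit: every output is sigmoid(0) = 0.5,
# so the error is 0.25 + 0.25 + 0.25 = 0.75
print(quadratic_error(np.zeros(2), X, y))
```

The result says that no efficient algorithm can in general drive this error to within the stated factors of its infimum (unless P = NP), even for this one-neuron architecture.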