Results 1-10 of 11
Connectionist Learning Procedures
ARTIFICIAL INTELLIGENCE, 1989
Abstract

Cited by 338 (6 self)
A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple gradient-descent learning procedures work well for small tasks, and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks.
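The rule the abstract describes (each connection computes the derivative of a global error with respect to its own strength, then moves against it) can be sketched for a single linear unit. The names (`gradient_step`, `w`, `x`, `lr`) and the squared-error measure are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gradient_step(w, x, target, lr=0.1):
    """One gradient-descent update for a single linear unit.

    For the error E = 0.5 * (w @ x - target) ** 2, each weight's
    derivative dE/dw_i is (output - target) * x_i; stepping against
    that derivative decreases the error.
    """
    error = w @ x - target        # signed output error
    return w - lr * error * x     # step opposite the gradient

# repeated steps drive the unit's output toward the target
w = np.zeros(2)
x = np.array([1.0, 2.0])
for _ in range(100):
    w = gradient_step(w, x, target=3.0)
```

With this learning rate and input, the output error shrinks by a constant factor per step, so `w @ x` approaches the target of 3.0.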
Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network
1989
Abstract

Cited by 218 (0 self)
A new approach to unsupervised learning in a single-layer linear feedforward neural network is discussed. An optimality principle is proposed which is based upon preserving maximal information in the output units. An algorithm for unsupervised learning based upon a Hebbian learning rule, which achieves the desired optimality, is presented. The algorithm finds the eigenvectors of the input correlation matrix, and it is proven to converge with probability one. An implementation which can train neural networks using only local "synaptic" modification rules is described. It is shown that the algorithm is closely related to algorithms in statistics (Factor Analysis and Principal Components Analysis) and neural networks (Self-supervised Backpropagation, or the "encoder" problem). It thus provides an explanation of certain neural network behavior in terms of classical statistical techniques. Examples of the use of a linear network for solving image coding and texture segmentation problems are presented. Also, it is shown that the algorithm can be used to find "visual receptive fields" which are qualitatively similar to those found in primate retina and visual cortex.
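A minimal sketch in the spirit of this abstract, using Oja's single-unit Hebbian rule (a close relative of the multi-unit algorithm the paper analyzes) to show a purely local rule whose weight vector converges to the dominant eigenvector of the input correlation matrix. The covariance matrix and learning rate are illustrative assumptions.

```python
import numpy as np

# Illustrative input distribution; its top eigenvector is the
# direction of maximal variance (the first principal component).
rng = np.random.default_rng(0)
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)

# Oja's rule: a Hebbian term y*x plus a decay term -y*y*w that keeps
# the weight vector bounded, using only locally available signals.
w = rng.normal(size=2)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

w /= np.linalg.norm(w)   # unit-normalize for comparison
```

After training, `w` points (up to sign) along the top eigenvector of the input correlation matrix, which can be checked against `np.linalg.eigh`.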
Task Decomposition Through Competition in a Modular Connectionist Architecture
COGNITIVE SCIENCE, 1990
Abstract

Cited by 180 (5 self)
A novel modular connectionist architecture is presented in which the networks composing the architecture compete to learn the training patterns. As a result of the competition, different networks learn different training patterns and, thus, learn to compute different functions. The architecture performs task decomposition in the sense that it learns to partition a task into two or more functionally independent tasks and allocates distinct networks to learn each task. In addition, the architecture tends to allocate to each task the network whose topology is most appropriate to that task, and tends to allocate the same network to similar tasks and distinct networks to dissimilar tasks. Furthermore, it can be easily modified so as to...
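The competitive principle can be illustrated with a deliberately tiny sketch: two linear "expert" networks both answer each pattern, and only the more accurate one updates, so the experts drift toward different patterns. This is a generic winner-take-all illustration under assumed names (`experts`, `train_step`), not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
experts = [rng.normal(size=2) for _ in range(2)]   # two tiny linear "networks"

def train_step(x, target, lr=0.1):
    """Competition: every expert answers, only the most accurate one learns."""
    outputs = [w @ x for w in experts]
    k = int(np.argmin([(y - target) ** 2 for y in outputs]))  # winner
    experts[k] -= lr * (outputs[k] - target) * x              # winner's update

# two distinct input/target patterns standing in for two subtasks
tasks = [(np.array([1.0, 0.0]), 2.0),
         (np.array([0.0, 1.0]), -3.0)]
for _ in range(200):
    for x, t in tasks:
        train_step(x, t)
```

Once an expert wins a pattern, its error on that pattern keeps shrinking, so it keeps winning it; after training, the winning expert for each pattern matches its target closely.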
Natural Language Processing with Modular PDP Networks and Distributed Lexicon
Cognitive Science, 1991
Abstract

Cited by 83 (13 self)
An approach to connectionist natural language processing is proposed, which is based on hierarchically organized modular Parallel Distributed Processing (PDP) networks and a central lexicon of distributed input/output representations. The modules communicate using these representations, which are global and publicly available in the system. The representations are developed automatically by all networks while they are learning their processing tasks. The resulting representations reflect the regularities in the subtasks, which facilitates robust processing in the face of noise and damage, supports improved generalization, and provides expectations about possible contexts. The lexicon can be extended by cloning new instances of the items, that is, by generating a number of items with known processing properties and distinct identities. This technique combinatorially increases the processing power of the system. The recurrent FGREP module, together with a central lexicon, is used as a ba...
Modular Neural Networks: a state of the art
1995
Abstract

Cited by 12 (3 self)
The use of "global neural networks" (such as the back-propagation neural network) and "clustering neural networks" (such as the radial basis function neural network) each leads to different advantages and inconveniences. The combination of the desirable features of those two neural ways of computation is achieved by the use of Modular Neural Networks (MNN). In addition, a considerable advantage can emerge from the use of such a MNN: an interpretable and relevant neural representation of the plant's behaviour. This very desirable feature for function approximation, and especially for control problems, is what other neural models lack. This feature is so important that we introduce it as a way to differentiate MNN from other local computation models. However, to enable a systematic use of MNN three steps have to be achieved. First of all, the task has to be decomposed into subtasks, then the neural modules have to be properly organised considering the subtasks and finally a way of commu...
GAMLS: A Generalized framework for Associative Modular Learning Systems
In Proceedings of the Applications and Science of Computational Intelligence II, 1999
Abstract

Cited by 10 (8 self)
Learning a large number of simple local concepts is both faster and easier than learning a single global concept. Inspired by this principle of divide and conquer, a number of modular learning approaches have been proposed by the computational intelligence community. In modular learning, the classification/regression/clustering problem is first decomposed into a number of simpler subproblems, a module is learned for each of these subproblems, and finally their results are integrated by a suitable combining method. Mixtures of experts and clustering are two of the techniques that are describable in this paradigm. In this paper we present a broad framework for Generalized Associative Modular Learning Systems (GAMLS). Modularity is introduced through soft association of each training pattern with every module. The coupled problems of learning the module parameters and learning associations are solved iteratively using deterministic annealing. Starting at a high temperature with only one modu...
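The soft-association idea can be illustrated with a small annealed-clustering sketch: each pattern is softly associated with every module through a temperature-controlled softmax over squared errors, the modules are refit with those soft weights, and the temperature is gradually lowered. This is a generic deterministic-annealing illustration (two fixed one-dimensional "modules", assumed schedule constants), not the GAMLS procedure itself, which starts from a single module and grows.

```python
import numpy as np

def soft_assign(x, means, T):
    """Soft association of each pattern with every module: a softmax over
    negative squared error, sharpened as the temperature T drops."""
    d2 = (x[:, None] - means[None, :]) ** 2
    a = np.exp(-d2 / T)
    return a / a.sum(axis=1, keepdims=True)

def anneal(x, T0=10.0, Tmin=0.05, cool=0.7, sweeps=10):
    """Alternate soft assignment and module refitting while cooling."""
    means = np.array([x.min(), x.max()])   # crude initial modules
    T = T0
    while T > Tmin:
        for _ in range(sweeps):
            A = soft_assign(x, means, T)                # association step
            means = (A * x[:, None]).sum(0) / A.sum(0)  # refit step
        T *= cool                                       # lower temperature
    return np.sort(means)

# two well-separated 1-D clusters; the modules settle on their centers
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(5.0, 0.1, 100)])
means = anneal(x)
```

At high temperature the associations are nearly uniform; as the temperature drops they approach hard assignments, and each module ends up owning one cluster.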
Cooperative Modular Neural Network Classifiers
Neurocomputing J, 1996
Abstract

Cited by 2 (0 self)
The current generation of non-modular neural network classifiers is unable to cope with classification problems which have a wide range of overlap among classes. This is due to the high coupling among the networks' hidden nodes. Modular neural network structures attempt to reduce this limitation via a divide-and-conquer approach. However, current modular designs do not offer a general alternative to the non-modular approach because they do not provide a reasonable balance between subtask simplification and decision-making efficiency. While the task-decomposition algorithm attempts to produce subtasks as simple as they can be, the modules are expected to give the multi-module decision-making strategy enough information to take an accurate global decision.
A Knowledge-Based Approach to MLP Configuration
In International Conference on Neural Networks, 1996
Abstract

Cited by 2 (2 self)
We propose a knowledge-based approach to the task of determining the topology of multilayer perceptrons (MLPs). The idea consists in integrating well-founded and empirically proven configuration techniques into a knowledge-based system. A preliminary study showed that the use of these techniques depends primarily on the amount of prior domain knowledge available: configuration techniques can be situated along a spectrum going from knowledge-intensive techniques like symbolic structure compilation, through partial techniques using pieces of domain knowledge like hints, to knowledge-lean techniques based mainly on heuristic search. Our knowledge base for MLP configuration formalizes the conditions of applicability of these techniques as well as interdependencies and complementarities among them which might lead to novel hybrid configuration strategies.

1. Introduction

Neural network (NN) research has generated a plethora of architectures and algorithms, and it is becoming increasingly dif...
A Neural-network to get Correlated Informations among Multiple Inputs, submitted to IJCNN'93
1993
Abstract

Cited by 2 (2 self)
Humans can obtain useful information from sensor inputs and motion signals. The author takes a stand on the importance of the information which commonly exists among multiple inputs, which is called correlated information here, especially among sensor and motion signals. First, a method to get correlated information is proposed. In this method we feed multiple inputs to several neural networks respectively and make each output of each network communicate with that of the other network. Basic experiments were performed and it was confirmed that the correlated information can be extracted.