The upstart algorithm: A method for constructing and training feedforward neural networks (1990)

by M. Frean
Venue: Neural Computation

Results 1 - 10 of 192

Evolving Artificial Neural Networks

by Xin Yao, 1999
"... This paper: 1) reviews different combinations between ANN's and evolutionary algorithms (EA's), including using EA's to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA's; ..."
Abstract - Cited by 566 (6 self) - Add to MetaCart
This paper: 1) reviews different combinations between ANN's and evolutionary algorithms (EA's), including using EA's to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA's; and 3) points out possible future research directions. It is shown, through a considerably large literature review, that combinations between ANN's and EA's can lead to significantly better intelligent systems than relying on ANN's or EA's alone

An Evolutionary Algorithm that Constructs Recurrent Neural Networks

by Peter J. Angeline, Gregory M. Saunders, Jordan B. Pollack - IEEE Transactions on Neural Networks
"... Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, whi ..."
Abstract - Cited by 261 (14 self) - Add to MetaCart
Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, which includes genetic algorithms and evolutionary programming, is a population-based search method that has shown promise in such complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. This algorithm’s empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.
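
The structural mutation described above can be pictured with a short sketch: an evolutionary program perturbs both the weights and the topology of a graph-encoded recurrent network. The encoding and operators below are hypothetical simplifications in the spirit of the abstract, not GNARL's actual operators (which, among other things, scale mutation severity by a fitness-derived temperature).

```python
import random

random.seed(0)

def mutate(net, weight_step=0.1):
    """Perturb weights, then add or delete one link at random."""
    links = net["links"]
    # Parametric mutation: jitter every connection weight.
    for edge in links:
        links[edge] += random.gauss(0.0, weight_step)
    # Structural mutation: delete a random link, or add a new one
    # (possibly a recurrent/backward connection).
    if links and random.random() < 0.5:
        del links[random.choice(list(links))]
    else:
        src, dst = random.sample(net["nodes"], 2)
        links.setdefault((src, dst), random.gauss(0.0, 1.0))
    return net

# A tiny network: two inputs, one hidden node, one output.
net = {"nodes": ["in0", "in1", "h0", "out"],
       "links": {("in0", "h0"): 0.3, ("in1", "h0"): -0.5, ("h0", "out"): 1.2}}
mutate(net)
```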

Constructive Incremental Learning from Only Local Information

by Stefan Schaal, Christopher G. Atkeson, 1998
"... ... This article illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields. ..."
Abstract - Cited by 206 (39 self) - Add to MetaCart
... This article illustrates the potential learning capabilities of purely local learning and offers an interesting and powerful approach to learning with receptive fields.

A Review of Evolutionary Artificial Neural Networks

by Xin Yao, 1993
"... Research on potential interactions between connectionist learning systems, i.e., artificial neural networks (ANNs), and evolutionary search procedures, like genetic algorithms (GAs), has attracted a lot of attention recently. Evolutionary ANNs (EANNs) can be considered as the combination of ANNs and ..."
Abstract - Cited by 200 (23 self) - Add to MetaCart
Research on potential interactions between connectionist learning systems, i.e., artificial neural networks (ANNs), and evolutionary search procedures, like genetic algorithms (GAs), has attracted a lot of attention recently. Evolutionary ANNs (EANNs) can be considered as the combination of ANNs and evolutionary search procedures. This paper first distinguishes among three kinds of evolution in EANNs, i.e., the evolution of connection weights, of architectures and of learning rules. Then it reviews each kind of evolution in detail and analyses critical issues related to different evolutions. The review shows that although a lot of work has been done on the evolution of connection weights and of architectures, few attempts have been made to understand the evolution of learning rules. Interactions among different evolutions are seldom mentioned in current research. However, the evolution of learning rules and its interactions with other kinds of evolution play a vital role in EANNs. As t...
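
As a concrete illustration of the first kind of evolution the review distinguishes, the sketch below evolves the connection weights of a fixed one-hidden-layer network with a bare-bones mutation-only evolutionary loop. Everything here (network shape, operators, parameters) is a hypothetical minimal example, not code from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """One tanh hidden layer; weights = (W1, b1, W2, b2)."""
    W1, b1, W2, b2 = weights
    return np.tanh(x @ W1 + b1) @ W2 + b2

def fitness(weights, X, y):
    """Negative mean squared error, so higher is fitter."""
    return -np.mean((forward(weights, X).ravel() - y) ** 2)

def mutate(weights, sigma=0.1):
    """Gaussian perturbation of every weight array."""
    return tuple(w + sigma * rng.standard_normal(w.shape) for w in weights)

def evolve_weights(X, y, n_hidden=5, pop_size=20, generations=500):
    """Keep the fitter half each generation; offspring by mutation only."""
    def init():
        return (0.5 * rng.standard_normal((X.shape[1], n_hidden)),
                np.zeros(n_hidden),
                0.5 * rng.standard_normal((n_hidden, 1)),
                np.zeros(1))
    pop = [init() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, X, y), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(w) for w in elite]
    return max(pop, key=lambda w: fitness(w, X, y))

# Usage: evolve a small network toward the XOR mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
best = evolve_weights(X, y)
print("fitness of best individual:", fitness(best, X, y))
```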

The neural basis of cognitive development: A constructivist manifesto

by Steven R. Quartz, Terrence J. Sejnowski - Behavioral and Brain Sciences, 1997
"... Quartz, S. & Sejnowski, T.J. (1997). The neural basis of cognitive development: A constructivist manifesto. ..."
Abstract - Cited by 188 (2 self) - Add to MetaCart
Quartz, S. & Sejnowski, T.J. (1997). The neural basis of cognitive development: A constructivist manifesto.

The Complexity and Approximability of Finding Maximum Feasible Subsystems of Linear Relations

by Edoardo Amaldi, Viggo Kann - Theoretical Computer Science, 1993
"... We study the combinatorial problem which consists, given a system of linear relations, of finding a maximum feasible subsystem, that is a solution satisfying as many relations as possible. The computational complexity of this general problem, named Max FLS, is investigated for the four types of rela ..."
Abstract - Cited by 92 (11 self) - Add to MetaCart
We study the combinatorial problem which consists, given a system of linear relations, of finding a maximum feasible subsystem, that is a solution satisfying as many relations as possible. The computational complexity of this general problem, named Max FLS, is investigated for the four types of relations =, ≥, > and ≠. Various constrained versions of Max FLS, where a subset of relations must be satisfied or where the variables take bounded discrete values, are also considered. We establish the complexity of solving these problems optimally and, whenever they are intractable, we determine their degree of approximability. Max FLS with =, ≥ or > relations is NP-hard even when restricted to homogeneous systems with bipolar coefficients, whereas it can be solved in polynomial time for ≠ relations with real coefficients. The various NP-hard versions of Max FLS belong to different approximability classes depending on the type of relations and the additional constraints. We show that the ran...
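
To make the problem statement concrete: with "=" relations over two variables, any point that satisfies two or more equations lies at the intersection of some pair of lines, so a small instance can be solved exactly by enumerating pairwise intersections. This is a hypothetical worked example of the problem itself, not of the paper's hardness or approximation results.

```python
import itertools
import numpy as np

# Four relations a_i . x = b_i; rows 0 and 2 describe the same line.
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 2.0], [0.0, 1.0]])
b = np.array([2.0, 0.0, 4.0, 0.0])

def satisfied(x, tol=1e-9):
    """Indices of the relations that point x satisfies."""
    return [i for i in range(len(b)) if abs(A[i] @ x - b[i]) <= tol]

best = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:  # parallel or identical lines
        continue
    x = np.linalg.solve(M, b[[i, j]])
    sat = satisfied(x)
    if len(sat) > len(best):
        best = sat

print("a maximum feasible subsystem:", best)  # e.g. relations [0, 1, 2]
```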

Constructive Algorithms for Structure Learning in Feedforward Neural Networks for Regression Problems

by Tin-Yau Kwok, Dit-Yan Yeung - IEEE Transactions on Neural Networks, 1997
"... In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole ..."
Abstract - Cited by 87 (2 self) - Add to MetaCart
In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole problem as a state space search, we first describe the general issues in constructive algorithms, with special emphasis on the search strategy. A taxonomy, based on the differences in the state transition mapping, the training algorithm and the network architecture, is then presented. Keywords: constructive algorithm, structure learning, state space search, dynamic node creation, projection pursuit regression, cascade-correlation, resource-allocating network, group method of data handling. I. Introduction. A. Problems with Fixed Size Networks. In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among...
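
The start-small-and-grow loop the survey formalizes as a state space search can be sketched in a few lines. For brevity, each new hidden unit below gets random input weights and only the output weights are refit by least squares; the algorithms the survey actually covers (dynamic node creation, cascade-correlation, and so on) train the new unit's input weights as well. A hypothetical minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_unit_and_refit(H, X, y):
    """Append one tanh unit with random input weights; refit output weights."""
    h = np.tanh(X @ rng.standard_normal(X.shape[1]))[:, None]
    H = h if H is None else np.hstack([H, h])
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H, beta

def construct(X, y, tol=1e-2, max_units=50):
    """Grow the hidden layer until training MSE falls below tol."""
    H = None
    for n_units in range(1, max_units + 1):
        H, beta = add_unit_and_refit(H, X, y)
        mse = np.mean((H @ beta - y) ** 2)
        if mse < tol:  # satisfactory solution found; stop growing
            break
    return beta, n_units, mse

# Usage: fit a smooth target, reporting how many units were added.
X = rng.standard_normal((100, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
beta, n_units, mse = construct(X, y)
print(f"{n_units} hidden units, training MSE {mse:.4f}")
```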

Extracting Comprehensible Models from Trained Neural Networks

by W. Craven, 1996
"... To Mom, Dad, and Susan, for their support and encouragement. ..."
Abstract - Cited by 83 (3 self) - Add to MetaCart
To Mom, Dad, and Susan, for their support and encouragement.

Citation Context

... themselves, but instead must be used in conjunction with ordinary learning methods. Several constructive neural-network approaches have been previously developed (Ash, 1989; Fahlman & Lebiere, 1989; Frean, 1990). Similarly, there are several algorithms that simplify learned networks by removing weights or hidden units (Le Cun et al., 1989; Mozer & Smolensky, 1988). Unlike BBP, however, these methods are not...

The Design and Evolution of Modular Neural Network Architectures

by Bart L. M. Happel, Jacob M. J. Murre - Neural Networks, 1994
"... To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in rec ..."
Abstract - Cited by 64 (0 self) - Add to MetaCart
To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules, standard subnetworks that serve as higher-order units with a distinct structure and function. The simulations rely on a particular network module called CALM (Murre, Phaf, and Wolters, 1989, 1992). This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, compu...

Constructive Neural Network Learning Algorithms for Pattern Classification

by Rajesh Parekh, Jihoon Yang, Vasant Honavar, 2000
"... Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable we ..."
Abstract - Cited by 58 (15 self) - Add to MetaCart
Constructive learning algorithms offer an attractive approach for the incremental construction of near-minimal neural-network architectures for pattern classification. They help overcome the need for ad hoc and often inappropriate choices of network topology in algorithms that search for suitable weights in a priori fixed network architectures. Several such algorithms are proposed in the literature and shown to converge to zero classification errors (under certain assumptions) on tasks that involve learning a binary to binary mapping (i.e., classification problems involving binary-valued input attributes and two output categories). We present two constructive learning algorithms MPyramid-real and MTiling-real that extend the pyramid and tiling algorithms, respectively, for learning real to M-ary mappings (i.e., classification problems involving real-valued input attributes and multiple output classes). We prove the convergence of these algorithms and empirically demonstrate their applicability to practical pattern classification problems. Additionally, we show how the incorporation of a local pruning step can eliminate several redundant neurons from MTiling-real networks.
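
The local pruning step mentioned at the end suggests a simple picture: after training, greedily drop any hidden unit whose removal (with the output weights refit) does not hurt training error. The sketch below is a generic illustration of that idea under hypothetical names, not the MTiling-real pruning rule itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def refit_mse(H, y):
    """Least-squares output weights for hidden activations H; returns MSE."""
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ beta - y) ** 2)

def prune(H, y, slack=1e-6):
    """Drop columns (hidden units) of H that the fit can do without."""
    keep = list(range(H.shape[1]))
    base = refit_mse(H[:, keep], y)
    for j in sorted(keep, reverse=True):
        trial = [k for k in keep if k != j]
        if trial and refit_mse(H[:, trial], y) <= base + slack:
            keep = trial                  # unit j was redundant
            base = refit_mse(H[:, keep], y)
    return keep

# Usage: two of five hidden units duplicate others, so pruning removes them.
X = rng.standard_normal((200, 4))
W = rng.standard_normal((4, 3))
H = np.tanh(X @ np.hstack([W, W[:, :2]]))  # units 3 and 4 duplicate 0 and 1
y = H[:, :3] @ np.array([1.0, -2.0, 0.5])
print("kept units:", prune(H, y))          # expected: [0, 1, 2]
```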