Results 1–10 of 52
Speed Up Learning and Network Optimization With Extended Back Propagation
, 1992
Cited by 42 (0 self)
Methods to speed up learning in back propagation and to optimize the network architecture have recently been studied. This paper shows how adaptation of the steepness of the sigmoids during learning treats these two topics in a common framework. The adaptation of the steepness of the sigmoids is obtained by gradient descent. The resulting learning dynamics can be simulated by a standard network with fixed sigmoids and a learning rule whose main component is a gradient descent with adaptive learning parameters. A law linking variation in the weights to variation in the steepness of the sigmoids is discovered. Optimization of units is obtained by introducing a tendency to decay to zero in the steepness values. This decay corresponds to a decay of the sensitivity of the units. Units with low final sensitivity can be removed after a given transformation of the biases of the network. A decreasing initial distribution of the steepness values is suggested to obtain a good compromise between s...
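A minimal sketch of the adaptive-steepness idea described above, for a single neuron with squared error; this is our own toy illustration under assumed notation (weight w, steepness beta), not the paper's implementation:

```python
import numpy as np

def sigmoid(x, beta):
    # Sigmoid with adjustable steepness beta (beta = 1 gives the standard sigmoid).
    return 1.0 / (1.0 + np.exp(-beta * x))

def train_step(w, beta, x, t, lr=0.5):
    # One gradient-descent step on BOTH the weight and the steepness,
    # for a single neuron and error E = 0.5 * (y - t)**2. Toy example.
    a = w * x                              # pre-activation
    y = sigmoid(a, beta)
    err = y - t                            # dE/dy
    dE_dw = err * beta * y * (1 - y) * x   # chain rule through beta * a
    dE_dbeta = err * y * (1 - y) * a       # gradient w.r.t. the steepness
    return w - lr * dE_dw, beta - lr * dE_dbeta

w, beta = 0.1, 1.0
x, t = 1.0, 0.9
for _ in range(1000):
    w, beta = train_step(w, beta, x, t)
```

In the paper's scheme a decay term pulling beta toward zero would be added, so that units whose steepness (sensitivity) decays can be pruned.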
Connectionist theory refinement: Genetically searching the space of network topologies
 Journal of Artificial Intelligence Research
, 1997
Cited by 32 (1 self)
An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as our previous algorithm for growing knowledge-based networks.
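A toy sketch of genetically searching topology space. The operators and fitness below are simplified stand-ins: REGENT's crossover and mutation are tailored to knowledge-based networks, and its fitness is the validation performance of a trained network, not the arithmetic score used here:

```python
import random

def crossover(a, b):
    # One-point crossover on lists of hidden-layer widths (toy stand-in).
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(topo):
    # Perturb one randomly chosen layer width.
    i = random.randrange(len(topo))
    child = list(topo)
    child[i] = max(1, child[i] + random.choice([-2, -1, 1, 2]))
    return child

def evolve(fitness, pop, generations=30):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        keep = pop[: len(pop) // 2]        # truncation selection (elitist)
        pop = keep + [mutate(crossover(random.choice(keep), random.choice(keep)))
                      for _ in range(len(pop) - len(keep))]
    return max(pop, key=fitness)

# Hypothetical fitness: prefer two-layer topologies whose total width is near 12.
random.seed(0)
start = [[random.randrange(1, 10), random.randrange(1, 10)] for _ in range(10)]
best = evolve(lambda t: -abs(sum(t) - 12), start)
```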
An iterative pruning algorithm for feedforward neural networks
 IEEE Trans. Neural Networks
, 1997
Cited by 31 (0 self)
Abstract — The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach to tackling this problem is commonly known as pruning and consists of training a larger-than-necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach. Index Terms — Feedforward neural networks, generalization, hidden neurons, iterative methods, least-squares methods, network pruning, pattern recognition, structure simplification.
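The weight-adjustment step can be sketched as a least-squares problem: remove a hidden unit, then re-fit the surviving output weights so the network output over the training set changes as little as possible. This toy version uses numpy's generic solver in place of the paper's conjugate-gradient algorithm, and all names are ours:

```python
import numpy as np

def prune_unit(H, v, j):
    # H: hidden activations over the training set (samples x hidden units),
    # v: output weights, j: unit to remove. Re-fit the remaining weights in
    # the least-squares sense so the training-set outputs are preserved.
    y = H @ v                               # original outputs on the training set
    H_rest = np.delete(H, j, axis=1)        # activations of the surviving units
    v_new, *_ = np.linalg.lstsq(H_rest, y, rcond=None)
    return v_new, y - H_rest @ v_new        # new weights and per-sample error

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 5))
H[:, 4] = H[:, 0] + 0.5 * H[:, 1]           # unit 4 made redundant by construction
v = rng.normal(size=5)
v_new, err = prune_unit(H, v, 4)
```

Comparing the residual across candidate units gives a natural criterion for which unit to remove, in the spirit of the paper's selection rule.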
An Efficient Sequential Learning Algorithm for Growing and Pruning RBF (GAP-RBF) Networks
 IEEE Trans. on Systems, Man, and Cybernetics–Part B: Cybernetics
, 2004
Cited by 24 (4 self)
This paper presents a simple sequential growing and pruning algorithm for radial basis function (RBF) networks. The algorithm referred to as growing and pruning (GAP)-RBF uses the concept of "Significance" of a neuron and links it to the learning accuracy. "Significance" of a neuron is defined as its contribution to the network output averaged over all the input data received so far. Using a piecewise-linear approximation for the Gaussian function, a simple and efficient way of computing this significance has been derived for uniformly distributed input data. In the GAP-RBF algorithm, the growing and pruning are based on the significance of the "nearest" neuron. In this paper, the performance of the GAP-RBF learning algorithm is compared with other well-known sequential learning algorithms like RAN, RAN-EKF, and MRAN on an artificial problem with uniform input distribution and three real-world nonuniform, higher-dimensional benchmark problems. The results indicate that the GAP-RBF algorithm can provide comparable generalization performance with a considerably reduced network size and training time.
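A brute-force sketch of the neuron-"Significance" idea: average one Gaussian unit's weighted response over all inputs seen so far. The paper instead derives a closed-form piecewise-linear approximation for this average; the setup below is our own hypothetical example:

```python
import numpy as np

def gaussian_rbf(x, center, width):
    # Response of a single Gaussian RBF neuron to input(s) x.
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / width ** 2)

def significance(weight, center, width, inputs):
    # Average absolute contribution of the neuron to the network output
    # over all inputs received so far (brute-force version).
    return abs(weight) * float(np.mean(gaussian_rbf(inputs, center, width)))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))            # uniform input distribution
sig_near = significance(1.0, np.zeros(2), 0.5, X)     # neuron centred in the data
sig_far = significance(1.0, np.full(2, 5.0), 0.5, X)  # neuron far from the data
```

A neuron whose significance falls below a threshold would be pruned; a new neuron is grown only if it would be significant.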
An Anytime Approach To Connectionist Theory Refinement: Refining The Topologies Of Knowledge-Based Neural Networks
, 1995
Cited by 20 (3 self)
Many scientific and industrial problems can be better understood by learning from samples of the task at hand. For this reason, the machine learning and statistics communities devote considerable research effort to generating inductive-learning algorithms that try to learn the true "concept" of a task from a set of its examples. Oftentimes, however, one has additional resources readily available, but largely unused, that can improve the concept that these learning algorithms generate. These resources include available computer cycles, as well as prior knowledge describing what is currently known about the domain. Effective utilization of available computer time is important since for most domains an expert is willing to wait for weeks, or even months, if a learning system can produce an improved concept. Using prior knowledge is important since it can contain information not present in the current set of training examples. In this thesis, I present three "anytime" approaches to connec...
Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing
, 1996
Cited by 19 (10 self)
The purpose of this book is to present a collection of papers that represents a broad spectrum of current research in learning methods for natural language processing, and to advance the state of the art in language learning and artificial intelligence. The book should bridge a gap between several areas that are usually discussed separately, including connectionist, statistical, and symbolic methods. In order to bring together new and different language learning approaches, we held a workshop at the International Joint Conference on Artificial Intelligence in Montreal in August 1995. Paper contributions were selected and revised after having been reviewed by at least two members of the international program committee as well as additional reviewers. This book contains the revised workshop papers and additional papers by members of the program committee. In particular this book focuses on current issues such as:
- How can we apply existing learning methods to language processing?
- What new learning methods are needed for language processing and why?
- What language knowledge should be learned and why?
Extraction of Rules from Artificial Neural Networks for Nonlinear Regression
, 2002
Cited by 18 (0 self)
Neural networks have been successfully applied to solve a variety of application problems including classification and function approximation. They are especially useful as function approximators because they do not require prior knowledge of the input data distribution and they have been shown to be universal approximators. In many applications, it is desirable to extract knowledge that can explain how the problems are solved by the networks. Most existing approaches have focused on extracting symbolic rules for classification. Few methods have been devised to extract rules from trained neural networks for regression. This article presents an approach for extracting rules from trained neural networks for regression. Each rule in the extracted rule set corresponds to a subregion of the input space and a linear function involving the relevant input attributes of the data approximates the network output for all data samples in this subregion. Extensive experimental results on 32 benchmark data sets demonstrate the effectiveness of the proposed approach in generating accurate regression rules.
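A toy 1-D sketch of the extracted-rule form described above: each rule pairs a subregion of the input space with a linear function approximating the model's output there. The real method works on trained networks with multi-dimensional inputs; the target function and region boundaries below are our own illustration:

```python
import numpy as np

def extract_linear_rules(x, y, boundaries):
    # For each subregion, fit a linear function approximating the model
    # output y there; each (region, coefficients) pair is one "rule".
    rules = []
    edges = [-np.inf] + list(boundaries) + [np.inf]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        A = np.column_stack([x[mask], np.ones(mask.sum())])  # slope + intercept
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        rules.append(((lo, hi), coef))
    return rules

x = np.linspace(-2, 2, 400)
y = np.abs(x)                       # stand-in for a trained network's output
rules = extract_linear_rules(x, y, boundaries=[0.0])
```

Here the two extracted rules recover "if x < 0 then y = -x" and "if x >= 0 then y = x".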
Combining Exploratory Projection Pursuit And Projection Pursuit Regression With Application To Neural Networks
 Neural Computation
, 1992
Cited by 17 (9 self)
We present a novel classification and regression method that combines exploratory projection pursuit (unsupervised training) with projection pursuit regression (supervised training), to yield a new family of cost/complexity penalty terms. Some improved generalization properties are demonstrated on real-world problems.

1 Introduction

Parameter estimation becomes difficult in high-dimensional spaces due to the increasing sparseness of the data. Therefore, when a low-dimensional representation is embedded in the data, dimensionality reduction methods become useful. One such method, projection pursuit regression (PPR) (Friedman and Stuetzle, 1981), is capable of performing dimensionality reduction by composition, namely, it constructs an approximation to the desired response function using a composition of lower-dimensional smooth functions. These functions depend on low-dimensional projections through the data. When the dimensionality of the problem is in the thousands, even projection...
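A crude single-stage sketch of projection pursuit regression: try projection directions, fit a 1-D "smooth" along each projection, keep the direction with the lowest residual. Random direction search and a cubic-polynomial smooth stand in for Friedman and Stuetzle's actual optimization and smoother; everything below is a toy example:

```python
import numpy as np

def fit_ppr_stage(X, y, n_dirs=200, seed=0):
    # One PPR stage: random-search over unit directions, cubic fit in the
    # projected coordinate, keep the (residual, direction, coefficients)
    # triple with the smallest residual.
    rng = np.random.default_rng(seed)
    best = (np.inf, None, None)
    for _ in range(n_dirs):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        z = X @ w
        A = np.vander(z, 4)                 # cubic basis in the projection
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = float(np.sum((A @ coef - y) ** 2))
        if resid < best[0]:
            best = (resid, w, coef)
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = X[:, 0] ** 2                            # response depends on one projection only
resid, w, coef = fit_ppr_stage(X, y)
```

Full PPR would subtract this stage's fit from y and repeat on the residuals, building the approximation by composition of such ridge functions.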
Guidelines for Financial Forecasting with Neural Networks
 Proceedings of the International Conference on Neural Information Processing
, 2001
Cited by 11 (0 self)
Neural networks are good at classification, forecasting and recognition. They are also good candidates for financial forecasting tools. Forecasting is often used in the decision-making process. Neural network training is an art. Trading based on neural network outputs, or trading strategy, is also an art. We will discuss a seven-step neural network forecasting model building approach in this article. Pre- and post-data processing/analysis skills, data sampling, training criteria and model recommendation will also be covered in this article.
JETNET 3.0 - A Versatile Artificial Neural Network Package
, 1993
Cited by 9 (2 self)
In this paper, quantities written in sans-serif denote matrices and quantities written in boldface denote vectors.