Results 11–20 of 206
Finding the Embedding Dimension and Variable Dependencies in Time Series
, 1994
"... : We present a general method, the ffitest, which establishes functional dependencies given a sequence of measurements. The approach is based on calculating conditional probabilities from vector component distances. Imposing the requirement of continuity of the underlying function, the obtained va ..."
Abstract

Cited by 38 (4 self)
We present a general method, the δ-test, which establishes functional dependencies given a sequence of measurements. The approach is based on calculating conditional probabilities from vector component distances. Imposing the requirement of continuity of the underlying function, the obtained values of the conditional probabilities carry information on the embedding dimension and variable dependencies. The power of the method is illustrated on synthetic time series with different time-lag dependencies and noise levels, and on the sunspot data. The virtue of the method for preprocessing data in the context of feedforward neural networks is demonstrated. Also, its applicability for tracking residual errors in output units is stressed. (pihong@thep.lu.se, carsten@thep.lu.se) Introduction: The behaviour of a dynamical system is often modeled by analyzing a time series record of certain system variables. Using artificial neural networks (ANN) to model such systems has recently attr...
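The test's core quantity — an estimate of the probability that two outputs are close given that their delay-vector inputs are close, which continuity drives toward one once the embedding dimension suffices — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the logistic-map series, the thresholds `delta` and `eps`, and the pair-counting estimator are all assumptions:

```python
import numpy as np

def continuity_probability(series, dim, delta, eps):
    """Estimate P(|y_i - y_j| < eps  given  ||x_i - x_j|| < delta)
    for delay vectors x_t = (s_t, ..., s_{t+dim-1}) and targets y_t = s_{t+dim}.
    A probability near 1 suggests `dim` captures the functional dependence."""
    s = np.asarray(series, dtype=float)
    n = len(s) - dim
    X = np.stack([s[i:i + n] for i in range(dim)], axis=1)  # delay vectors
    y = s[dim:dim + n]
    close_in = close_both = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < delta:
                close_in += 1
                if abs(y[i] - y[j]) < eps:
                    close_both += 1
    return close_both / close_in if close_in else float("nan")

# deterministic logistic map s_{t+1} = 4 s_t (1 - s_t): dim = 1 should suffice
s = [0.3]
for _ in range(300):
    s.append(4 * s[-1] * (1 - s[-1]))
p = continuity_probability(s, dim=1, delta=0.01, eps=0.05)
print(round(p, 2))  # 1.0
```

For this fully deterministic map a one-dimensional embedding already determines the next value, so the estimated conditional probability is 1; with noise or an insufficient `dim` it falls below 1.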
Connectionist theory refinement: Genetically searching the space of network topologies
 Journal of Artificial Intelligence Research
, 1997
"... An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domainspecific knowledge to improve its ability to generalize. Connectionist theoryrefinement systems, which use background knowledge to select a neural ..."
Abstract

Cited by 38 (1 self)
An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven to be effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they lack the ability to refine the topology of the neural networks they produce, thereby limiting generalization, especially when given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) the genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm is able to significantly increase generalization compared to a standard connectionist theory-refinement system, as well as to our previous algorithm for growing knowledge-based networks.
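The genetic search over topologies can be caricatured on plain bit-vectors (1 = hidden unit present). This sketch shows only generic crossover and mutation; REGENT's actual operators are specialized to knowledge-based networks, and its fitness evaluation (training each candidate network) is omitted:

```python
import random

def crossover(t1, t2):
    """One-point crossover on topology bit-vectors.
    Illustrative only: REGENT's operators act on knowledge-based
    network structure, not plain bitstrings."""
    p = random.randrange(1, len(t1))
    return t1[:p] + t2[p:], t2[:p] + t1[p:]

def mutate(t, rate=0.1):
    """Flip each bit with probability `rate` (add or remove a unit)."""
    return [b ^ (random.random() < rate) for b in t]

random.seed(0)
a, b = crossover([1, 1, 1, 1], [0, 0, 0, 0])
m = mutate([1, 1, 0, 0], rate=0.5)
print(a, b, m)
```

In a full loop, each offspring topology would be trained and scored, and the best-generalizing networks retained for the next generation.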
New tools in nonlinear modelling and prediction
 Comput. Manag. Sci
, 2004
"... 1.1 The Gamma test........................... 4 1.1.1 The slope constant A.................... 6 1.1.2 Local versus global...................... 7 ..."
Abstract

Cited by 35 (4 self)
(Contents excerpt)
1.1 The Gamma test
1.1.1 The slope constant A
1.1.2 Local versus global
Computing Second Derivatives in Feed-Forward Networks: A Review
 IEEE Transactions on Neural Networks
, 1994
"... . The calculation of second derivatives is required by recent training and analyses techniques of connectionist networks, such as the elimination of superfluous weights, and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate ..."
Abstract

Cited by 35 (4 self)
The calculation of second derivatives is required by recent training and analysis techniques for connectionist networks, such as the elimination of superfluous weights and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate algorithms for calculating second derivatives. For networks with |w| weights, simply writing the full matrix of second derivatives requires O(|w|^2) operations. For networks of radial basis units or sigmoid units, exact calculation of the necessary intermediate terms requires on the order of 2h + 2 backward/forward-propagation passes, where h is the number of hidden units in the network. We also review and compare three approximations (ignoring some components of the second derivative, numerical differentiation, and scoring). Our algorithms apply to arbitrary activation functions, networks, and error functions (for instance, with connections that skip layers, or radial basis functions, or ...
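Of the three approximations reviewed, numerical differentiation is the simplest to state: the Hessian column for weight k is the central difference of the gradient along that coordinate, at a cost of 2|w| gradient evaluations. A minimal sketch (the quadratic test function and step size are assumptions, not from the paper):

```python
import numpy as np

def hessian_fd(grad, w, h=1e-5):
    """Approximate the |w| x |w| Hessian by central differences of the
    gradient -- the 'numerical differentiation' approximation."""
    n = w.size
    H = np.empty((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        H[:, k] = (grad(w + e) - grad(w - e)) / (2 * h)
    return 0.5 * (H + H.T)  # symmetrize against finite-difference noise

# quadratic error E(w) = 0.5 w^T A w has exact gradient A w and Hessian A
A = np.array([[2.0, 1.0], [1.0, 3.0]])
grad = lambda w: A @ w
H = hessian_fd(grad, np.array([0.5, -0.2]))
print(np.allclose(H, A, atol=1e-4))  # True
```

For a real network, `grad` would be one backpropagation pass, so the whole approximation costs 2|w| backward passes, compared with the 2h + 2 passes of the exact scheme.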
The Maintenance of Uncertainty
 in Control Systems
, 1997
"... It is important to remain uncertain, of observation, model and law. For the Fermi Summer School, Criticisms Requested email : lenny@maths.ox.ac.uk, Contents 1 ..."
Abstract

Cited by 35 (6 self)
It is important to remain uncertain, of observation, model and law. (Lecture notes for the Fermi Summer School; criticisms requested: lenny@maths.ox.ac.uk.)
An iterative pruning algorithm for feedforward neural networks
 IEEE Trans. Neural. Networks
, 1997
"... Abstract — The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach tackling this problem is commonly known as pruning and consists o ..."
Abstract

Cited by 35 (0 self)
The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach to this problem is commonly known as pruning and consists of training a larger-than-necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach. Index Terms: Feedforward neural networks, generalization, hidden neurons, iterative methods, least-squares methods, network pruning, pattern recognition, structure simplification.
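The formulation — remove a unit, then adjust the remaining output weights so the network's behaviour over the training set is preserved in the least-squares sense — can be sketched for a single linear output layer. Assumptions: the paper solves the system with a conjugate gradient method, whereas this sketch uses `np.linalg.lstsq` for brevity, and the redundant-unit example is invented:

```python
import numpy as np

def prune_unit(Z, v, j):
    """Remove hidden unit j from a linear output layer y = Z v
    (Z: n_samples x h hidden activations) and re-fit the remaining
    weights u so that Z_{-j} u matches the original outputs in the
    least-squares sense over the training set."""
    y = Z @ v                      # original outputs over the training set
    Zr = np.delete(Z, j, axis=1)   # activations without unit j
    u, *_ = np.linalg.lstsq(Zr, y, rcond=None)
    residual = np.linalg.norm(Zr @ u - y)
    return u, residual

rng = np.random.default_rng(1)
Z = rng.standard_normal((50, 5))
Z[:, 4] = Z[:, 0] + Z[:, 1]        # unit 4 is a linear combination of 0 and 1
v = rng.standard_normal(5)
u, res = prune_unit(Z, v, 4)
print(res < 1e-9)  # redundancy absorbed by the remaining weights: True
```

The residual also plays the role of the selection criterion: the unit whose removal yields the smallest residual is the natural candidate to prune.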
Artificial neural network models for forecasting and decision making
, 1994
"... Some authors advocate artificial neural networks as a replacement for statistical forecasting and decision models; other authors are concerned that artificial neural networks might be oversold or just a fad. In this paper we review the literature comparing artificial neural networks and statistical ..."
Abstract

Cited by 34 (0 self)
Some authors advocate artificial neural networks as a replacement for statistical forecasting and decision models; other authors are concerned that artificial neural networks might be oversold or just a fad. In this paper we review the literature comparing artificial neural networks and statistical models, particularly in regression-based forecasting, time series forecasting, and decision making. Our intention is to give a balanced assessment of the potential of artificial neural networks for forecasting and decision-making models. We survey the literature and summarize several studies we have performed. Overall, the empirical studies find artificial neural networks comparable to their statistical counterparts. We note the need to use the many mathematical proofs underlying artificial neural networks to determine the best conditions for applying them to forecasting and decision making.
Operating regime based process modeling and identification, Ph.D. thesis
"... the Department of Engineering Cybernetics, who has been of great inspiration and support. Thanks. Moreover, I would like to thank Prof. Petros Ioannou at the UniversityofSouthern California for hosting my six month visit at USC. My interactions with him and his students improved my mathematical prec ..."
Abstract

Cited by 33 (12 self)
the Department of Engineering Cybernetics, who has been of great inspiration and support. Thanks. Moreover, I would like to thank Prof. Petros Ioannou at the University of Southern California for hosting my six-month visit at USC. My interactions with him and his students improved my mathematical precision and resulted in some adaptive control results that are partially reported in this thesis. Two chapters in this thesis are based on manuscripts that are co-authored with Aage V. Sørensen at ...
Accelerated Learning By Active Example Selection
 International Journal of Neural Systems
, 1994
"... Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative a ..."
Abstract

Cited by 33 (10 self)
Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm using more sophisticated weight modification rules, whereby all the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which the learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of the given examples for solving the particular task. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks. This method can be used in conjunction with other variations of gradient descent algorithms. Introduction: One of the most widely used methods for training multilayer feedforward neural networks is the erro...
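The selection step can be sketched with prediction error standing in for the criticality measure (the paper's actual measure differs; the toy model and data here are assumptions):

```python
import numpy as np

def select_critical(model_predict, X, y, k):
    """Pick the k examples the current model handles worst -- a simple
    stand-in for a 'criticality' measure. These would be added to the
    growing training set before the next round of training."""
    err = np.abs(model_predict(X) - y)
    return np.argsort(err)[-k:]

# toy: the current model ignores x and predicts 0, so the most
# critical examples are those with the largest |y|
X = np.arange(6, dtype=float)
y = np.array([0.1, -2.0, 0.05, 3.0, -0.2, 1.5])
idx = select_critical(lambda X: np.zeros_like(X), X, y, 2)
print(sorted(idx.tolist()))  # [1, 3]
```

Training then alternates between fitting the model on the selected subset and enlarging the subset with newly critical examples, which is what lets the method start from a small training set.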