Results 1–10 of 24
Regression Modeling in Back-Propagation and Projection Pursuit Learning
1994
Abstract

Cited by 71 (1 self)
We studied and compared two types of connectionist learning methods for model-free regression problems in this paper. One is the popular backpropagation learning (BPL), well known in the artificial neural networks literature; the other is projection pursuit learning (PPL), which has emerged in recent years in the statistical estimation literature. Both the BPL and the PPL are based on projections of the data in directions determined from interconnection weights. However, unlike the fixed nonlinear activations (usually sigmoidal) used for the hidden neurons in BPL, the PPL systematically approximates the unknown nonlinear activations. Moreover, the BPL estimates all the weights simultaneously at each iteration, while the PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although the BPL and the PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, the PPL proves more parsimonious in that the PPL requires fewer hi...
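The contrast the abstract draws between simultaneous and cyclic weight estimation can be sketched on a toy quadratic problem (this is an illustration of the two update schedules, not the paper's actual BPL or PPL algorithms; the data, learning rate, and iteration counts are invented):

```python
# Toy data generated from y = 1*x1 + 2*x2, so both schemes should
# recover weights near (1, 2).
xs = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0)]
ys = [5.0, 4.0, 11.0, 10.0]

def simultaneous_step(w, lr=0.01):
    """BPL-style: one gradient step on every weight at once."""
    g = [0.0, 0.0]
    for x, y in zip(xs, ys):
        e = y - w[0]*x[0] - w[1]*x[1]
        g[0] += -2 * e * x[0]
        g[1] += -2 * e * x[1]
    return [w[0] - lr*g[0], w[1] - lr*g[1]]

def cyclic_sweep(w):
    """PPL-style: exactly minimize over one weight at a time,
    holding the others fixed (a coordinate-descent sweep)."""
    w = list(w)
    for j in range(2):
        num = sum(x[j] * (y - sum(w[k]*x[k] for k in range(2) if k != j))
                  for x, y in zip(xs, ys))
        den = sum(x[j]**2 for x in xs)
        w[j] = num / den
    return w

w_sim = [0.0, 0.0]
for _ in range(200):
    w_sim = simultaneous_step(w_sim)

w_cyc = [0.0, 0.0]
for _ in range(50):
    w_cyc = cyclic_sweep(w_cyc)
```

Both schedules reach the same least-squares solution here; the point is only that the cyclic scheme solves an exact one-dimensional problem per weight, which is the flavor of PPL's neuron-by-neuron estimation.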
Design of Neural Network Filters
Electronics Institute, Technical University of Denmark, 1993
Abstract

Cited by 21 (12 self)
(Translated from Danish.) The subject of the present licentiate thesis is the design of neural network filters. Filters based on neural networks can be seen as extensions of the classical linear adaptive filter, aimed at modeling nonlinear relationships. The main emphasis is on a neural network implementation of the nonrecursive, nonlinear adaptive model with additive noise. The aim is to clarify a number of phases involved in designing neural network architectures for performing various "black-box" modeling tasks, such as system identification, inverse modeling, and time-series prediction. The principal contributions include the formulation of a neural-network-based canonical filter representation, which forms the basis for developing an architecture classification scheme. Essentially, this amounts to a distinction between global and local models. It allows a number of known neural network architectures to be classified and, furthermore, opens the possibility of developing entirely new structures. In this context, a number of well-known architectures are reviewed. Particular emphasis is placed on the treatment of the multilayer perceptron neural network.
A novel iron loss reduction technique for distribution transformers based on a combined genetic algorithm-neural network approach
IEEE Trans. Syst., Man, Cybern. C, 2001
Abstract

Cited by 17 (7 self)
Abstract—This paper presents an effective method to reduce the iron losses of wound core distribution transformers based on a combined neural network and genetic algorithm approach. The originality of the work presented in this paper is that it tackles the iron loss reduction problem during the transformer production phase, whereas previous works concentrated on the design phase. More specifically, neural networks effectively use measurements taken at the first stages of core construction to predict the iron losses of the assembled transformers, while genetic algorithms are used to improve the grouping process of the individual cores, reducing the iron losses of the assembled transformers. The proposed method has been tested in a transformer manufacturing facility. The results demonstrate the feasibility and practicality of this approach. Significant reduction of transformer iron losses is observed in comparison to the current practice, leading to important economic savings for the transformer manufacturer. Index Terms—Core grouping process, decision trees, genetic algorithms, intelligent core loss modeling, iron loss reduction, neural networks.
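The core-grouping idea can be sketched with a minimal genetic algorithm (this is an invented toy, not the paper's encoding or fitness function: eight hypothetical core-loss measurements are paired into four transformers so that per-transformer losses are balanced):

```python
import random

random.seed(0)
core_losses = [10.2, 11.5, 9.8, 12.1, 10.9, 11.0, 9.5, 12.4]  # invented values

def fitness(perm):
    """Spread of per-transformer (pair) losses: lower is better."""
    sums = [core_losses[perm[i]] + core_losses[perm[i+1]]
            for i in range(0, len(perm), 2)]
    mean = sum(sums) / len(sums)
    return sum((s - mean)**2 for s in sums)

def mutate(perm):
    """Swap two cores between groups."""
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

# population of candidate groupings, evolved with elitist selection
population = [random.sample(range(8), 8) for _ in range(20)]
best_initial = min(population, key=fitness)
for _ in range(100):
    population.sort(key=fitness)
    survivors = population[:10]                # keep the better half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]
best = min(population, key=fitness)
```

In the paper the fitness would come from the neural network's iron-loss predictions for each candidate grouping rather than from raw measurements, but the evolutionary loop has this shape.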
On the regularization of forgetting recursive least square
IEEE Transactions on Neural Networks, 1999
Abstract

Cited by 16 (10 self)
Abstract—In this paper, the regularization effect of employing the forgetting recursive least squares (FRLS) training technique on feedforward neural networks is studied. We derive our result from the corresponding equations for the expected prediction error and the expected training error. By comparing these error equations with those obtained previously for the weight decay method, we find that the FRLS technique has an effect identical to that of the simple weight decay method. This new finding suggests that the FRLS technique is another online approach to realizing the weight decay effect. In addition, we show that, under certain conditions, both the model complexity and the expected prediction error of a model trained by the FRLS technique are better than those of one trained by the standard RLS method. Index Terms—Feedforward neural network, forgetting recursive least squares, model complexity, prediction error, regularization, weight decay.
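The weight decay effect that the paper links to FRLS can be seen on a scalar model (an illustrative sketch, not the paper's derivation; data and decay strength are invented): gradient descent with an l2 decay term converges to the ridge solution w* = Σxy / (Σx² + λ), i.e. the decay shrinks the estimate exactly as the penalty predicts.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]    # roughly y = 2x (invented data)
lam = 1.0                    # decay strength (assumed)

sxx = sum(x*x for x in xs)
sxy = sum(x*y for x, y in zip(xs, ys))
w_ridge = sxy / (sxx + lam)  # closed-form weight-decay (ridge) solution

# gradient descent on sum((y - w*x)^2) + lam*w^2
w, lr = 0.0, 0.01
for _ in range(2000):
    grad = -2 * sum(x * (y - w*x) for x, y in zip(xs, ys)) + 2 * lam * w
    w -= lr * grad
```

The iterate lands on the penalized solution, which is strictly smaller than the unpenalized least-squares estimate sxy/sxx: the shrinkage the abstract attributes to both weight decay and FRLS.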
A generalized learning paradigm exploiting the structure of feedforward neural networks
IEEE Trans. Neural Networks, 1996
Abstract

Cited by 16 (0 self)
In this paper a general class of fast learning algorithms for feedforward neural networks is introduced and described. The approach exploits the separability of each layer into linear and nonlinear blocks and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed using any preferred optimization strategy. In the second step, each linear block is optimized separately using a least squares (LS) criterion. To demonstrate the effectiveness of the new approach, a detailed treatment of a gradient descent in the neuron space is given. The main properties of this approach are a higher speed of convergence than methods that employ an ordinary gradient descent in the weight space (backpropagation, BP), and better numerical conditioning and lower computational cost than techniques based on the Hessian matrix. Numerical stability is assured by the use of robust LS linear system solvers operating directly on the input data of each layer. Experimental results obtained on three problems are described, which confirm the effectiveness of the new method.
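The two-step scheme can be sketched for a single scalar layer y = tanh(w·x) (an illustrative reduction, not the paper's full algorithm; the step size, data, and iteration count are invented): step 1 descends the error in the space of the linear-block outputs z = w·x, and step 2 refits w to the updated z values by least squares.

```python
import math

xs = [0.5, 1.0, 1.5, 2.0]
ts = [math.tanh(1.2 * x) for x in xs]   # targets from a "true" weight of 1.2

w = 0.2
for _ in range(300):
    # step 1: gradient descent in the neuron space (on the z_i, not on w)
    zs_new = []
    for x, t in zip(xs, ts):
        z = w * x
        y = math.tanh(z)
        grad_z = (y - t) * (1 - y*y)    # d(squared error)/dz
        zs_new.append(z - grad_z)
    # step 2: least-squares fit of the linear block to the new z values
    w = sum(x * z for x, z in zip(xs, zs_new)) / sum(x*x for x in xs)
```

The LS refit is a one-dimensional projection here; in the paper it is a full linear system solved per block, which is where the robust LS solvers come in.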
A Neural Network Training Algorithm Utilizing Multiple Sets of Linear Equations
Abstract

Cited by 13 (6 self)
A fast algorithm is presented for the training of multilayer perceptron neural networks, which uses separate error functions for each hidden unit and solves multiple sets of linear equations. The algorithm builds upon two previously described techniques. In each training iteration, output weight optimization (OWO) solves linear equations to optimize the output weights, which are those connecting to output layer net functions. The method of hidden weight optimization (HWO) develops desired hidden unit net signals from delta functions. The resulting hidden unit error functions are minimized with respect to the hidden weights, which are those feeding into hidden unit net functions. An algorithm is described for calculating the learning factor for the hidden weights. We show that the combined technique, OWO-HWO, is superior in terms of convergence to standard OWO-BP (output weight optimization-backpropagation), which uses OWO to update output weights and backpropagation to update hidden weights. We also...
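The OWO step can be sketched in isolation (an illustrative toy network, not the paper's: one tanh hidden unit plus a bias, invented data). With the hidden-unit outputs fixed, the optimal output weights solve the linear normal equations (HᵀH)w = Hᵀt, here a 2×2 system solved in closed form:

```python
import math

xs = [0.0, 0.5, 1.0, 1.5]
ts = [0.1, 0.6, 1.1, 1.4]                     # toy targets
hidden = [[math.tanh(x), 1.0] for x in xs]    # one tanh unit + bias unit

# accumulate H^T H (entries a, b, d) and H^T t (entries p, q)
a = sum(h[0]*h[0] for h in hidden)
b = sum(h[0]*h[1] for h in hidden)
d = sum(h[1]*h[1] for h in hidden)
p = sum(h[0]*t for h, t in zip(hidden, ts))
q = sum(h[1]*t for h, t in zip(hidden, ts))

# solve the 2x2 system by Cramer's rule
det = a*d - b*b
w1 = (d*p - b*q) / det     # weight from the hidden unit
w0 = (a*q - b*p) / det     # bias weight
```

Because these weights satisfy the normal equations exactly, the residual is orthogonal to both hidden signals, which is what makes the output-weight step a one-shot linear solve rather than an iterative descent.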
On the Kalman filtering method in neural network training and pruning
IEEE Trans. Neural Networks, 1999
Abstract

Cited by 5 (1 self)
Abstract—When using the extended Kalman filter approach in training and pruning a feedforward neural network, one usually encounters the problems of how to set the initial condition and how to use the obtained result to prune the network. In this paper, some cues on setting the initial condition are presented and illustrated with a simple example. Then, based on three assumptions—1) the size of the training set is large enough; 2) the training is able to converge; and 3) the trained network model is close to the actual one, an elegant equation linking the error sensitivity measure (the saliency) and the result obtained via the extended Kalman filter is devised. The validity of the devised equation is then verified on a simulated example. Index Terms—Extended Kalman filter, multilayer perceptron, pruning, training, weight saliency.
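A scalar sketch of Kalman-filter weight estimation shows the role of the initial condition the abstract mentions (illustrative only: the model here is linear, so the "extended" linearization is trivial, and all values are invented). The initial covariance p encodes confidence in the initial weight; a large p lets the first observations dominate.

```python
xs = [1.0, 2.0, 0.5, 1.5, 2.5, 1.0]
ys = [2.0, 4.0, 1.0, 3.0, 5.0, 2.0]   # noiseless y = 2x for clarity
r = 0.01                               # assumed measurement-noise variance

w, p = 0.0, 100.0                      # initial weight and its covariance
for x, y in zip(xs, ys):
    h = x                              # Jacobian dy/dw (exact here: linear model)
    k = p * h / (h * p * h + r)        # Kalman gain
    w = w + k * (y - w * x)            # innovation (prediction-error) update
    p = (1 - k * h) * p                # covariance update
```

The final covariance p is the kind of quantity the paper relates to the weight's saliency: a small p means the data pin the weight down tightly, so pruning it would be costly.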
Neural network structures and training algorithms for microwave applications
Int. J. RF Microwave CAE, 1999
Abstract

Cited by 4 (2 self)
ABSTRACT: Neural networks have recently gained attention as fast and flexible vehicles for microwave modeling, simulation, and optimization. After learning and abstracting from microwave data, through a process called training, neural network models are used during microwave design to provide instant answers to the task learned. An appropriate neural network structure and a suitable training algorithm are two of the major issues in developing neural network models for microwave applications. Together, they determine the amount of training data required, the accuracy that can be achieved, and, more importantly, the development cost of the neural models. A review of the current status of this emerging technology is presented, with emphasis on neural network structures and training algorithms suitable for microwave applications. Present challenges and future directions of the area are discussed.
Non-Linear Relevance Feedback: Improving the Performance of Content-Based Retrieval Systems
IEEE International Conference on Multimedia and Expo (ICME 2000), 2000
Abstract

Cited by 4 (0 self)
In this paper, a nonlinear relevance feedback mechanism is proposed for increasing the performance and reliability of content-based retrieval systems. In particular, the human is considered part of the retrieval process in an interactive framework, evaluating the results provided by the system so that the system automatically updates its performance based on the user's feedback. An adaptively trained neural network architecture is used for implementing the nonlinear feedback. The weight adaptation is performed in such a way that the network output satisfies the user's selections as much as possible, while simultaneously providing minimal degradation over all previous data. Experimental results indicate that the proposed method yields better performance than a linear relevance feedback mechanism.
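The adaptation objective described above, fitting the user's new relevance labels while limiting drift from the previous weights, can be sketched with a linear scorer in place of the paper's neural network (all features, labels, and constants are invented for the example):

```python
feats = [[1.0, 0.2], [0.1, 0.9], [0.8, 0.4]]   # invented image features
labels = [1.0, 0.0, 1.0]                        # user feedback: relevant or not
w_prev = [0.5, 0.5]                             # weights before this round
mu = 0.1                                        # strength of the stay-close penalty

# gradient descent on  (mu/2)*||w - w_prev||^2 + (1/2)*sum((w.f - y)^2)
w = w_prev[:]
for _ in range(200):
    g = [mu * (w[j] - w_prev[j]) for j in range(2)]   # minimal-change term
    for f, y in zip(feats, labels):
        s = w[0]*f[0] + w[1]*f[1]                      # current relevance score
        for j in range(2):
            g[j] += (s - y) * f[j]                     # squared-error term
    w = [w[j] - 0.1 * g[j] for j in range(2)]
```

The penalty term is what keeps the round-to-round updates from degrading performance on previously judged items, matching the "minimal degradation over all previous data" criterion in the abstract.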