Results 1–10 of 22
An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks
 International Journal of Computational Intelligence 4(1)
, 2008
Cited by 10 (2 self)
Abstract—The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient descent of error with respect to the weight and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing accuracy and computation time with the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method was better than that of the standard conjugate gradient algorithm. Keywords—Adaptive gain variation, backpropagation, activation function, conjugate gradient, search direction.
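The gain-variation idea in step (1) can be sketched for a single neuron. This is a minimal illustration, not the paper's full MLP formulation; all function and variable names are assumptions:

```python
import math

def sigmoid(a, gain):
    # Sigmoid with an explicit gain term c: f(a) = 1 / (1 + exp(-c * a))
    return 1.0 / (1.0 + math.exp(-gain * a))

def gradients(x, target, w, gain):
    """Squared-error gradients with respect to the weight AND the gain
    for a single-input neuron (step (2) of the abstract, in miniature)."""
    a = w * x                     # pre-activation
    y = sigmoid(a, gain)          # output
    e = y - target                # error
    dsig = y * (1.0 - y)          # sigmoid'(c*a) factor
    dE_dw = e * dsig * gain * x   # chain rule through c*a, w.r.t. the weight
    dE_dc = e * dsig * a          # gradient w.r.t. the gain itself
    return dE_dw, dE_dc
```

Both gradients then feed the construction of the new search direction in step (3).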
Hardware-Friendly Learning Algorithms for Neural Networks: an Overview
, 1996
Cited by 10 (0 self)
The hardware implementation of artificial neural networks and their learning algorithms is a fascinating area of research with far-reaching applications. However, the mapping from an ideal mathematical model to compact and reliable hardware is far from evident. This paper presents an overview of various methods that simplify the hardware implementation of neural network models. Adaptations that are proper to specific learning rules or network architectures are discussed. These range from the use of perturbation in multilayer feedforward networks and local learning algorithms to quantization effects in self-organizing feature maps. Moreover, in more general terms, the problems of inaccuracy, limited precision, and robustness are treated.
Multiresolution FIR Neural-Network-Based Learning Algorithm Applied to Network Traffic Prediction
 IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
, 2006
Cited by 9 (5 self)
Abstract—In this paper, a multiresolution finite-impulse-response (FIR) neural-network-based learning algorithm using the maximal overlap discrete wavelet transform (MODWT) is proposed. The multiresolution learning algorithm employs the analysis framework of wavelet theory, which decomposes a signal into wavelet coefficients and scaling coefficients. The translation-invariant property of the MODWT allows alignment of events in a multiresolution analysis with respect to the original time series and therefore preserves the integrity of some transient events. A learning algorithm is also derived for adapting the gain of the activation functions at each level of resolution. The proposed multiresolution FIR neural-network-based learning algorithm is applied to network traffic prediction (real-world aggregate Ethernet traffic data) with comparable results. These results indicate that the generalization ability of the FIR neural network is improved by the proposed multiresolution learning algorithm. Index Terms—Finite-impulse-response (FIR) neural networks, multiresolution learning, network traffic prediction, wavelet transforms, wavelets.
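The translation-invariant decomposition the abstract relies on can be illustrated with one level of a Haar MODWT. The circular boundary handling and the 1/2 normalization below are one common convention, not necessarily the paper's:

```python
def modwt_haar_level1(x):
    """One level of the maximal-overlap DWT with Haar filters (circular).
    Both coefficient series have the same length as x, so events stay
    aligned with the original time series (no downsampling)."""
    n = len(x)
    # Python's x[t-1] wraps around at t = 0, giving circular boundaries.
    scaling = [(x[t] + x[t - 1]) / 2.0 for t in range(n)]  # low-pass (trend)
    wavelet = [(x[t] - x[t - 1]) / 2.0 for t in range(n)]  # high-pass (detail)
    return scaling, wavelet
```

With this convention, sample-wise perfect reconstruction is simply `scaling[t] + wavelet[t] == x[t]`, which is what lets each resolution level be trained against an aligned version of the signal.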
Mechanisms of cognitive control: Active memory, inhibition, and the prefrontal cortex. in submission
, 1999
Homotopy Approaches For The Analysis And Solution Of Neural Network And Other Nonlinear Systems Of Equations
, 1995
Cited by 3 (2 self)
Increasingly, models, mappings, systems, and algorithms used for signal processing need to be nonlinear in order to meet performance specifications in communications, computing, and control systems applications. Simple computational models have been developed, including neural networks, which can efficiently implement a variety of nonlinear mappings through appropriate choice of model parameters. However, the design of arbitrary nonlinear mappings using these models and measured data requires both understanding how realizable (finite) systems perform if optimized given finite data, and a method for computing globally optimal system parameters. In this thesis, we use constructive homotopy methods both to geometrically explore the mapping capabilities of finite neural networks, and to rigorously develop a robust method for computing optimal solutions to systems of nonlinear equations which, like neural network equations, have an unknown number of solutions and may have solutions at infinity.
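A minimal scalar sketch of the homotopy idea, assuming the convex-combination homotopy H(x, t) = (1-t)(x - x0) + t F(x). This is one standard choice, not necessarily the construction used in the thesis, and the step counts are illustrative:

```python
def homotopy_solve(F, dF, x0, steps=50, newton_iters=5):
    """Track the zero of H(x, t) = (1-t)*(x - x0) + t*F(x) as t goes
    from 0 to 1. At t = 0 the solution is trivially x0; at t = 1 the
    tracked point is a root of F. Scalar version for illustration."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):        # Newton correction at this t
            h = (1 - t) * (x - x0) + t * F(x)
            dh = (1 - t) + t * dF(x)
            x -= h / dh
    return x
```

The appeal over plain Newton iteration is that the solution is deformed continuously from a problem whose answer is known, which is also what makes counting and tracking multiple solution branches tractable.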
A Modified Conjugate Gradient Formula for Back Propagation Neural Network Algorithm
Cited by 2 (0 self)
Abstract: Problem statement: The Conjugate Gradient (CG) algorithm, which is usually used for solving nonlinear functions, is presented and combined with the modified Back Propagation (BP) algorithm, yielding a new fast multilayer training algorithm. Approach: This study consisted of the determination of new search directions by exploiting the information calculated by gradient descent as well as the previous search direction. The proposed algorithm improved the training efficiency of the BP algorithm by adaptively modifying the initial search direction. Results: Performance of the proposed algorithm was demonstrated by comparing it with the Neural Network (NN) algorithm for the chosen test functions. Conclusion: The numerical results showed that the number of iterations required by the proposed algorithm to converge was less than that of both the standard CG and NN algorithms. The proposed algorithm improved the training efficiency of BPNN algorithms by adaptively modifying the initial search direction. Key words: Backpropagation algorithm, conjugate gradient algorithm, search directions, neural network algorithm
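The baseline this entry modifies is the classical conjugate gradient recurrence d_k = -g_k + beta_k * d_{k-1}. A minimal linear-CG sketch with the Fletcher-Reeves beta is given below; it shows only the standard formula, not the paper's modified one:

```python
def conjugate_gradient(A, b, x, iters):
    """Classical linear CG for A x = b (A symmetric positive definite),
    i.e. minimizing f(x) = x'Ax/2 - b'x with exact line search."""
    n = len(b)
    g = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]  # gradient
    d = [-gi for gi in g]                                 # initial direction: -g
    for _ in range(iters):
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        gg = sum(gi * gi for gi in g)
        alpha = gg / sum(d[i] * Ad[i] for i in range(n))  # exact line search
        x = [x[i] + alpha * d[i] for i in range(n)]
        g = [g[i] + alpha * Ad[i] for i in range(n)]
        beta = sum(gi * gi for gi in g) / gg              # Fletcher-Reeves beta
        d = [-g[i] + beta * d[i] for i in range(n)]       # new search direction
    return x
```

For an n-by-n SPD system, exact-arithmetic CG terminates in at most n iterations, which is the efficiency property both CG-based entries in this list build on.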
Handwritten Digit Recognition with Binary Optical Perceptron
 in Artificial Neural Networks  ICANN'97, Lecture Notes in Computer Science
, 1997
Cited by 1 (1 self)
Binary weights are favored in electronic and optical hardware implementations of neural networks as they lead to improved system speeds. Optical neural networks based on fast ferroelectric liquid crystal binary-level devices can benefit from the orders-of-magnitude improvement in liquid crystal response times. An optimized learning algorithm for all-positive perceptrons is simulated on a limited data set of handwritten digits and the resultant network implemented optically. First grayscale, and then binary, inputs and weights are used in recall mode. On comparing the results for the example data set, the network with binarized inputs and weights shows almost no loss in performance.
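The recall pass with binarized inputs and weights can be sketched as follows. The 0.5 threshold and the {0, 1} level convention are assumptions for illustration, not taken from the paper:

```python
def binarize(values, threshold=0.5):
    """Map grayscale values in [0, 1] to binary levels, as binary-level
    optical devices require (threshold choice is an assumption)."""
    return [1 if v >= threshold else 0 for v in values]

def perceptron_recall(inputs, weights, bias):
    """Recall pass of a single all-positive perceptron unit: a weighted
    sum followed by a hard threshold at `bias`."""
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= bias else 0
```

Because both factors in each product are 0 or 1, the weighted sum reduces to counting coincidences, which is exactly the operation a binary optical correlator implements cheaply.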
Incorporating LCLV Non-Linearities in Optical Multilayer Neural Networks
, 1996
Cited by 1 (1 self)
Sigmoid-like activation functions as available in analog hardware differ in various ways from the standard sigmoidal function, as they are usually asymmetric, truncated, and have a non-standard gain. We present an adaptation of the backpropagation learning rule to compensate for these non-standard sigmoids. This method is applied to multilayer neural networks with all-optical forward propagation and liquid crystal light valves (LCLVs) as optical thresholding devices. In this paper, the results of simulations of a backpropagation neural network with five different LCLV response curves as activation functions are presented. While the network performs poorly with the standard backpropagation algorithm, our adapted learning rule is shown to perform well with these LCLV curves. Keywords: (artificial) neural network, optical multilayer neural network, hardware implementation, liquid crystal light valve (LCLV), activation function, curve fit, gain.
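The core of such an adapted rule is to use the device's own response curve and its derivative in the weight update, rather than the standard sigmoid's. In the sketch below the response curve is a made-up stand-in for an LCLV curve (asymmetric range, non-standard gain); all parameters are illustrative:

```python
import math

def lclv_like(a, gain=0.8, lo=0.1, hi=0.9):
    """Stand-in for a measured LCLV response: truncated to [lo, hi],
    asymmetric about 0.5, with a non-standard gain."""
    return lo + (hi - lo) / (1.0 + math.exp(-gain * a))

def lclv_like_deriv(a, gain=0.8, lo=0.1, hi=0.9):
    # Derivative of the response curve above (from a curve fit in practice).
    s = 1.0 / (1.0 + math.exp(-gain * a))
    return (hi - lo) * gain * s * (1.0 - s)

def delta_rule_step(x, target, w, lr=2.0):
    """One gradient step for a single-input unit that backpropagates
    through the device curve instead of the textbook sigmoid."""
    a = w * x
    y = lclv_like(a)
    w -= lr * (y - target) * lclv_like_deriv(a) * x
    return w, y
```

Using the fitted curve's derivative keeps the gradient consistent with what the optical forward pass actually computes, which is why the standard rule (assuming a unit-gain, [0, 1] sigmoid) fails on the same hardware.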