Results 1 - 3 of 3
Accelerated Backpropagation Learning: Parallel Tangent Optimization Algorithm
 IEEE Transactions on Automatic Control
, 1983
Abstract

Cited by 1 (0 self)
INTRODUCTION The method of gradient descent is one of the most fundamental procedures for minimizing a differentiable function of several variables. It usually works quite well during the early stages of the optimization process. However, as an optimum point is approached, the method usually behaves poorly, taking small orthogonal steps (the zigzagging phenomenon) [2]. There are several methods that generate noninterfering directions and can be used to overcome the difficulty of oscillation by deflecting the gradient. Rather than moving along -∇f(x), one can move along -H∇f(x) or along -∇f(x)+v, where H is an appropriate matrix and v is an appropriate vector. The method of Newton uses the first form, deflecting the gradient descent direction by premultiplying it by the inverse of the Hessian.
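The deflection idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's algorithm: a Partan-style iteration that alternates a plain gradient step with an extrapolation through the iterate from two steps earlier, compared with ordinary gradient descent on an ill-conditioned quadratic. The test function, step size, and acceleration factor are illustrative assumptions.

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x with minimum at the origin;
# the eigenvalue spread (1 vs 50) is what makes plain gradient descent zigzag.
A = np.diag([1.0, 50.0])

def grad(x):
    return A @ x

def plain_gd(x0, lr=0.019, steps=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def partan(x0, lr=0.019, accel=0.8, steps=200):
    # Parallel-tangent (Partan) sketch: alternate an ordinary gradient step
    # with an extrapolation through the iterate from two steps earlier,
    # which deflects the search direction away from pure -grad f(x).
    prev = np.asarray(x0, dtype=float)
    x = prev - lr * grad(prev)
    for _ in range(steps):
        y = x - lr * grad(x)                 # gradient step
        x, prev = y + accel * (y - prev), x  # tangent (acceleration) step
    return x

gd_result = plain_gd([1.0, 1.0])
pt_result = partan([1.0, 1.0])
```

With these illustrative settings, the extrapolation step damps the oscillation along the shallow axis, so the Partan iterate lands much closer to the minimizer than plain gradient descent after the same number of gradient evaluations.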
Accelerated Backpropagation Learning: Extended Dynamic Parallel Tangent Optimization Algorithm
 Lecture Notes in Artificial Intelligence 1822
, 2000
Abstract
The backpropagation algorithm is an iterative gradient descent algorithm designed to train multilayer neural networks. Despite its popularity and effectiveness, the orthogonal steps (zigzagging) near the optimum point slow down the convergence of this algorithm. To overcome the inefficiency of zigzagging in the conventional backpropagation algorithm, one of the authors earlier proposed a deflecting gradient technique to improve the convergence of the backpropagation learning algorithm. The proposed method is called the Partan backpropagation learning algorithm [3]. The convergence time of multilayer networks has been further improved through dynamic adaptation of their learning rates [6]. In this paper, an extension to the dynamic parallel tangent learning algorithm is proposed. In the proposed algorithm, each connection has its own learning rate as well as its own acceleration rate. These individual rates are dynamically adapted as the learning proceeds. Simulation studies are carried out on different learning problems. A faster rate of convergence is achieved for all problems used in the simulations. Keywords: Artificial neural networks, Backpropagation, Gradient descent, Parallel tangent, Dynamic parallel tangent.
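The per-connection adaptation described in this abstract can be illustrated with a small sketch. This is the general idea only, under assumed update rules (a sign-agreement heuristic), not the authors' exact algorithm: every coordinate keeps its own learning rate, which grows while successive gradient signs agree and is cut when they flip. The quadratic, rate bounds, and growth/decay factors are hypothetical choices for the demonstration.

```python
import numpy as np

# Toy quadratic with unequal curvatures, so no single global learning rate
# suits both coordinates at once.
A = np.diag([1.0, 20.0])

def grad(x):
    return A @ x

def per_weight_adaptive_gd(x0, lr0=0.01, up=1.1, down=0.5,
                           lr_max=0.09, steps=300):
    # Each coordinate ("connection") keeps its own learning rate:
    # grown by `up` while successive gradient signs agree (stable descent
    # direction), halved by `down` when the sign flips (overshoot).
    x = np.asarray(x0, dtype=float)
    lr = np.full_like(x, lr0)
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        agree = np.sign(g) == np.sign(g_prev)
        lr = np.where(agree, np.minimum(lr * up, lr_max), lr * down)
        x = x - lr * g
        g_prev = g
    return x

result = per_weight_adaptive_gd([1.0, 1.0])
```

The cap `lr_max` keeps every per-coordinate rate inside the stable region of the steepest curvature, so the shallow coordinate can grow its rate without destabilizing the steep one.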