Results 1–10 of 49
A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm
 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
, 1993
"... A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradientdescent, RPROP performs a local adaptation of the weightupdates according to the behaviour of the errorfunction. In substantial difference to other adaptive tech ..."
Abstract

Cited by 628 (32 self)
A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient descent, RPROP performs a local adaptation of the weight updates according to the behaviour of the error function. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but depends only on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other well-known adaptive techniques.
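The sign-only update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's reference implementation; the defaults η+ = 1.2 and η− = 0.5 follow common Rprop practice, and zeroing the gradient after a sign flip is one common variant:

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RPROP iteration for a vector of weights.

    The per-weight step size grows by eta_plus when the gradient keeps
    its sign and shrinks by eta_minus when it flips; the weight update
    itself uses only the SIGN of the gradient, not its magnitude.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # After a sign flip, zero the stored gradient so the next iteration
    # neither grows nor shrinks the step (a common Rprop variant).
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step
```

Minimizing f(w) = w² with this rule, the step size grows geometrically while the gradient sign is stable and halves on each overshoot, so convergence does not depend on the gradient's magnitude.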
Local Gain Adaptation in Stochastic Gradient Descent
 In Proc. Intl. Conf. Artificial Neural Networks
, 1999
"... Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton's work on linear systems to the general, nonlinear case. The res ..."
Abstract

Cited by 57 (12 self)
Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton's work on linear systems to the general, nonlinear case. The resulting online algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical independence between successive training patterns, and do not require an arbitrary smoothing parameter. In our benchmark experiments, they consistently outperform other acceleration methods, and show remarkable robustness when faced with non-i.i.d. sampling of the input space.
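The gradient-correlation idea this abstract critiques can be sketched roughly as follows. This is a simplified illustration with assumed names and constants, not Sutton's algorithm or the authors' proposed method:

```python
import numpy as np

def gain_adapted_sgd_step(w, grad, prev_grad, gains,
                          base_lr=0.01, kappa=0.05,
                          gain_min=0.1, gain_max=10.0):
    """One SGD step with per-parameter gains adapted by the correlation
    between successive gradients: positive correlation (consistent
    direction) raises a gain, negative correlation lowers it."""
    corr = grad * prev_grad
    gains = np.clip(gains * (1.0 + kappa * np.sign(corr)), gain_min, gain_max)
    w = w - base_lr * gains * grad
    return w, gains
```

The clipping bounds keep the effective learning rates within a fixed range; the paper's point is that schemes of this shape behave poorly when successive gradients are strongly correlated for reasons other than the error surface, e.g. non-i.i.d. sampling.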
Improving the Rprop Learning Algorithm
 PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON NEURAL COMPUTATION (NC 2000)
, 2000
"... The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing firstorder learning methods for neural networks. We introduce modifications of the algorithm that improve its learning speed. The resulting speedup is experimentally shown for a set of neural network learning tasks a ..."
Abstract

Cited by 41 (7 self)
The Rprop algorithm proposed by Riedmiller and Braun is one of the best performing first-order learning methods for neural networks. We introduce modifications of the algorithm that improve its learning speed. The resulting speed-up is experimentally shown for a set of neural network learning tasks as well as for artificial error surfaces.
Comparison of Optimized Backpropagation Algorithms
 Proc. of ESANN'93, Brussels
, 1993
"... Backpropagation is one of the most famous training algorithms for multilayer perceptrons. Unfortunately it can be very slow for practical applications. Over the last years many improvement strategies have been developed to speed up backpropagation. It's very difficult to compare these different tech ..."
Abstract

Cited by 39 (1 self)
Backpropagation is one of the most famous training algorithms for multilayer perceptrons. Unfortunately it can be very slow for practical applications. Over the last years many improvement strategies have been developed to speed up backpropagation. It is very difficult to compare these different techniques, because most of them have been tested only on various specific data sets. Most of the reported results are based on tiny, artificial training sets such as XOR, encoder, or decoder problems. It is very doubtful whether these results hold for more complicated practical applications. In this report an overview of many different speed-up techniques is given. All of them were assessed on a very hard practical classification task, which consists of a big medical data set. As will be seen, many of these optimized algorithms fail to learn the data set.

1 Introduction

This report is intended to summarize our experience using many different speed-up techniques for the backpropagation algorithm. We have...
Rprop - Description and Implementation Details
, 1994
"... F31.64> 4 ij (t). This is based on a signdependent adaptation process, similar to the learningrate adaptation in [4], [5]. 4 (t) ij = 8 ? ? ! ? ? : j + 4 (t\Gamma1) ij ; if @E @w ij (t\Gamma1) @E @w ij (t) ? 0 j \Gamma 4 (t\Gamma1) ij ; if @E @w ij (t\Gamma1) @E @w ij ..."
Abstract

Cited by 32 (0 self)
... \Delta_{ij}^{(t)}. This is based on a sign-dependent adaptation process, similar to the learning-rate adaptation in [4], [5]:

\Delta_{ij}^{(t)} =
\begin{cases}
\eta^{+} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \frac{\partial E}{\partial w_{ij}}^{(t)} > 0 \\
\eta^{-} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \frac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \frac{\partial E}{\partial w_{ij}}^{(t)} < 0 \\
\Delta_{ij}^{(t-1)}, & \text{else}
\end{cases}
\quad (2)

where 0 < \eta^{-} < 1 < \eta^{+}. In words, the adaptation rule works as follows: every time the partial ...
A note on the learning automata based algorithms for adaptive parameter selection in PSO
 Applied Soft Computing
, 2011
"... in PSO ..."
RPROP - A Fast Adaptive Learning Algorithm
 Proc. of ISCIS VII, Universitat
, 1992
"... In this paper, a new learning algorithm, RPROP, is proposed. To overcome the inherent disadvantages of the pure gradientdescent technique of the original backpropagation procedure, RPROP performs an adaptation of the weight updatevalues according to the behaviour of the errorfunction. The results ..."
Abstract

Cited by 20 (0 self)
In this paper, a new learning algorithm, RPROP, is proposed. To overcome the inherent disadvantages of the pure gradient-descent technique of the original backpropagation procedure, RPROP performs an adaptation of the weight update values according to the behaviour of the error function. The results of RPROP on several learning tasks are shown in comparison to other well-known adaptive learning algorithms.

1 Introduction

Backpropagation is the most widely used algorithm for supervised learning with multilayered feedforward networks. The basic idea of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary error function E [1]:

\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial a_i} \cdot \frac{\partial a_i}{\partial \mathrm{net}_i} \cdot \frac{\partial \mathrm{net}_i}{\partial w_{ij}} \quad (1)

where w_ij is the weight from neuron j to neuron i, a_i is the activation value, and net_i is the weighted sum of the inputs of neuron i. Once the partial derivative for each weight is known, the a...
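The chain rule of Eq. (1) can be illustrated for a single unit. This is a minimal sketch; the sigmoid activation and squared-error loss are assumptions chosen for illustration, not specified by the abstract:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dE_dw(w, x, target):
    """Eq. (1) for one sigmoid unit with squared error E = 0.5*(a - t)^2:
    dE/dw_j = (dE/da) * (da/dnet) * (dnet/dw_j)."""
    net = w @ x                  # net_i: weighted sum of inputs
    a = sigmoid(net)             # a_i: activation value
    dE_da = a - target           # derivative of 0.5*(a - target)^2
    da_dnet = a * (1.0 - a)      # sigmoid derivative
    dnet_dw = x                  # dnet_i/dw_ij = x_j
    return dE_da * da_dnet * dnet_dw
```

The three factors multiply together per weight; a finite-difference check on E confirms the analytic gradient.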
3D Hand Tracking by Rapid Stochastic Gradient Descent using a Skinning Model
 1st European Conference on Visual Media Production (CVMP)
, 2004
"... Abstract The main challenge of tracking articulated structures like hands is their large number of degrees of freedom (DOFs). A realistic 3D model of the human hand has at least 26 DOFs. The arsenal of tracking approaches that can track such structures fast and reliably is still very small. This pa ..."
Abstract

Cited by 20 (3 self)
The main challenge of tracking articulated structures like hands is their large number of degrees of freedom (DOFs). A realistic 3D model of the human hand has at least 26 DOFs. The arsenal of tracking approaches that can track such structures fast and reliably is still very small. This paper proposes a tracker based on 'Stochastic Meta-Descent' (SMD) for optimizations in such high-dimensional state spaces. This new algorithm is based on a gradient-descent approach with adaptive and parameter-specific step sizes. The SMD tracker facilitates the integration of constraints and, combined with a stochastic sampling technique, can get out of spurious local minima. Furthermore, the integration of a deformable hand model based on linear blend skinning and anthropometric measurements reinforces the robustness of our tracker. Experiments show the efficiency of the SMD algorithm in comparison with common optimization methods.
FANNC: A Fast Adaptive Neural Network Classifier
, 2000
"... In this paper, a fast adaptive neural network classifier named FANNC is proposed. FANNC exploits the advantages of both adaptive resonance theory and field theory. It needs only onepass learning, and achieves not only high predictive accuracy but also fast learning speed. Besides, FANNC has increme ..."
Abstract

Cited by 13 (4 self)
In this paper, a fast adaptive neural network classifier named FANNC is proposed. FANNC exploits the advantages of both adaptive resonance theory and field theory. It needs only one-pass learning, and achieves not only high predictive accuracy but also fast learning speed. Besides, FANNC has incremental learning ability. When new instances are fed, it does not need to retrain on the whole training set. Instead, it can learn the knowledge encoded in those instances by slightly adjusting the network topology when necessary, that is, adaptively appending one or two hidden units and corresponding connections to the existing network. This characteristic makes FANNC fit for real-time online learning tasks. Moreover, since the network architecture is set up adaptively, the disadvantage of manually determining the number of hidden units, shared by most feedforward neural networks, is overcome. Benchmark tests show that FANNC is a preferable neural network classifier, which is superior to several other neural algorithms in both predictive accuracy and learning speed.