## Alopex: a correlation-based learning algorithm for feedforward and recurrent neural networks (1994)

Venue: Neural Computation

Citations: 24 (1 self)

### BibTeX

```bibtex
@ARTICLE{Unnikrishnan94alopex:a,
  author  = {K. P. Unnikrishnan and K. P. Venugopal},
  title   = {Alopex: a correlation-based learning algorithm for feedforward and recurrent neural networks},
  journal = {Neural Computation},
  year    = {1994},
  volume  = {6},
  pages   = {469--490}
}
```

### Abstract

We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about the transfer functions of individual neurons and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations; this allows complete parallelization of the algorithm. The algorithm is stochastic, and it uses a 'temperature' parameter in a manner similar to that in simulated annealing. A heuristic 'annealing schedule' is presented which is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient-descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and advantages of appropriate error measures are illustrated using a variety of problems.