## Training Algorithms for Limited Precision Feedforward Neural Networks (1991)

Citations: 2 (0 self)

### BibTeX

```bibtex
@TECHREPORT{Xie91trainingalgorithms,
  author      = {Yun Xie and Marwan A. Jabri},
  title       = {Training Algorithms for Limited Precision Feedforward Neural Networks},
  institution = {},
  year        = {1991}
}
```

### Abstract

In this paper we analyse the training dynamics of limited precision feedforward multilayer perceptrons (MLPs) implemented in digital hardware. We show that special techniques must be employed to train such networks, in which each variable is quantised to a limited number of bits. Based on this analysis, we propose a Combined Search (CS) training algorithm, consisting of partial random search and weight perturbation, that can easily be implemented in hardware. Computer simulations were conducted on IntraCardiac ElectroGram and sonar reflection pattern classification problems. The results show that with CS, the training performance of limited precision feedforward MLPs with 8 to 10 bit resolution can be as good as that of unlimited precision networks. The results also show that CS is insensitive to variations in the training parameters.
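The abstract's premise — every network variable quantised to a fixed number of bits — can be illustrated with a small fixed-point quantiser sketch (the function name, range, and defaults below are our assumptions, not taken from the paper):

```python
import numpy as np

def quantize(x, bits=8, scale=4.0):
    """Round values onto a signed fixed-point grid of `bits` bits
    covering [-scale, scale). Illustrative only -- the paper's exact
    number format is not specified here."""
    step = 2.0 * scale / (2 ** bits)           # size of one quantisation step
    q = np.round(np.asarray(x) / step) * step  # snap to nearest grid point
    return np.clip(q, -scale, scale - step)    # saturate at the range ends
```

With 8 bits over [-4, 4) the step is 0.03125, so for example 0.1 maps to 0.09375; coarser grids make small weight updates increasingly likely to round away to nothing, which is the training difficulty the paper addresses.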

### Citations

188 | Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks
- Gorman, Sejnowski
- 1988
Citation Context: ... CS 98.4 0.746 89.3 1.13 PRS 98.3 0.780 89.1 2.18 MWP fail WP fail BP fail smaller standard deviation. 4.3 Sonar Reflection Recognition The second set of simulations uses Sonar Reflection Recognition [10]. An input pattern represents a reflection of sonar signals from a metal cylinder or a rock at various angles and under various conditions. The network is expected to classify a pattern as a reflectio...

62 | Weight perturbation: An optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks
- Jabri, Flower
- 1992
Citation Context: ...ining but difficult to implement in hardware. This is due to its high precision requirements (12 to 16 bits) as it involves a relatively large number of computations in the forward and backward paths [3,1,6]. For hardware implementation simplicity, the Weight Perturbation algorithm (WP) [6] is more attractive. This was originally proposed for analog neural networks. The idea is that by injecting a small ...
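The weight perturbation idea sketched in this context — perturb a weight, observe the change in error, and use the ratio as a gradient estimate — can be written as a simple finite-difference loop (names and the exact update scheme are our illustration, not the paper's hardware formulation):

```python
import numpy as np

def wp_gradient(loss, w, pert=1e-3):
    """Estimate dE/dw_i one weight at a time: apply a small
    perturbation, measure the loss change, and divide by the
    perturbation size (forward finite differences)."""
    base = loss(w)
    grad = np.zeros_like(w)
    for i in range(w.size):
        w[i] += pert                       # inject the perturbation
        grad[i] = (loss(w) - base) / pert  # observed error change / pert
        w[i] -= pert                       # restore the weight
    return grad
```

Only forward passes are required, which is why WP is attractive for hardware: there is no backward path and hence no high-precision backpropagated error signal to represent.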

38 | Finite precision error analysis of neural network hardware implementations
- Holt, Hwang
- 1993
Citation Context: ...y be heavily affected by the quantisation and their training may become much harder. Several researchers have conducted simulations that implement existing training algorithms using limited precision [1,2,3]. However, few attempts have been made to develop training algorithms specially geared for limited precision. The work described in this paper was conducted to investigate the performance and the trai...

33 | A Neural-Net Training Program based on Conjugate-Gradient Optimization
- Barnard, Cole
- 1989
Citation Context: ... bit resolution). Table 1 is the summary of the training results. In Table 1, the first two rows show the training results on unlimited precision networks trained using a conjugate gradient algorithm [9] and the Combined Search algorithm. In the limited precision case, Back-Propagation failed to converge with 10 bit resolution although many different values for the learning rate and momentum were tri...

30 | Learning with Limited Numerical Precision using the Cascade-Correlation Algorithm
- Hoehfeld, Fahlman
- 1992
Citation Context: ...y be heavily affected by the quantisation and their training may become much harder. Several researchers have conducted simulations that implement existing training algorithms using limited precision [1,2,3]. However, few attempts have been made to develop training algorithms specially geared for limited precision. The work described in this paper was conducted to investigate the performance and the trai...

25 | The Effects of Precision Constraints in a Backpropagation Learning
- Hollis, Harper, et al.
- 1990
Citation Context: ...y be heavily affected by the quantisation and their training may become much harder. Several researchers have conducted simulations that implement existing training algorithms using limited precision [1,2,3]. However, few attempts have been made to develop training algorithms specially geared for limited precision. The work described in this paper was conducted to investigate the performance and the trai...

24 | Neural Computing. Theory and
- Wasserman
- 1989
Citation Context: ...d the search range is adapted so as to keep it larger than any weight value. The adaptation of the search range during training prevents the weights from taking too large values and saturating the network [8]. However, search range adaptation can be achieved indirectly using neuron gain adjustment, a technique that we adopt in our approach. Gain adaptation has been described in Section 3.1 above and is si...

22 | Learning internal representations by error propagation
- Rumelhart, Hinton, et al.
- 1986
Citation Context: ...compared to the unlimited precision case, • can escape plateaus and local minima, and • be simple for hardware implementation. Backpropagation (BP) is the most popular training algorithm for MLPs [5]. It is efficient in training but difficult to implement in hardware. This is due to its high precision requirements (12 to 16 bits) as it involves a relatively large number of computations in the for...

22 | Benefits of gain: Speeded learning and minimal hidden layers in back-propagation networks
- Kruschke, Movellan
- 1991
Citation Context: ...g its input weights with an appropriate value. Therefore, when an unlimited precision neural network is trained, the gains are usually kept constant although gain adaptation may produce some benefits [7]. But in limited precision networks with fixed point representation, the inputs, weights and neuron states are represented by a finite number of bits in fixed point format. Thus, during training, the...
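The equivalence this context relies on — adjusting a neuron's gain has the same effect as rescaling all of its input weights — can be checked with a logistic activation carrying an explicit gain term (a minimal sketch; the gain-adaptation schedule itself is what the paper describes in its Section 3.1):

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with an explicit gain (slope) parameter.
    Multiplying the gain by g is equivalent to multiplying every
    input weight by g, so gain adjustment can emulate search-range
    adaptation without touching the quantised weights themselves."""
    return 1.0 / (1.0 + np.exp(-gain * np.asarray(x, dtype=float)))
```

For example, `sigmoid(x, gain=2.0)` equals `sigmoid(2.0 * x)`: doubling the gain is indistinguishable from doubling the weighted sum feeding the neuron.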

12 | Analysis of the effects of quantization in multilayer neural networks using a statistical model
- Xie, MA
- 1992
Citation Context: ...ussed briefly. Throughout the paper, and unless indicated otherwise, we use the term MLP to indicate a feedforward multi-layer perceptron network. 2 Quantization Effects on Neural Network Training In [4], we have used statistical models to analyze the static effects of quantization on MLPs. As for the dynamics of the training process, the major effect of quantization (especially when small numbers of...