Results 1–10 of 25
Enhanced MLP Performance and Fault Tolerance Resulting from Synaptic Weight Noise During Training
 IEEE Transactions on Neural Networks
, 1994
"... We analyse the effects of analog noise on the synaptic arithmetic during MultiLayer Perceptron training, by expanding the cost function to include noisemediated terms. Predictions are made in the light of these calculations which suggest that fault tolerance, training quality and training trajector ..."
Abstract

Cited by 42 (2 self)
We analyse the effects of analog noise on the synaptic arithmetic during Multi-Layer Perceptron training, by expanding the cost function to include noise-mediated terms. Predictions are made in the light of these calculations which suggest that fault tolerance, training quality and training trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.

1 Introduction and Background

Arithmetic inaccuracy at the synapse and neuron level is widely held to be tolerable during neural computation, but not during training. In arriving at this conclusion, parallels are drawn between analog noise-induced "uncertainty" and digital inaccuracy, limited by bit-length. This has led ...
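The abstract's central idea — injecting noise into the weights during training to improve fault tolerance — can be sketched in a few lines. The following is a hypothetical illustration, not the authors' code: a one-hidden-layer sigmoid MLP where zero-mean Gaussian noise (standard deviation `sigma`) is added to the weights at every presentation, while the clean weights are what get updated. All names and hyperparameters here are assumptions for the toy example.

```python
import numpy as np

def train_with_weight_noise(X, y, hidden=8, sigma=0.05, lr=1.0, epochs=3000, seed=0):
    """Sketch: sigmoid MLP trained with additive Gaussian weight noise.
    Hypothetical illustration of the paper's idea, not the authors' code."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        # Noisy copies stand in for inaccurate analog synaptic arithmetic.
        W1n = W1 + rng.normal(0.0, sigma, W1.shape)
        W2n = W2 + rng.normal(0.0, sigma, W2.shape)
        h = sig(X @ W1n)
        out = sig(h @ W2n)
        # Backpropagate through the noisy weights; update the stored clean ones.
        d2 = (out - y) * out * (1.0 - out)
        d1 = (d2 @ W2n.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d2
        W1 -= lr * X.T @ d1
    return W1, W2, sig

# Toy OR problem; the last input column is a constant bias.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
y = np.array([[0], [1], [1], [1]], float)
W1, W2, sig = train_with_weight_noise(X, y)
pred = (sig(sig(X @ W1) @ W2) > 0.5).astype(int)
```

The noise-free weights seen at test time are the ones the paper argues become flatter-minimum, fault-tolerant solutions.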
Training Digital Circuits with Hamming Clustering
 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS
, 2000
"... A new algorithm, called Hamming Clustering (HC), for the solution of classification problems with binary inputs is proposed. It builds a logical network containing only and, or and not ports, which, besides satisfying all the inputoutput pairs included in a given finite consistent training set, ..."
Abstract

Cited by 19 (15 self)
A new algorithm, called Hamming Clustering (HC), for the solution of classification problems with binary inputs is proposed. It builds a logical network containing only AND, OR and NOT ports, which, besides satisfying all the input-output pairs included in a given finite consistent training set, is able to reconstruct the underlying Boolean function. The basic ...
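The baseline that any such method must beat can be sketched directly: any consistent binary training set is realizable as an OR of AND-terms, one full-length term per positive example, using only AND/OR/NOT operations. The sketch below (hypothetical, not HC itself) does exactly that and nothing more — unlike HC, it does not cluster or drop literals, so it reproduces the training pairs without generalizing.

```python
def cover_positive_examples(training_pairs):
    """Realize a consistent binary training set as an OR of AND-terms,
    one term per positive example (baseline sketch, not the HC algorithm)."""
    positives = [x for x, label in training_pairs if label == 1]

    def f(x):
        def term(p):
            # AND over all inputs, negating where the stored literal is 0.
            return all(xi if pi else not xi for xi, pi in zip(x, p))
        return int(any(term(p) for p in positives))

    return f

pairs = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table
f = cover_positive_examples(pairs)
```

HC's contribution is precisely the step this baseline skips: merging (clustering) examples that are close in Hamming distance so that literals can be removed and unseen inputs classified.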
Constructive Training Methods for Feedforward Neural Networks with Binary Weights
, 1995
"... Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. A neural model with each weight limited to a small integer range will require little surface of silicon. Moreover, according to Ockham's razor principl ..."
Abstract

Cited by 7 (1 self)
Quantization of the parameters of a Perceptron is a central problem in the hardware implementation of neural networks using a numerical technology. A neural model with each weight limited to a small integer range will require little silicon surface. Moreover, according to Ockham's razor principle, better generalization abilities can be expected from a simpler computational model. The price to pay for these benefits lies in the difficulty of training this kind of network. This paper proposes essentially two new ideas for constructive training algorithms, and demonstrates their efficiency for the generation of feedforward networks composed of Boolean threshold gates with discrete weights. A proof of the convergence of these algorithms is given. Some numerical experiments have been carried out and the results are presented in terms of the size of the generated networks and of their generalization abilities.

1 Introduction

Artificial neural networks (ANN) are proposed today as alternative...
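The setting the abstract describes — a threshold gate whose weights are confined to a small integer range — can be illustrated with a minimal sketch (this is the quantized-weight setting only, not the paper's constructive algorithm; the range and problem are assumptions for the example):

```python
import numpy as np

def train_discrete_perceptron(X, y, wmax=3, epochs=50):
    """Perceptron learning with every weight held to the small integer
    range [-wmax, wmax] (a sketch of the quantized-weight setting)."""
    w = np.zeros(X.shape[1], dtype=int)
    b = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != yi:
                # Integer update, clipped so weights stay representable.
                w = np.clip(w + (yi - pred) * xi, -wmax, wmax)
                b = int(np.clip(b + (yi - pred), -wmax, wmax))
    return w, b

# The AND function is realizable with weights in {-3, ..., 3}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_discrete_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

The clipping is exactly what makes training hard in general: a problem solvable with real weights may have no solution on the integer grid, which is why the paper resorts to constructive methods that add units instead.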
Training Algorithms for Limited Precision Feedforward Neural Networks
, 1991
"... In this paper we analyse the training dynamics of limited precision feedforward multilayer perceptrons implemented in digital hardware. We show that special techniques have to be employed to train such networks where each variable is quantised to a limited number of bits. Based on the analysis, we p ..."
Abstract

Cited by 2 (0 self)
In this paper we analyse the training dynamics of limited precision feedforward multilayer perceptrons implemented in digital hardware. We show that special techniques have to be employed to train such networks, where each variable is quantised to a limited number of bits. Based on the analysis, we propose a Combined Search (CS) training algorithm which consists of partial random search and weight perturbation and can easily be implemented in hardware. Computer simulations were conducted on intra-cardiac electrogram and sonar reflection pattern classification problems. The results show that using CS, the training performance of limited precision feedforward MLPs with 8- to 10-bit resolution can be as good as that of unlimited precision networks. The results also show that CS is insensitive to training parameter variations.

1 Introduction

When neural networks are to be used on limited precision digital hardware, problems may arise in their training because all network parameter...
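The combination the abstract names — partial random search plus weight perturbation — can be sketched as a hill climb on a quantized weight grid. The following is a hypothetical illustration of that combination, not the paper's CS algorithm; the grid range, acceptance rule and toy loss are assumptions:

```python
import numpy as np

def combined_search_step(w, loss_fn, step, rng, p_random=0.1):
    """One sketch iteration: usually perturb a single weight by one
    quantization step, occasionally re-draw it at random from the grid;
    keep the candidate only if the loss improves."""
    i = rng.integers(len(w))
    cand = w.copy()
    if rng.random() < p_random:
        cand[i] = step * rng.integers(-8, 9)      # partial random search
    else:
        cand[i] += step * rng.choice([-1, 1])     # weight perturbation
    return cand if loss_fn(cand) < loss_fn(w) else w

# Toy use: fit two grid-constrained weights to targets lying on the grid.
rng = np.random.default_rng(0)
targets = np.array([0.5, -0.25])
loss = lambda w: float(np.sum((w - targets) ** 2))
w = np.zeros(2)
for _ in range(500):
    w = combined_search_step(w, loss, step=0.25, rng=rng)
```

Because each step only evaluates the loss forward (no gradients), such a search needs no extra precision beyond the weights themselves, which is why it suits digital hardware with 8–10-bit resolution.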
Incremental Communication for Multilayer Neural Networks: Error Analysis
, 1995
"... Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed an incremental internode communication method. In the incremental communication method, instead of communicati ..."
Abstract

Cited by 2 (1 self)
Artificial neural networks (ANNs) involve a large amount of internode communication. To reduce the communication cost as well as the time of the learning process in ANNs, we earlier proposed an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent on a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not induce instability. The analysis is supported by simulation studies of two problems. The simulation results ...
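The core mechanism — transmitting a quantized increment instead of the full output value, with both ends accumulating the same running total — can be shown in a short sketch (hypothetical illustration; the step size and helper names are assumptions, not the paper's notation):

```python
def make_incremental_link(step):
    """Sketch of limited-precision incremental communication: only the
    change since the last transmission crosses the link, quantized to a
    fixed step; sender and receiver accumulate identical running values."""
    state = {"sent": 0.0, "recv": 0.0}

    def send(value):
        # Quantize the increment, not the full magnitude.
        delta = round((value - state["sent"]) / step) * step
        state["sent"] += delta
        return delta          # this small number is all that is transmitted

    def receive(delta):
        state["recv"] += delta
        return state["recv"]

    return send, receive

send, receive = make_incremental_link(step=1 / 256)  # 8 fractional bits
outputs = [0.10, 0.12, 0.11, 0.50, 0.49]
recovered = [receive(send(v)) for v in outputs]
```

Because the sender quantizes against its own accumulated value, the reconstruction error never exceeds half a quantization step — errors do not accumulate, which is the bounded-error property the abstract's analysis establishes for the full training setting.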
Incremental communication for adaptive resonance theory networks
 MCS thesis, Faculty Comput. Sci., Univ
, 1998
"... Abstract—We have proposed earlier the incremental internode communication method to reduce the communication cost as well as the time of the learning process in artificial neural netwofrks (ANNs). In this paper, the limited precision incremental communication method is applied to a class of recurren ..."
Abstract

Cited by 1 (0 self)
Abstract—We earlier proposed the incremental internode communication method to reduce the communication cost as well as the time of the learning process in artificial neural networks (ANNs). In this paper, the limited precision incremental communication method is applied to a class of recurrent neural networks, the adaptive resonance theory 2 (ART2) networks. Simulation studies are carried out to examine the effects of the incremental communication method on the convergence behavior of ART2 networks. We have found that 7–13-bit precision is sufficient to obtain almost the same results as those with full (32-bit) precision conventional communication. A theoretical error analysis is also carried out to analyze the effects of the limited precision incremental communication. The simulation and analytical results show that the limited precision errors are bounded and do not seriously degrade the convergence of ART2 networks. Therefore, incremental communication can be incorporated in parallel and special-purpose very large scale integration (VLSI) implementations of ART2 networks.

Index Terms—Adaptive resonance theory 2 (ART2) networks, artificial neural networks (ANNs), error analysis, finite precision computation, incremental communication.
Design and Nonlinear Modelling of CMOS Multipliers for Analog VLSI Implementation of Neural Algorithms
, 2006
"... : The analog VLSI implementation looks an attractive way for implementing Artificial Neural Networks; in fact, it gives small area, low power consumption and compact design of neural computational primitive circuits. On the other hand, major drawbacks result to be the low computational accuracy and ..."
Abstract

Cited by 1 (0 self)
The analog VLSI implementation looks like an attractive way of implementing Artificial Neural Networks; in fact, it gives small area, low power consumption and a compact design of neural computational primitive circuits. On the other hand, the major drawbacks are the low computational accuracy and the nonlinear behaviour of analog circuits. In this paper, we present the design and detailed behavioural models of CMOS multipliers for the analog VLSI implementation of neural algorithms. The circuits implement the feedforward operations of the Multi-Layer Perceptron architecture and of the Back-Propagation (on-chip learning) algorithm; they operate in the subthreshold regime to obtain low power consumption and a high dynamic range of weights. The circuit behavioural models take into account: i) nonlinearity effects; ii) environmental effects (variations of temperature and of the signal reference voltage). The models that we present in this paper are used in the behavioural validation o...
A Faster Learning Neural Network Classifier Using Selective Backpropagation
, 1997
"... The problem of saturation in neural network classification problems is discussed. The listprop algorithm is presented which reduces saturation and dramatically increases the rate of convergence. The technique uses selective application of the backpropagation algorithm, such that training is only car ..."
Abstract

Cited by 1 (0 self)
The problem of saturation in neural network classification problems is discussed. The listprop algorithm is presented, which reduces saturation and dramatically increases the rate of convergence. The technique uses selective application of the backpropagation algorithm, such that training is only carried out for patterns which have not yet been learnt to a desired output activation tolerance. Furthermore, in the output layer, training is only carried out for weights connected to those output neurons in the output vector which are still in error, which further reduces neuron saturation and learning time. Results are presented for a 196-100-46 Multi-Layer Perceptron (MLP) neural network used for text-to-speech conversion, which show that convergence is achieved for up to 99.7% of the training set, compared to at best 94.8% for standard backpropagation. Convergence is achieved in 38% of the time taken by the standard algorithm.

I. INTRODUCTION

It is well known that standard feedforward m...
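The selective rule described above — skip patterns already within the output tolerance, and within a pattern update only the output units still in error — can be sketched on a single sigmoid layer (a hypothetical illustration of the selective idea, not the listprop implementation; the toy problem, tolerance and learning rate are assumptions):

```python
import numpy as np

def selective_epoch(W, X, Y, lr=2.0, tol=0.2):
    """One epoch of selective training: patterns inside the tolerance are
    skipped entirely, and only output units still in error get updates."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    out = sig(X @ W)
    err = out - Y
    active = np.abs(err) > tol           # per-output-unit error mask
    todo = active.any(axis=1)            # patterns not yet learnt
    if not todo.any():
        return W, 0
    delta = np.where(active, err * out * (1.0 - out), 0.0)  # zero satisfied units
    W = W - lr * X[todo].T @ delta[todo]
    return W, int(todo.sum())

# Toy OR problem with a bias column; stop once every pattern is in tolerance.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
Y = np.array([[0], [1], [1], [1]], float)
W = np.zeros((3, 1))
for _ in range(5000):
    W, n_active = selective_epoch(W, X, Y)
    if n_active == 0:
        break
```

Masking the satisfied units is what fights saturation: weights feeding outputs that are already correct are no longer pushed further into the flat tails of the sigmoid.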
Brainsize neurocomputers: analyses and simulations of neural topologies mapped on Fractal Architectures
 Dept. of Experimental and Theoretical Psychology, Leiden
, 1995
"... ..."