Results 1 – 5 of 5
Symbolic and neural learning algorithms: an experimental comparison
Machine Learning, 1991
Cited by 99 (6 self)
Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a "distributed" output encoding.
Generation of Explicit Knowledge from Empirical Data through Pruning of Trainable Neural Networks
Cited by 1 (0 self)
This paper presents a generalized technology for extracting explicit knowledge from data. The main ideas are: 1) maximal reduction of network complexity (not only removal of neurons or synapses, but removal of all unnecessary elements and signals, and reduction of the complexity of the elements themselves); 2) use of an adjustable and flexible pruning process (the pruning sequence should not be predetermined: the user should be able to prune the network in his own way, in order to achieve a desired network structure for the purpose of extracting rules of the desired type and form); and 3) extraction of rules not in a predetermined form but in any desired form. Some considerations and notes on network architecture, the training process, and the applicability of currently developed pruning techniques and rule extraction algorithms are discussed. This technology, which we have been developing for more than 10 years, has allowed us to create dozens of knowledge-based expert systems.
Investigating neural network efficiency and structure by weight investigation
, 2000
Cited by 1 (1 self)
This research investigates the analysis and efficiency of neural networks, using a technique for network link pruning. The technique is tested with inefficient architectures for the XOR problem, and then with a network from a real-world, complex image recognition task. By removing each link and examining the effect upon the error level, a fuzzy set is developed with membership indicating link saliency. As well as improving efficiency, the technique is useful for investigating solution architecture. It is hypothesised that similar insights may be gained for any problem solved by a similar architecture. This paper begins with the background, research and possible applications. The experimental design, implementation, methodology and results are then given. The conclusion considers implications and suggests further research. Results indicate that this technique can significantly improve the efficiency of a neural network for a real application. Both memory requirements and execution speeds improve by nearly 30 times. Further development is hoped to deliver improvements to efficiency and depth of investigation.

KEYWORDS: Image processing; Neural networks; Pruning; Skeletonising; Face recognition

1. BACKGROUND RESEARCH
There are few known practical design steps for the architecture of a neural net modelling a complex problem space. Huang and Huang (1991) consider theoretical methods to assess bounds on the number of hidden neurons. However, the solution is itself too theoretical for practical application. Others suggest the use of principal components to determine the required number of hidden neurons, but this is only a heuristic. Another approach is to use a fully connected net with what is thought to be a sufficiently large size. If a net over-generalises then it may be increased in size; otherwise, if it is not ge...
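The leave-one-out link-removal idea in the abstract above can be sketched in a few lines: zero each weight in turn, record the resulting error increase, and normalise the increases into [0, 1] as fuzzy membership values for link saliency. This is an illustrative sketch only; the hand-set 2-2-1 XOR network below is hypothetical and not taken from the paper.

```python
import math

# Hypothetical hand-set 2-2-1 net (tanh hidden units, sigmoid output)
# that solves XOR; the weight values are illustrative, not the paper's.
W1 = [[5.0, 5.0], [5.0, 5.0]]      # input -> hidden links
b1 = [-2.5, -7.5]
W2 = [5.0, -5.0]                   # hidden -> output links
b2 = -5.0

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def mse(W1, b1, W2, b2):
    total = 0.0
    for (x1, x2), t in DATA:
        h = [math.tanh(x1 * W1[0][j] + x2 * W1[1][j] + b1[j]) for j in range(2)]
        out = 1 / (1 + math.exp(-(h[0] * W2[0] + h[1] * W2[1] + b2)))
        total += (out - t) ** 2
    return total / len(DATA)

base = mse(W1, b1, W2, b2)

# Remove each link in turn, measure the error increase, and normalise
# into [0, 1] to form a fuzzy membership value indicating link saliency.
increase = {}
for i in range(2):
    for j in range(2):
        Wp = [row[:] for row in W1]
        Wp[i][j] = 0.0
        increase[f"W1[{i}][{j}]"] = max(mse(Wp, b1, W2, b2) - base, 0.0)
for j in range(2):
    Wp = W2[:]
    Wp[j] = 0.0
    increase[f"W2[{j}]"] = max(mse(W1, b1, Wp, b2) - base, 0.0)

worst = max(increase.values())
saliency = {k: v / worst for k, v in increase.items()}
```

Links whose membership value is near zero are candidates for pruning; a membership near one marks a link the solution depends on.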
An Application of Pruning in the Design of Neural Networks for Real Time Flood Forecasting
, 2005
Cited by 1 (0 self)
We propose the application of pruning in the design of neural networks for hydrological prediction. The basic idea of pruning algorithms, which have not yet been used in water resources problems, is to start from a network which is larger than necessary, and then remove the less influential parameters one at a time, yielding a much more parameter-parsimonious model. We compare pruned and complete predictors on two quite different Italian catchments. Remarkably, pruned models may provide better generalization than fully connected ones, thus improving the quality of the forecast.
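The "start larger than necessary, remove the least influential parameter one at a time" loop described above can be sketched as a greedy procedure. The model below is a hypothetical linear predictor standing in for a trained network (the loop is the same regardless of architecture), and the stopping rule (halt when the best removal would more than double the error) is an assumption for illustration.

```python
import random

random.seed(1)

# Illustrative "oversized" model: several coefficients are near zero,
# so some parameters really are superfluous.
TRUE_W = [1.5, -2.0, 0.0, 0.01, 0.9, 0.0, -0.02, 1.1]
X = [[random.gauss(0, 1) for _ in range(8)] for _ in range(64)]
Y = [sum(w * x for w, x in zip(TRUE_W, row)) + random.gauss(0, 0.05)
     for row in X]

def sse(mask):
    # Sum-of-squared-errors with the masked (pruned) parameters zeroed.
    err = 0.0
    for row, t in zip(X, Y):
        pred = sum(w * m * x for w, m, x in zip(TRUE_W, mask, row))
        err += (pred - t) ** 2
    return err

mask = [1.0] * 8
# Greedy pruning: repeatedly drop the single parameter whose removal
# raises the error least; stop when even the best candidate removal
# would more than double the current error.
while sum(mask) > 1:
    current = sse(mask)
    best_err, best_i = min(
        (sse([m if k != i else 0.0 for k, m in enumerate(mask)]), i)
        for i in range(8) if mask[i]
    )
    if best_err > 2 * current:
        break
    mask[best_i] = 0.0
```

With this seed the loop strips exactly the four near-zero coefficients and keeps the influential ones, which is the parameter-parsimonious outcome the abstract describes.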
Academic Group
Cited by 1 (0 self)
The tangent plane algorithm is a fast sequential learning method for multilayered feedforward neural networks that accepts almost-zero initial conditions for the connection weights, with the expectation that only the minimum number of weights will be activated. However, the inclusion of a tendency to move away from the origin in weight space can lead to large weights that are harmful to generalization. This paper evaluates two techniques used to limit the size of the weights, weight growing and weight elimination, in the tangent plane algorithm. Comparative tests were carried out against the Extreme Learning Machine (ELM), which is a fast global minimiser giving good generalization. Experimental results show that the generalization performance of the tangent plane algorithm with weight elimination is at least as good as that of the ELM algorithm, making it a suitable alternative for problems that involve time-varying data such as EEG and ECG signals.

Keywords—neural networks; backpropagation; generalization; tangent plane; weight elimination; extreme learning machine
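The weight-elimination regulariser mentioned in the abstract above is commonly written (in the Weigend-style form, which is an assumption here, not necessarily the exact variant the paper evaluates) as lam * sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2). Small weights are penalised roughly quadratically and driven toward zero, while each large weight contributes at most lam, so useful connections are not shrunk without bound:

```python
def weight_elimination_penalty(ws, lam=0.01, w0=1.0):
    # Weight-elimination cost (assumed Weigend-style form):
    # lam * sum_i r_i / (1 + r_i)  with  r_i = (w_i / w0)^2.
    return lam * sum((w / w0) ** 2 / (1 + (w / w0) ** 2) for w in ws)

def penalty_grad(w, lam=0.01, w0=1.0):
    # d/dw [ lam * r/(1+r) ] = lam * 2w / (w0^2 * (1+r)^2), r = (w/w0)^2.
    # This term is added to the error gradient during training.
    r = (w / w0) ** 2
    return lam * 2 * w / (w0 ** 2 * (1 + r) ** 2)

# A weight of 0.05 costs almost nothing; a weight of 10 costs ~lam.
small = weight_elimination_penalty([0.05])
large = weight_elimination_penalty([10.0])
```

The scale parameter w0 sets the boundary between "small" weights that get eliminated and "large" weights that are merely capped, which is why this penalty prunes more selectively than plain L2 weight decay.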