Results 1 – 8 of 8
Symbolic and neural learning algorithms: an experimental comparison
Machine Learning, 1991

Cited by 109 (6 self)
Abstract: Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a "distributed" output encoding.
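The perceptron rule this abstract compares against backpropagation and ID3 can be sketched in a few lines; the AND function below is a hypothetical stand-in for the paper's five real-world data sets, not the original experimental setup:

```python
# Minimal perceptron learning rule: w <- w + lr * (target - prediction) * x.
# The AND function stands in for the paper's real-world data sets.

def predict(w, b, x):
    """Threshold activation: 1 if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Train weights and bias on (input, target) pairs with the perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:
            err = t - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

Because AND is linearly separable, the rule converges to a perfect classifier within a handful of epochs; a multilayer network trained by backpropagation would be needed for non-separable problems.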
An Application of Pruning in the Design of Neural Networks for Real Time Flood Forecasting
, 2005

Cited by 2 (0 self)
We propose the application of pruning in the design of neural networks for hydrological prediction. The basic idea of pruning algorithms, which have not yet been used in water resources problems, is to start from a network which is larger than necessary and then remove the least influential parameters one at a time, yielding a much more parameter-parsimonious model. We compare pruned and complete predictors on two quite different Italian catchments. Remarkably, pruned models may provide better generalization than fully connected ones, thus improving the quality of the forecast.
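The pruning idea described above (start oversized, remove the least influential parameters one at a time) can be sketched generically; weight magnitude is used here as a hypothetical stand-in for the paper's influence criterion:

```python
def prune_least_salient(weights, keep):
    """Iteratively zero out the smallest-magnitude weights until `keep` remain.
    Magnitude stands in for whatever influence measure the pruning method uses."""
    pruned = list(weights)
    # Remove one parameter at a time, as in the oversized-network approach.
    while sum(1 for w in pruned if w != 0.0) > keep:
        live = [i for i, w in enumerate(pruned) if w != 0.0]
        victim = min(live, key=lambda i: abs(pruned[i]))
        pruned[victim] = 0.0
    return pruned

print(prune_least_salient([0.9, -0.05, 0.4, 0.01, -0.7], keep=3))
# the two smallest-magnitude entries (0.01 and -0.05) are zeroed out
```

In practice each removal would be followed by a retraining step and a check that validation error has not degraded before pruning further.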
Academic Group

Cited by 1 (0 self)
Abstract: The tangent plane algorithm is a fast sequential learning method for multilayered feedforward neural networks that accepts almost zero initial conditions for the connection weights, with the expectation that only the minimum number of weights will be activated. However, the inclusion of a tendency to move away from the origin in weight space can lead to large weights that are harmful to generalization. This paper evaluates two techniques used to limit the size of the weights in the tangent plane algorithm: weight growing and weight elimination. Comparative tests were carried out using the Extreme Learning Machine (ELM), a fast global minimiser giving good generalization. Experimental results show that the generalization performance of the tangent plane algorithm with weight elimination is at least as good as that of the ELM algorithm, making it a suitable alternative for problems that involve time-varying data such as EEG and ECG signals.
Keywords: neural networks; backpropagation; generalization; tangent plane; weight elimination; extreme learning machine
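A common concrete form of the weight-elimination penalty discussed here is the Weigend-style term lam * sum((w_i/w0)^2 / (1 + (w_i/w0)^2)); whether this exact form is the one used in the paper is an assumption, and the constants below are illustrative:

```python
def weight_elimination_penalty(weights, lam=0.1, w0=1.0):
    """Weigend-style weight-elimination term: lam * sum((w/w0)^2 / (1+(w/w0)^2)).
    Large |w| contributes roughly lam; small |w| contributes almost nothing,
    so the optimizer is encouraged to eliminate small weights entirely."""
    return lam * sum((w / w0) ** 2 / (1.0 + (w / w0) ** 2) for w in weights)

# A tiny weight costs almost nothing; a large weight costs nearly the full lam.
small = weight_elimination_penalty([0.01])
large = weight_elimination_penalty([10.0])
```

Unlike plain L2 weight decay, this penalty saturates for large weights, so it prunes small weights without strongly shrinking the large ones the network genuinely needs.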
Investigating neural network efficiency and structure by weight investigation
, 2000

Cited by 1 (1 self)
This research investigates the analysis and efficiency of neural networks, using a technique for network link pruning. The technique is tested with inefficient architectures for the XOR problem and then for a network from a real-world, complex image recognition task. By removing each link and examining the effect upon error level, a fuzzy set is developed whose membership indicates link saliency. As well as improving efficiency, the technique is useful for investigating solution architecture. It is hypothesised that similar insights may be gained for any problem solved by a similar architecture. This paper begins with the background, research and possible applications. Experimental design, implementation, methodology and results are given. The conclusion considers implications and suggests further research. Results indicate that this technique can significantly improve the efficiency of a neural network for a real application: both memory requirements and execution speeds improve by nearly 30 times. Further development is hoped to deliver improvements to efficiency and depth of investigation.
KEYWORDS: Image processing; Neural networks; Pruning; Skeletonising; Face Recognition
1. BACKGROUND RESEARCH
There are few known practical design steps for the architecture of a neural net modelling a complex problem space. Huang and Huang (1991) consider theoretical methods to assess bounds on the number of hidden neurons; however, the solution is itself too theoretical for practical application. Others suggest the use of principal components to design the required number of hidden neurons, but this is only a heuristic. Another approach is to use a fully connected net with what is thought to be a sufficiently large size. If a net over-generalises then it may be increased in size; otherwise, if it is not ge...
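The lesioning procedure described (remove each link, measure the change in error, normalize the increases into fuzzy memberships) can be sketched generically; the toy error function below is a hypothetical placeholder for the real network:

```python
def link_saliency(weights, error_fn):
    """For each link, set its weight to zero, measure the error increase, and
    normalize the increases into [0, 1] fuzzy memberships (1 = most salient)."""
    base = error_fn(weights)
    increases = []
    for i in range(len(weights)):
        lesioned = list(weights)
        lesioned[i] = 0.0  # remove this one link
        increases.append(max(error_fn(lesioned) - base, 0.0))
    top = max(increases) or 1.0  # avoid division by zero if nothing matters
    return [inc / top for inc in increases]

# Toy "network": a weighted sum of fixed inputs compared against a target.
inputs, target = [1.0, 2.0, 0.0], 5.0
err = lambda w: abs(sum(wi * xi for wi, xi in zip(w, inputs)) - target)
memberships = link_saliency([1.0, 2.0, 3.0], err)
# the third link feeds a zero input, so removing it changes nothing
```

Links whose membership is near zero are candidates for pruning, which is how the technique yields both a smaller network and insight into the solution's structure.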
Generation of Explicit Knowledge from Empirical Data through Pruning of Trainable Neural Networks

Cited by 1 (0 self)
This paper presents a generalized technology for extraction of explicit knowledge from data. The main ideas are (1) maximal reduction of network complexity (not only removal of neurons or synapses, but removal of all unnecessary elements and signals and reduction of the complexity of elements); (2) use of an adjustable and flexible pruning process (the pruning sequence should not be predetermined; the user should have the possibility to prune the network in their own way in order to achieve a desired network structure, for the purpose of extracting rules of the desired type and form); and (3) extraction of rules not in a predetermined form but in any desired form. Some considerations and notes about network architecture, the training process, and the applicability of currently developed pruning techniques and rule extraction algorithms are discussed. This technology, which we have been developing for more than 10 years, has allowed us to create dozens of knowledge-based expert systems.
Improved Generalization in Recurrent Neural Networks Using the Tangent Plane Algorithm

Cited by 1 (0 self)
Abstract: The tangent plane algorithm for real-time recurrent learning (TPA-RTRL) is an effective online training method for fully recurrent neural networks. TPA-RTRL uses the method of approaching tangent planes to accelerate the learning processes. Compared to the original gradient descent real-time recurrent learning algorithm (GD-RTRL) it is very fast and avoids problems like local minima of the search space. However, the TPA-RTRL algorithm actively encourages the formation of large weight values that can be harmful to generalization. This paper presents a new TPA-RTRL variant that encourages small weight values to decay to zero by using a weight elimination procedure built into the geometry of the algorithm. Experimental results show that the new algorithm gives good generalization over a range of network sizes whilst retaining the fast convergence speed of the TPA-RTRL algorithm.
Keywords: real-time recurrent learning; tangent plane; generalization; weight elimination; temporal pattern recognition; nonlinear process control
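The decay-to-zero behaviour described can be illustrated with a plain gradient step that adds a Weigend-style weight-elimination term to the error gradient; the learning rate and penalty constants are illustrative assumptions, and this is not the tangent plane geometry itself:

```python
def update_with_elimination(w, grad, lr=0.1, lam=0.05, w0=1.0):
    """One gradient step on a weight, with a weight-elimination term added.
    The penalty gradient d/dw of lam*(w/w0)^2/(1+(w/w0)^2) pulls small
    weights toward zero while barely affecting large, useful weights."""
    penalty_grad = lam * 2 * (w / w0 ** 2) / (1.0 + (w / w0) ** 2) ** 2
    return w - lr * (grad + penalty_grad)

# With a zero error gradient, a small weight steadily decays toward zero.
w = 0.2
for _ in range(50):
    w = update_with_elimination(w, grad=0.0)
```

When the error gradient is nonzero the two terms compete, so only weights the error surface does not defend are eliminated.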
University advisor:
, 2008
In this thesis we present a theoretical investigation of the feasibility of using a problem-specific inductive bias for backpropagated neural networks. We argue that if a learning algorithm is biased towards optimizing a certain performance measure, it is plausible to assume that it will generate a higher performance score when evaluated using that particular measure. We use the term measure function for a multi-criteria evaluation function that can also be used as an inherent function in learning algorithms, in order to customize the bias of a learning algorithm for a specific problem; hence the term measure-based learning algorithms. We discuss different characteristics of the most commonly used performance measures and establish ...
OSCAR: One-class SVM for Accurate Recognition of Cis-elements
Bioinformatics
Motivation: Traditional methods to identify potential binding sites of known transcription factors still suffer from a large number of false predictions. They mostly use sequence information in a position-specific manner and neglect other types of information hidden in the proximal promoter regions. Recent biological and computational research, however, suggests that there exist not only locational preferences of binding, but also correlations between transcription factors.
Results: In this paper, we propose a novel approach, OSCAR, which utilizes one-class SVM algorithms and incorporates multiple factors to aid the recognition of transcription factor binding sites. Using both synthetic and real data, we find that our method outperforms existing algorithms, especially in the high-sensitivity region. The performance of our method can be further improved by taking into account the locational preference of binding events. By testing on experimentally verified binding sites of the GATA and HNF transcription factor families, we show that our algorithm can infer the true co-occurring motif pairs accurately, and by considering the co-occurrences of correlated motifs, we not only filter out false predictions but also increase the sensitivity.
Availability: An online server based on OSCAR is available