Results 1 - 2 of 2
Oversearching and Layered Search in Empirical Learning
In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1995
Abstract

Cited by 86 (0 self)
When learning classifiers, more extensive search for rules is shown to lead to lower predictive accuracy on many of the real-world domains investigated. This counterintuitive result is particularly relevant to recent systematic search methods that use risk-free pruning to achieve the same outcome as exhaustive search. We propose an iterated search method that commences with greedy search, extending its scope at each iteration until a stopping criterion is satisfied. This layered search is often found to produce theories that are more accurate than those obtained with either greedy search or moderately extensive beam search.

1 Introduction

Mitchell [1982] observes that the generalization implicit in learning from examples can be viewed as a search over the space of possible theories. From this perspective, most machine learning methods carry out a series of local searches in the vicinity of the current theory, selecting at each step the most promising improvement. Covering algorithms ...
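The layered search the abstract describes — start with greedy search, widen the search at each iteration, stop when widening no longer helps — can be sketched as iterated beam search. This is an illustrative sketch only, not the paper's implementation: the successor/score interface, the width-doubling schedule, and the no-improvement stopping rule are my assumptions.

```python
def beam_search(initial, successors, score, width):
    """Beam search over a finite, acyclic state space: keep the
    `width` highest-scoring candidates at each level.
    width=1 is ordinary greedy search."""
    beam = [initial]
    best = initial
    while True:
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            return best
        candidates.sort(key=score, reverse=True)
        beam = candidates[:width]
        if score(beam[0]) > score(best):
            best = beam[0]

def layered_search(initial, successors, score, max_width=32):
    """Layered search (sketch): begin with greedy search (beam width 1)
    and double the width each iteration; stop when a wider search no
    longer improves the best theory found.  The doubling schedule and
    stopping criterion here are assumptions, not the paper's."""
    width = 1
    best = beam_search(initial, successors, score, width)
    while width < max_width:
        width *= 2
        wider = beam_search(initial, successors, score, width)
        if score(wider) <= score(best):
            break
        best = wider
    return best

# Toy example: greedy search is deceived by the locally better branch
# 'a', while a beam of width 2 also keeps 'b' and reaches 'b1'.
succs = {'root': ['a', 'b'], 'a': ['a1'], 'b': ['b1'], 'a1': [], 'b1': []}
scores = {'root': 0, 'a': 2, 'b': 1, 'a1': 3, 'b1': 10}
greedy_result = beam_search('root', succs.__getitem__, scores.__getitem__, 1)
layered_result = layered_search('root', succs.__getitem__, scores.__getitem__)
```

In the toy example the greedy pass returns 'a1' (score 3), and layered search, after widening once, returns 'b1' (score 10) and stops when further widening no longer improves it.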
Self Organizing Mixture Network in Mixture Discriminant Analysis: An Experimental Study
Abstract
Abstract—In recent work on mixture discriminant analysis (MDA), the expectation-maximization (EM) algorithm is used to estimate the parameters of Gaussian mixtures. However, the initial values given to the EM algorithm affect the final parameter estimates. Moreover, when the EM algorithm is applied twice to the same data set, it can give different parameter estimates, and this affects the classification accuracy of MDA. To overcome this problem, we use the Self Organizing Mixture Network (SOMN) algorithm to estimate the parameters of the Gaussian mixtures in MDA, since SOMN is more robust when random initial values of the parameters are used [5]. We show the effectiveness of this method on the popular simulated waveform data set and the real glass data set.
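The initialization sensitivity the abstract refers to is easy to reproduce. The following is an illustrative sketch, not the paper's code or SOMN: a deliberately simplified one-dimensional two-component EM (fixed unit variances, equal weights — my simplifications) run from two different starting means on the same data, showing that the final estimates depend on the initial values (here the runs converge to label-swapped solutions).

```python
import math
import random

def em_1d(data, mu_init, iters=50):
    """Simplified EM for a two-component 1-D Gaussian mixture with
    fixed unit variances and equal weights; returns estimated means."""
    mu = list(mu_init)
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r0 = []
        for x in data:
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            r0.append(p0 / (p0 + p1))
        # M-step: responsibility-weighted mean updates
        w0 = sum(r0)
        w1 = len(data) - w0
        mu[0] = sum(r * x for r, x in zip(r0, data)) / w0
        mu[1] = sum((1 - r) * x for r, x in zip(r0, data)) / w1
    return mu

# Synthetic data: two well-separated clusters around -2 and 3
random.seed(0)
data = ([random.gauss(-2, 1) for _ in range(200)] +
        [random.gauss(3, 1) for _ in range(200)])

# Same data, two different initializations of the means
a = em_1d(data, (-1.0, 1.0))  # converges near (-2, 3)
b = em_1d(data, (0.1, 0.0))   # converges near (3, -2): labels swapped
```

In this toy case the two runs recover the same clusters but assign them to different components, so the raw parameter estimates differ between runs — a mild form of the instability the authors address; with overlapping clusters or free variances, EM from different starts can also land in genuinely different local optima.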