Results 1–10 of 13
An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons
 Journal of Machine Learning Research
Abstract

Cited by 148 (36 self)
In a recently published paper in JMLR, Demšar (2006) recommends a set of nonparametric statistical tests and procedures which can be safely used for comparing the performance of classifiers over multiple data sets. After studying the paper, we realize that it correctly introduces the basic procedures and some of the most advanced ones for comparisons against a control method. However, it does not deal with some advanced topics in depth. Regarding these topics, we focus on more powerful statistical procedures for comparing n×n classifiers. Moreover, we illustrate an easy way of obtaining adjusted and comparable p-values in multiple comparison procedures.
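The procedure described above (pairwise tests over multiple data sets followed by adjusted p-values) can be sketched as follows. This is a minimal illustration, not the authors' code; the accuracy matrix is invented, and the Holm step-down adjustment is one standard way of producing the comparable adjusted p-values the abstract mentions.

```python
import numpy as np
from scipy import stats

# Hypothetical accuracies: rows = data sets, columns = classifiers A, B, C.
scores = np.array([
    [0.81, 0.79, 0.75],
    [0.90, 0.88, 0.84],
    [0.77, 0.76, 0.70],
    [0.85, 0.83, 0.80],
    [0.88, 0.86, 0.82],
])

# Omnibus Friedman test over the three classifiers.
stat, p_friedman = stats.friedmanchisquare(*scores.T)

# Pairwise Wilcoxon signed-rank tests for the n x n comparisons.
pairs = [(0, 1), (0, 2), (1, 2)]
raw_p = [stats.wilcoxon(scores[:, i], scores[:, j]).pvalue for i, j in pairs]

def holm_adjust(pvalues):
    """Holm step-down adjustment, yielding directly comparable adjusted p-values."""
    m = len(pvalues)
    order = np.argsort(pvalues)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        candidate = min((m - rank) * pvalues[idx], 1.0)
        running_max = max(running_max, candidate)
        adjusted[idx] = running_max
    return adjusted

adj_p = holm_adjust(raw_p)
```

An adjusted p-value can be compared directly against the nominal significance level, which is the "easy way" of reporting multiple-comparison results that the abstract refers to.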
STATISTICAL PARAMETRIC SPEECH SYNTHESIS USING DEEP NEURAL NETWORKS
Abstract

Cited by 24 (2 self)
Conventional approaches to statistical parametric speech synthesis typically use decision tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech parameters given texts. Speech parameters are generated from the probability densities to maximize their output probabilities, then a speech waveform is reconstructed from the generated parameters. This approach is reasonably effective but has a couple of limitations, e.g. decision trees are inefficient for modelling complex context dependencies. This paper examines an alternative scheme based on a deep neural network (DNN). The relationship between input texts and their acoustic realizations is modeled by a DNN. The use of the DNN can address some limitations of the conventional approach. Experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters. Index Terms — Statistical parametric speech synthesis; Hidden Markov model; Deep neural network
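The core idea above (a feedforward network mapping text-derived context features to acoustic parameters) can be sketched as a plain forward pass. All dimensions and names here are illustrative assumptions, not taken from the paper, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights and zero biases; a real system would train these.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Hypothetical sizes: 50 context features -> 64 hidden units -> 40 acoustic params.
W1, b1 = init_layer(50, 64)
W2, b2 = init_layer(64, 40)

def dnn_forward(context_features):
    """Predict acoustic parameters (e.g. spectral features, F0) from
    text-derived context features, replacing the decision-tree lookup."""
    h = np.tanh(context_features @ W1 + b1)  # nonlinear hidden representation
    return h @ W2 + b2                       # linear output layer (regression)

x = rng.normal(size=50)        # one frame's context feature vector
acoustic = dnn_forward(x)      # predicted acoustic parameter vector
```

The generated acoustic parameters would then be passed to a vocoder to reconstruct the waveform, as in the HMM pipeline the paper compares against.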
Anytime learning of any-cost classifiers
Abstract

Cited by 4 (0 self)
The classification of new cases using a predictive model incurs two types of costs: testing costs and misclassification costs. Recent research efforts have resulted in several novel algorithms that attempt to produce learners that simultaneously minimize both types. In many real-life scenarios, however, we cannot afford to conduct all the tests required by the predictive model. For example, a medical center might have a fixed predetermined budget for diagnosing each patient. For cost-bounded classification, decision trees are considered attractive as they measure only the tests along a single path. In this work we present an anytime framework for producing decision tree-based classifiers that can make accurate decisions within a strict bound on testing costs. These bounds can be known to the learner, known to the classifier but not to the learner, or not predetermined. Extensive experiments with a variety of datasets show that our proposed framework produces trees with lower misclassification costs along a wide range of testing cost bounds.
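The budget-bounded classification step described above can be sketched as follows. The tree, test costs, and labels are invented for illustration and are not the paper's algorithm: the classifier walks the tree, pays each test's cost, and falls back to the current node's majority label when the budget cannot cover the next test.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str                      # majority class at this node
    feature: Optional[str] = None   # test performed here (None for leaves)
    cost: float = 0.0               # cost of performing the test
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def classify_with_budget(node, instance, budget):
    """Classify `instance`, never spending more than `budget` on tests."""
    while node.feature is not None:
        if node.cost > budget:
            return node.label       # cannot afford the next test
        budget -= node.cost
        node = node.left if instance[node.feature] <= node.threshold else node.right
    return node.label

# Hypothetical medical-style tree: a cheap test first, then an expensive one.
tree = Node(label="healthy", feature="temp", cost=1.0, threshold=37.5,
            left=Node(label="healthy"),
            right=Node(label="at-risk", feature="scan", cost=10.0, threshold=0.5,
                       left=Node(label="healthy"),
                       right=Node(label="sick")))

patient = {"temp": 38.2, "scan": 0.9}
print(classify_with_budget(tree, patient, budget=2.0))   # scan unaffordable
print(classify_with_budget(tree, patient, budget=15.0))  # full path affordable
```

Because only the tests along one root-to-leaf path are paid for, the worst-case spend is the costliest path, which is why the abstract calls decision trees attractive for cost-bounded classification.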
Anytime induction of low-cost, low-error classifiers: a sampling-based approach
, 2008
Abstract

Cited by 4 (0 self)
Machine learning techniques are gaining prevalence in the production of a wide range of classifiers for complex real-world applications with non-uniform testing and misclassification costs. The increasing complexity of these applications poses a real challenge to resource management during learning and classification. In this work we introduce ACT (anytime cost-sensitive tree learner), a novel framework for operating in such complex environments. ACT is an anytime algorithm that allows learning time to be increased in return for lower classification costs. It builds a tree top-down and exploits additional time resources to obtain better estimations for the utility of the different candidate splits. Using sampling techniques, ACT approximates the cost of the subtree under each candidate split and favors the one with a minimal cost. As a stochastic algorithm, ACT is expected to be able to escape local minima into which greedy methods may be trapped. Experiments with a variety of datasets were conducted to compare ACT to the state-of-the-art cost-sensitive tree learners. The results show that for the majority of domains ACT produces significantly less costly trees. ACT also exhibits good anytime behavior with diminishing returns.
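The sampling step above (approximating the cost of the subtree under each candidate split and favoring the cheapest) can be illustrated with a Monte Carlo toy. All names, data, and the cost stand-in are assumptions for illustration, not ACT's actual code.

```python
import random

random.seed(0)

# Toy data: items 0..19, with the "true" class being x >= 10.
data = list(range(20))

def label(x):
    return int(x >= 10)

def random_tree_cost(subset):
    """Noisy stand-in for growing one sampled subtree below a node and
    measuring its total testing + misclassification cost on `subset`."""
    labels = [label(x) for x in subset]
    minority = min(labels.count(0), labels.count(1))
    return minority * random.uniform(0.8, 1.2) + 0.1 * len(subset)

def split_on_10(d):  # separates the classes perfectly
    return [x for x in d if x < 10], [x for x in d if x >= 10]

def split_on_2(d):   # leaves the classes mixed on the right
    return [x for x in d if x < 2], [x for x in d if x >= 2]

def sampled_subtree_cost(split, d, n_samples=25):
    """Average cost over repeated sampled subtrees under `split`."""
    total = 0.0
    for _ in range(n_samples):
        left, right = split(d)
        total += random_tree_cost(left) + random_tree_cost(right)
    return total / n_samples

def choose_split(candidate_splits, d):
    return min(candidate_splits, key=lambda s: sampled_subtree_cost(s, d))
```

More samples tighten the cost estimate, which is how additional learning time buys lower classification cost in the anytime setting.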
Learning in a fixed or evolving network of agents
Abstract

Cited by 1 (1 self)
This paper investigates incremental multi-agent learning in static or evolving structured networks. Learning examples are incrementally distributed among the agents, and the objective is to build a common hypothesis that is consistent with all the examples present in the system, despite communication constraints. Recently, a first mechanism was proposed to deal with static networks, but its accuracy was reduced in some topologies. We propose here several possible improvements of this mechanism, whose different behaviors with respect to some efficiency requirements (redundancy, computational cost and communication cost) are experimentally investigated. Then, we provide an experimental analysis of some variants for evolving networks.
Argumentation-based agent support for learning policies in a coalition mission
Optimal Constraint-based Decision Tree Induction from Itemset Lattices
, 2010
Abstract

Cited by 1 (0 self)
Constraint-based decision tree induction from itemset lattices
evtree: Evolutionary Learning of Globally Optimal Trees in R
Abstract
however, such stochastic methods are rarely used in decision tree induction. One reason is probably that they are computationally much more demanding than a recursive forward search, but another is likely the lack of availability in major software packages. In particular, while there are several packages for R (R Core Team 2013) providing forward-search tree algorithms, there is little support for globally optimal trees. The former group of packages includes (among others) rpart (Therneau and Atkinson 1997), the open-source implementation of the CART algorithm; party, containing two tree algorithms with unbiased variable selection and statistical stopping criteria (Hothorn, Hornik, and Zeileis 2006; Zeileis, Hothorn, and Hornik 2008); and RWeka (Hornik, Buchta, and Zeileis 2009), the R interface to Weka (Witten and Frank 2011) with open-source implementations of tree algorithms such as J48 and M5P, the open-source implementations of C4.5 and M5, respectively (Quinlan 1992). A notable exception is the LogicReg package (Kooperberg and Ruczinski 2013) for logic regression, an algorithm for globally optimal trees based on binary covariates only and using simulated annealing. Furthermore, the GA package (Scrucca 2013) provides a collection of general-purpose functions which allows the application of a wide range of genetic algorithm methods. See Hothorn (2013) for an overview of further recursive partitioning packages for R. To fill this gap, we introduce a new R package, evtree, available from the Comprehensive R Archive Network at
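The search strategy behind evtree (a global evolutionary search instead of a greedy forward search) can be illustrated with a deliberately tiny Python toy. Here the "tree" is reduced to a single split threshold and everything (data, mutation scale, population size) is an invented assumption; evtree itself evolves full trees in R.

```python
import random

random.seed(1)

# Toy 1-D data: class 1 iff x > 5.
data = [(k / 10.0, int(k / 10.0 > 5.0)) for k in range(0, 101)]

def accuracy(threshold):
    """Fitness of a candidate split: fraction of points classified correctly."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

def evolve(generations=30, pop_size=20):
    """Evolutionary search over split thresholds: keep the fittest half,
    produce children by Gaussian mutation, repeat."""
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=accuracy, reverse=True)
        parents = population[: pop_size // 2]        # elitist selection
        children = [p + random.gauss(0, 0.5) for p in parents]  # mutation
        population = parents + children
    return max(population, key=accuracy)

best = evolve()
```

Because the fittest candidates always survive, the best accuracy never decreases across generations, and the search can recover splits a greedy, locally optimal criterion might miss.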
SVR vs MLP for Phone Duration Modelling in HMM-based Speech Synthesis
, 2014
Abstract
In this paper we investigate external phone duration models (PDMs) for improving the quality of synthetic speech in hidden Markov model (HMM)-based speech synthesis. Support Vector Regression (SVR) and Multilayer Perceptron (MLP) models were used for this task. SVR and MLP PDMs were compared with the explicit duration modelling of hidden semi-Markov models (HSMMs). Experiments on an American English database showed the SVR outperforming the MLP and HSMM duration modelling in both objective and subjective evaluation. In the objective test, SVR outperformed the MLP and HSMM models, achieving 15.3% and 25.09% relative improvement in terms of root mean square error (RMSE), respectively. Moreover, in the subjective evaluation test on synthesized speech, the SVR model was preferred over the MLP and HSMM models, achieving preference scores of 35.93% and 56.30%, respectively. Index Terms: phone duration modelling, Support Vector Regression, Multilayer Perceptron, HSMM explicit duration modelling
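The objective metric used above (RMSE of predicted phone durations, with relative improvement between models) can be made concrete with a short sketch. The duration values below are invented for illustration and are not the paper's data.

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error between predicted and reference durations."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def relative_improvement(rmse_baseline, rmse_model):
    """Percentage RMSE reduction of `rmse_model` relative to `rmse_baseline`."""
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline

# Hypothetical phone durations in milliseconds.
target   = [80, 120, 95, 60, 140]
svr_pred = [82, 118, 97, 63, 138]
mlp_pred = [85, 115, 100, 55, 145]

print(relative_improvement(rmse(mlp_pred, target), rmse(svr_pred, target)))
```

A positive relative improvement means the second model's predictions sit closer to the reference durations, which is how the paper's 15.3% and 25.09% figures should be read.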