Results 1 – 7 of 7
An extension on "Statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons
 Journal of Machine Learning Research
Abstract

Cited by 54 (13 self)
In a recently published paper in JMLR, Demšar (2006) recommends a set of nonparametric statistical tests and procedures which can be safely used for comparing the performance of classifiers over multiple data sets. After studying it, we realize that the paper correctly introduces the basic procedures and some of the most advanced ones when comparing against a control method. However, it does not deal with some advanced topics in depth. Regarding these topics, we focus on more powerful proposals of statistical procedures for comparing n×n classifiers. Moreover, we illustrate an easy way of obtaining adjusted and comparable p-values in multiple comparison procedures.
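As a concrete aside, the "adjusted p-values" this abstract refers to can be illustrated with the standard Holm step-down correction (a generic sketch, not code from the paper; the function name and the example values are ours):

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment: sort p-values ascending, multiply the
    i-th smallest (0-based) by (m - i), enforce monotonicity, cap at 1."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        candidate = min(1.0, (m - rank) * pvalues[idx])
        running_max = max(running_max, candidate)  # adjusted p-values never decrease
        adjusted[idx] = running_max
    return adjusted

# e.g. holm_adjust([0.01, 0.04, 0.03]) gives roughly [0.03, 0.06, 0.06]
```

Unlike a plain Bonferroni correction (multiply every p-value by m), the step-down scheme is uniformly more powerful while still controlling the family-wise error rate.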
Anytime induction of low-cost, low-error classifiers: a sampling-based approach
, 2008
Abstract

Cited by 4 (0 self)
Machine learning techniques are gaining prevalence in the production of a wide range of classifiers for complex real-world applications with non-uniform testing and misclassification costs. The increasing complexity of these applications poses a real challenge to resource management during learning and classification. In this work we introduce ACT (anytime cost-sensitive tree learner), a novel framework for operating in such complex environments. ACT is an anytime algorithm that allows learning time to be increased in return for lower classification costs. It builds a tree top-down and exploits additional time resources to obtain better estimations for the utility of the different candidate splits. Using sampling techniques, ACT approximates the cost of the subtree under each candidate split and favors the one with a minimal cost. As a stochastic algorithm, ACT is expected to be able to escape local minima, into which greedy methods may be trapped. Experiments with a variety of datasets were conducted to compare ACT to the state-of-the-art cost-sensitive tree learners. The results show that for the majority of domains ACT produces significantly less costly trees. ACT also exhibits good anytime behavior with diminishing returns.
Anytime learning of any-cost classifiers
Abstract

Cited by 2 (0 self)
The classification of new cases using a predictive model incurs two types of costs—testing costs and misclassification costs. Recent research efforts have resulted in several novel algorithms that attempt to produce learners that simultaneously minimize both types. In many real-life scenarios, however, we cannot afford to conduct all the tests required by the predictive model. For example, a medical center might have a fixed predetermined budget for diagnosing each patient. For cost-bounded classification, decision trees are considered attractive as they measure only the tests along a single path. In this work we present an anytime framework for producing decision-tree-based classifiers that can make accurate decisions within a strict bound on testing costs. These bounds can be known to the learner, known to the classifier but not to the learner, or not predetermined. Extensive experiments with a variety of datasets show that our proposed framework produces trees with lower misclassification costs along a wide range of testing cost bounds.
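The appeal of decision trees here—paying only for the tests along a single root-to-leaf path, under a hard budget—can be sketched roughly as follows. This is a hypothetical illustration, not the authors' algorithm: the `Node` layout, the per-test `cost` field, and the fallback to a stored majority label when the budget runs out are all our assumptions.

```python
class Node:
    """Internal nodes test x[feature] against threshold and carry a test
    cost; leaves carry only a label. Every node also stores the majority
    label of its subtree so classification can stop early."""
    def __init__(self, feature=None, cost=0.0, threshold=None,
                 left=None, right=None, label=None):
        self.feature, self.cost, self.threshold = feature, cost, threshold
        self.left, self.right, self.label = left, right, label

def classify_within_budget(node, x, budget):
    """Walk one path, paying each test's cost once; if the next test
    would exceed the budget, return the current subtree's majority label."""
    spent = 0.0
    while node.feature is not None:
        if spent + node.cost > budget:
            return node.label, spent      # budget exhausted: best guess so far
        spent += node.cost
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label, spent              # reached a leaf within budget

# Tiny example tree: one test on feature 0 costing 2.0.
leaf_a, leaf_b = Node(label="A"), Node(label="B")
root = Node(feature=0, cost=2.0, threshold=0.5,
            left=leaf_a, right=leaf_b, label="A")
# classify_within_budget(root, [0.3], 5.0) -> ("A", 2.0)  test affordable
# classify_within_budget(root, [0.9], 1.0) -> ("A", 0.0)  budget too small
```

The key property the abstract relies on is visible in the loop: the total spent is the sum of costs on one path only, never over the whole tree.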
Learning in a fixed or evolving network of agents
Abstract

Cited by 1 (1 self)
This paper investigates incremental multi-agent learning in static or evolving structured networks. Learning examples are incrementally distributed among the agents, and the objective is to build a common hypothesis that is consistent with all the examples present in the system, despite communication constraints. Recently, a first mechanism was proposed to deal with static networks, but its accuracy was reduced in some topologies. We propose here several possible improvements of this mechanism, whose different behaviors with respect to some efficiency requirements (redundancy, computational cost and communication cost) are experimentally investigated. Then, we provide an experimental analysis of some variants for evolving networks.
STATISTICAL PARAMETRIC SPEECH SYNTHESIS USING DEEP NEURAL NETWORKS
Abstract
Conventional approaches to statistical parametric speech synthesis typically use decision-tree-clustered context-dependent hidden Markov models (HMMs) to represent probability densities of speech parameters given texts. Speech parameters are generated from the probability densities to maximize their output probabilities, then a speech waveform is reconstructed from the generated parameters. This approach is reasonably effective but has a couple of limitations, e.g. decision trees are inefficient at modeling complex context dependencies. This paper examines an alternative scheme that is based on a deep neural network (DNN). The relationship between input texts and their acoustic realizations is modeled by a DNN. The use of the DNN can address some limitations of the conventional approach. Experimental results show that the DNN-based systems outperformed the HMM-based systems with similar numbers of parameters. Index Terms — Statistical parametric speech synthesis; Hidden Markov model; Deep neural network;
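The text-to-acoustics mapping this abstract describes amounts to a plain feed-forward pass per frame. The sketch below is purely illustrative (not the paper's model): layer sizes, the weight scale, and the feature dimensions are made up, and the weights are untrained.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dnn_acoustic_model(linguistic_features, weights, biases):
    """Feed-forward pass: ReLU hidden layers, linear output layer mapping
    a per-frame linguistic feature vector to acoustic parameters."""
    h = linguistic_features
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return h @ weights[-1] + biases[-1]   # acoustic parameter vector

rng = np.random.default_rng(0)
dims = [300, 256, 256, 127]               # hypothetical layer sizes
weights = [rng.standard_normal((a, b)) * 0.01 for a, b in zip(dims, dims[1:])]
biases = [np.zeros(b) for b in dims[1:]]

frame = rng.standard_normal(300)          # one frame of linguistic features
acoustic = dnn_acoustic_model(frame, weights, biases)
```

In the HMM-based pipeline, the decision tree partitions contexts into discrete clusters, each with its own density; the DNN replaces that hard partition with a single smooth function of the full context vector, which is where the modeling advantage comes from.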
RESEARCH ARTICLE Open Access
"... search for DRG-systems applied to Austrian health data ..."