Results 1–10 of 10
Almost-Everywhere Algorithmic Stability and Generalization Error
In UAI 2002: Uncertainty in Artificial Intelligence, 2002
Cited by 43 (8 self)
Abstract:
We introduce a new notion of algorithmic stability, which we call training stability.
Feature selection with ensembles, artificial variables, and redundancy elimination
JMLR, 2009
Cited by 7 (1 self)
Abstract:
Predictive models benefit from a compact, non-redundant subset of features that improves interpretability and generalization. Modern data sets are wide, dirty, mixed with both numerical and categorical predictors, and may contain interactive effects that require complex models. This is a challenge for filters, wrappers, and embedded feature selection methods. We describe details of an algorithm using tree-based ensembles to generate a compact subset of non-redundant features. Parallel and serial ensembles of trees are combined into a mixed method that can uncover masking and detect features of secondary effect. Simulated and actual examples illustrate the effectiveness of the approach.
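The artificial-variable idea in this abstract can be sketched roughly as follows. This is an illustrative approximation, not the paper's exact algorithm; it assumes scikit-learn is available, and all names and parameters are hypothetical:

```python
# Hedged sketch of feature selection with artificial (shuffled) contrast
# variables: append a permuted copy of each feature, fit a tree ensemble,
# and keep only real features whose importance beats the best contrast.
# Illustrative only -- not the paper's exact algorithm; assumes scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))                  # 5 real features; only 2 informative
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=n)

# Artificial contrasts: each column shuffled independently, destroying
# any relation to y while preserving the marginal distribution.
X_contrast = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
X_aug = np.hstack([X, X_contrast])

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_aug, y)
imp = forest.feature_importances_
threshold = imp[p:].max()                    # strongest purely-noise importance
selected = [j for j in range(p) if imp[j] > threshold]
print("selected features:", selected)
```

With the strong simulated signal above, features 0 and 1 should clear the noise threshold; the paper's mixed method additionally handles masking and removes redundant survivors, which this sketch does not attempt.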
Demonstrating the Stability of Support Vector Machines for Classification
2005
Cited by 2 (0 self)
Abstract:
In this paper, we deal with the stability of support vector machines (SVMs) in classification tasks. We decompose the average prediction error of support vector machines into the bias and the variance terms, and we define the aggregation effect. By estimating the aforementioned terms with bootstrap smoothing techniques, we demonstrate that support vector machines are stable classifiers. To investigate the stability of the SVM, several experiments were conducted. The first experiment deals with face detection. The second experiment is related to the binary classification of three artificially generated data sets stemming from known distributions and an additional synthetic data set known as “Waveform”. Finally, in order to support our claim on the stability of SVMs, two more binary classification experiments were carried out on the “Pima Indian Diabetes” and the “Wisconsin Breast Cancer” data sets. In general, bagging is not expected to improve the classification accuracy of SVMs.
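The bootstrap-smoothing estimate of the variance term can be illustrated with a toy stand-in. The sketch below uses a nearest-centroid classifier instead of an SVM (to stay dependency-free) and is not the paper's estimator; all names are illustrative:

```python
# Hedged sketch of estimating a classifier's variance term via bootstrap
# smoothing: refit on bootstrap resamples and measure disagreement with
# the aggregated (majority-vote) prediction. A nearest-centroid rule
# stands in for the SVM; this is illustrative, not the paper's estimator.
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = np.vstack([rng.normal(-1.0, 1.0, size=(n // 2, 2)),
               rng.normal(+1.0, 1.0, size=(n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def fit_predict(Xtr, ytr, Xte):
    """Nearest-centroid classifier: label each point by the closer class mean."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    return (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)

B = 100
preds = np.empty((B, n), dtype=int)
for b in range(B):
    idx = rng.integers(0, n, size=n)                 # bootstrap resample
    preds[b] = fit_predict(X[idx], y[idx], X)

aggregated = (preds.mean(axis=0) > 0.5).astype(int)  # bootstrap-smoothed vote
variance = (preds != aggregated).mean()              # mean disagreement with vote
print("variance term ~", round(float(variance), 3))
```

A small variance term is what "stable" means in this setting: the individual bootstrap fits barely disagree with the aggregated vote, which is also the intuition behind the abstract's closing remark that bagging is not expected to help SVMs.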
Bagging with asymmetric costs for misclassified and correctly classified examples
Local Negative Correlation with Resampling
Abstract. This paper deals with a learning algorithm that combines two well-known methods for generating ensemble diversity: error negative correlation and resampling. In this algorithm, a set of learners iteratively and synchronously improve their state using information about the performance of a fixed number of other learners in the ensemble, generating a form of local negative correlation. Resampling allows the base algorithm to control the impact of highly influential data points, which in turn can improve its generalization error. The resulting algorithm can be viewed as a generalization of bagging, where each learner is no longer independent but can be locally coupled with other learners. We demonstrate our technique on two real data sets using neural network ensembles.
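The negative-correlation ingredient can be illustrated with a toy gradient-trained ensemble. This is the classic global variant (every member coupled to the ensemble mean) rather than the paper's local neighbourhood coupling, it omits the resampling part, and all names and constants are illustrative:

```python
# Hedged toy of negative-correlation training: each linear member descends
# its squared error minus lam times its squared deviation from the ensemble
# mean, so members are pushed apart while the mean still tracks the target.
# Global (mean-coupled) variant for brevity; the paper couples each learner
# only to a few neighbours and adds resampling on top.
import numpy as np

rng = np.random.default_rng(3)
n, M, lam, lr = 200, 5, 0.5, 0.05
x = rng.uniform(-1.0, 1.0, size=n)
y = 2.0 * x + 0.1 * rng.normal(size=n)            # true slope is 2

w = rng.normal(size=M)                            # one slope per member
for _ in range(500):
    f = np.outer(w, x)                            # member predictions, (M, n)
    fbar = f.mean(axis=0)                         # ensemble mean prediction
    # d/dw_i of mean[(f_i - y)^2 - lam * (f_i - fbar)^2], dropping the
    # small coupling through fbar for simplicity:
    grad = 2.0 * ((f - y) - lam * (f - fbar)) * x
    w -= lr * grad.mean(axis=1)

print("mean slope:", round(float(w.mean()), 2))   # ensemble mean near 2
```

The penalty only slows the collapse of diversity rather than preserving it forever; the averaged prediction still converges to the target, which is the point of the decomposition.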
Two bagging algorithms with coupled learners to encourage diversity
Abstract. In this paper, we present two ensemble learning algorithms which make use of bootstrapping and out-of-bag estimation in an attempt to inherit the robustness of bagging to overfitting. Unlike bagging, with these algorithms the learners have visibility of the other learners and cooperate to achieve diversity, a characteristic that has proved to be a major concern for ensemble models. Experiments are provided using two regression problems obtained from the UCI repository.
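The bagging-with-out-of-bag base that these two algorithms build on can be sketched as below; the coupled-learner variants themselves are not reproduced here, and the polynomial base learner, data, and names are all illustrative:

```python
# Hedged sketch of plain bagging with out-of-bag (OOB) error estimation
# for regression -- the starting point the two coupled-learner algorithms
# modify. Base learner and data are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, B = 300, 50
x = rng.uniform(-3.0, 3.0, size=n)
y = np.sin(x) + 0.2 * rng.normal(size=n)

oob_sum = np.zeros(n)                         # sum of OOB predictions per point
oob_cnt = np.zeros(n)                         # how many learners left it out
for b in range(B):
    idx = rng.integers(0, n, size=n)          # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(n), idx)     # points this learner never saw
    coef = np.polyfit(x[idx], y[idx], 5)      # degree-5 polynomial base learner
    oob_sum[oob] += np.polyval(coef, x[oob])
    oob_cnt[oob] += 1

mask = oob_cnt > 0                            # nearly every point with B = 50
oob_mse = np.mean((oob_sum[mask] / oob_cnt[mask] - y[mask]) ** 2)
print("OOB MSE:", round(float(oob_mse), 3))
```

Each point is scored only by learners that never saw it, so the OOB MSE acts as a built-in cross-validation estimate, which is why it is attractive for monitoring overfitting while the learners cooperate.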
Extra-label information: experiments with view-based classification
Abstract. Extra information is often readily available but not utilized in a classification paradigm. Here we explore using extra labels (profile faces and rotated faces) to aid in distinguishing faces versus non-faces. We propose a way to combine simple discriminant classifiers to build more complex ones, and we justify the combination in a probabilistic setting.
Managing Diversity In Regression Ensembles
Journal of Machine Learning Research, 2005
Abstract:
We describe the results of a study on a heuristic technique that claimed to effectively balance diversity against individual accuracy between members of a neural network regression ensemble. We formalise this technique, providing a statistical interpretation of its success.