## On bias, variance, 0/1-loss, and the curse-of-dimensionality (1997)

Venue: Data Mining and Knowledge Discovery

Citations: 193 (1 self)

### BibTeX

    @ARTICLE{Friedman97onbias,
      author  = {Jerome H. Friedman},
      title   = {On bias, variance, 0/1-loss, and the curse-of-dimensionality},
      journal = {Data Mining and Knowledge Discovery},
      year    = {1997},
      volume  = {1},
      pages   = {55--77}
    }



### Abstract

The classification problem is considered in which an output variable y assumes discrete values with respective probabilities that depend upon the simultaneous values of a set of input variables x = {x1, ..., xn}. At issue is how error in the estimates of these probabilities affects classification error when the estimates are used in a classification rule. These effects are seen to be somewhat counterintuitive in both their strength and nature. In particular, the bias and variance components of the estimation error combine to influence classification in a very different way than with squared error on the probabilities themselves. Certain types of (very high) bias can be canceled by low variance to produce accurate classification. This can dramatically mitigate the effect of the bias associated with some simple estimators like "naive" Bayes, and the bias induced by the curse-of-dimensionality on nearest-neighbor procedures. This helps explain why such simple methods are often competitive with, and sometimes superior to, more sophisticated ones for classification, and why "bagging/aggregating" classifiers can often improve accuracy. These results also suggest simple modifications to these procedures that can (sometimes dramatically) further improve their classification performance.
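The abstract's central point can be illustrated with a small simulation (not from the paper; the estimator centers and spreads below are invented for illustration). Under 0/1 loss, only the side of the 1/2 threshold matters: a heavily biased but low-variance probability estimate can classify perfectly, while an unbiased but high-variance estimate is frequently pushed across the decision boundary.

```python
import random

random.seed(0)

p_true = 0.9        # true P(y=1 | x); the Bayes rule predicts class 1
n_trials = 10_000   # repeated draws of each estimator

def estimate(center, spread):
    """A hypothetical probability estimator: Gaussian noise, clipped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(center, spread)))

# Estimator A: strong bias (centered at 0.6, bias = -0.3) but tiny variance.
# Estimator B: unbiased (centered at the true 0.9) but large variance.
wrong_a = sum(estimate(0.6, 0.02) < 0.5 for _ in range(n_trials))
wrong_b = sum(estimate(0.9, 0.45) < 0.5 for _ in range(n_trials))

print("biased, low-variance  error rate:", wrong_a / n_trials)
print("unbiased, high-variance error rate:", wrong_b / n_trials)
```

Estimator A is far worse under squared error on the probabilities, yet essentially never crosses 1/2, so its classification decision matches the Bayes rule almost every time; estimator B's variance flips the decision in a nontrivial fraction of trials. This is the sense in which high bias "cancels" against low variance for 0/1 loss.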