## Estimating the Posterior Probabilities Using the K-Nearest Neighbor Rule (2004)

Citations: 2 (0 self)

### BibTeX

```bibtex
@MISC{Atiya04estimatingthe,
  author = {Amir F. Atiya},
  title  = {Estimating the Posterior Probabilities Using the K-Nearest Neighbor Rule},
  year   = {2004}
}
```

### Abstract

In many pattern classification problems an estimate of the posterior probabilities (rather than only a classification) is required. This is usually the case when some confidence measure in the classification is needed. In this paper we propose a new posterior probability estimator. The proposed estimator considers the K nearest neighbors. It attaches a weight to each neighbor that contributes in an additive fashion to the posterior probability estimate. The weights corresponding to the K nearest neighbors (which sum to 1) are estimated from the data using a maximum likelihood approach. Simulation studies confirm the effectiveness of the proposed estimator.
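The abstract's idea can be illustrated with a small sketch. This is not the paper's exact algorithm: the toy data, the variable names (`nbrs`, `mean_nll`, etc.), and the optimizer are ours. We parametrize the K weights with a softmax (so they stay positive and sum to 1) and maximize the likelihood by numerical gradient descent, whereas the paper derives a dedicated maximum-likelihood procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two overlapping Gaussian blobs in 2-D.
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)), rng.normal(1.5, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
N, K = len(X), 5

# K nearest neighbors of each training point (leave-one-out).
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)
nbrs = np.argsort(dist, axis=1)[:, :K]   # (N, K), nearest first
nbr_labels = y[nbrs]                     # class of each neighbor

def weights_from(logits):
    """Softmax keeps the K weights positive and summing to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mean_nll(logits):
    """Mean negative log-likelihood of the true classes under the estimator
    P_hat(c | x) = sum of the weights of neighbors belonging to class c."""
    w = weights_from(logits)
    p_true = (w * (nbr_labels == y[:, None])).sum(axis=1)
    return -np.log(np.clip(p_true, 1e-12, None)).mean()

# Maximize the likelihood over the K weights (simple numerical gradient).
logits, lr, eps = np.zeros(K), 0.5, 1e-5
for _ in range(300):
    grad = np.array([(mean_nll(logits + eps * e) - mean_nll(logits - eps * e))
                     / (2 * eps) for e in np.eye(K)])
    logits -= lr * grad

w = weights_from(logits)
print("learned weights:", np.round(w, 3))
print("uniform NLL:", mean_nll(np.zeros(K)), "learned NLL:", mean_nll(logits))
```

With `logits = 0` the weights are uniform and the estimator reduces to the classical Km/K rule, so the learned weights can only improve the training likelihood relative to that baseline.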

### Citations

2036 | Online learning with kernels
- Kivinen, Smola, et al.
- 2001
Citation Context: ...enough N). (It is also well-known that under certain conditions maximizing the likelihood function asymptotically leads to an efficient estimator, that is an estimator with the smallest variance, see [19], but we will not get into technical details of the asymptotic performance.) 2) If the procedure is intended for the purpose of estimating a more accurate estimate of P(correct), as detailed in the in...

1001 | A Probabilistic Theory of Pattern Recognition
- Devroye, Györfi, et al.
- 1996

Citation Context: ...ferently in the posterior probability computation. Weighted nearest-neighbor classifiers have been considered in the past (see Royall [18], Bailey and Jain [2], Toussaint’s review [20], Devroye et al [4], and Dudani [5]), but these studies focus on classification not on posterior probability estimation. Also the approach presented here is different and is based on estimating the weights by a maximum ...

671 | Statistical pattern recognition: A review
- Jain, Duin, et al.
- 2000
Citation Context: ...eave-one-out-estimator and bootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long history in the pat...

31 | Bankruptcy Prediction for Credit Risk Using Neural Networks: A Survey and New Results
- Atiya

Citation Context: ...iven pattern. An example where this fact is true is the bankruptcy prediction application, whereby one predicts whether a company will default on a loan based on the health of its financial variables [1]. A default/non-default classification of the companies is desirable but not sufficient. The posterior probabilities will allow banks to compute their expected loss on their loan portfolios. The other...

28 | Additive estimators for probabilities of correct classification
- Glick
- 1978

Citation Context: ...ator, the leave-one-out-estimator and bootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long histor...

26 | A note on distance-weighted k-nearest neighbor rules
- Bailey, Jain
- 1978

Citation Context: ...ator that weighs the different neighbors differently in the posterior probability computation. Weighted nearest-neighbor classifiers have been considered in the past (see Royall [18], Bailey and Jain [2], Toussaint’s review [20], Devroye et al [4], and Dudani [5]), but these studies focus on classification not on posterior probability estimation. Also the approach presented here is different and is b...

22 | Proximity graphs for nearest neighbor decision rules: recent progress
- Toussaint
- 2002
Citation Context: ...ferent neighbors differently in the posterior probability computation. Weighted nearest-neighbor classifiers have been considered in the past (see Royall [18], Bailey and Jain [2], Toussaint’s review [20], Devroye et al [4], and Dudani [5]), but these studies focus on classification not on posterior probability estimation. Also the approach presented here is different and is based on estimating the we...

20 | Recent advances in error rate estimation
- Hand
- 1986

Citation Context: ... the leave-one-out-estimator and bootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long history in t...

16 | Learning Pattern Classification - A Survey
- Kulkarni, Venkatesh
- 1998
Citation Context: ...ator and bootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long history in the pattern classificatio...

15 | Patterns in pattern recognition: 1968-1974
- Kanal
- 1974

Citation Context: ...s posterior probability is defined as P(Cm|x). A well-known estimate of the posterior probabilities, based on the K-nearest-neighbor classifier, is the following (see Fukunaga et al. [6], [8] and Kanal [13]). Let Km be the number of patterns among the K nearest neighbors (to point x) that belong to class Cm. Then the estimate is given by P̂(Cm|x) = Km/K. Rather than treating every pattern among the K nea...
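The classical Km/K estimate quoted in this context fits in a few lines. A minimal sketch with made-up 1-D toy data; `knn_posterior` and the arrays are our own names, not from the paper:

```python
import numpy as np

# Toy training set: 1-D features, two classes.
X_train = np.array([0.0, 0.2, 0.4, 1.0, 1.2, 1.4])
y_train = np.array([0, 0, 0, 1, 1, 1])
K = 3

def knn_posterior(x, X_train, y_train, K):
    """Classical estimate: P_hat(C_m | x) = K_m / K, where K_m counts the
    K nearest neighbors of x that belong to class C_m."""
    nearest = np.argsort(np.abs(X_train - x))[:K]
    labels = y_train[nearest]
    return np.array([(labels == m).mean() for m in np.unique(y_train)])

print(knn_posterior(0.3, X_train, y_train, K))   # all 3 neighbors are class 0
```

The output is always a proper probability vector (non-negative, summing to 1), but it only takes the K+1 values 0, 1/K, ..., 1, which is one motivation for the weighted refinement proposed in the paper.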

12 | Estimation of Classification Error
- Fukunaga, Kessell

Citation Context: ....e. the classification accuracy) of a specific classifier, one way is to count the number of training set patterns classified correctly. A much more accurate approach (proposed by Fukunaga et al [6], [7], and [8]) is the posterior probability approach, whereby one computes P(correct|x) for each pattern in the training set (using the posterior probabilities or their statistical estimates) and average ...

10 | Nonparametric Bayes Error Estimation Using Unclassified Samples
- Fukunaga, Kessell
- 1973

Citation Context: ...lassification accuracy) of a specific classifier, one way is to count the number of training set patterns classified correctly. A much more accurate approach (proposed by Fukunaga et al [6], [7], and [8]) is the posterior probability approach, whereby one computes P(correct|x) for each pattern in the training set (using the posterior probabilities or their statistical estimates) and average all these...

6 | k-nearest-neighbor Bayes risk estimation
- Fukunaga, Hostetler
- 1975

Citation Context: ...ce (i.e. the classification accuracy) of a specific classifier, one way is to count the number of training set patterns classified correctly. A much more accurate approach (proposed by Fukunaga et al [6], [7], and [8]) is the posterior probability approach, whereby one computes P(correct|x) for each pattern in the training set (using the posterior probabilities or their statistical estimates) and ave...
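The posterior-probability approach to error estimation described in these contexts averages P̂(correct|x) over the training set instead of counting misclassifications. A minimal sketch using the plain Km/K posterior estimate; the toy data and variable names are ours, and the paper's refinement would plug its weighted posterior estimate into the same average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-class data in 1-D.
n = 100
X = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(2.0, 1.0, n)])
y = np.array([0] * n + [1] * n)
K = 5

# Leave-one-out K nearest neighbors of every training point.
dist = np.abs(X[:, None] - X[None, :])
np.fill_diagonal(dist, np.inf)
nbr_labels = y[np.argsort(dist, axis=1)[:, :K]]

# P_hat(correct | x_i): the posterior of the class the K-NN rule picks,
# i.e. the larger of the two class fractions among the K neighbors.
frac_class1 = (nbr_labels == 1).mean(axis=1)
p_correct = np.maximum(frac_class1, 1.0 - frac_class1)

# The accuracy estimate is the average of these per-pattern probabilities.
print("estimated P(correct):", p_correct.mean())
```

Each pattern contributes a value in [1/2, 1] rather than a hard 0/1 outcome, which is why this estimator has lower variance than simple error counting.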

4 | Adaptive soft k-nearest-neighbour classifiers, Pattern Recognition
- Bermejo, Cabestany
- 2000

Citation Context: ...s focus on classification not on posterior probability estimation. Also the approach presented here is different and is based on estimating the weights by a maximum likelihood approach. Bermejo et al [3] propose a new classification method whereby they fit a mixture-of-Gaussians to the immediate K-nearest-neighbors. A posterior probability estimate is also obtained this way. Other, non-K-nearest-neig...

2 | The distance-weighted k-nearest-neighbor rule
- Dudani
- 1976

Citation Context: ...posterior probability computation. Weighted nearest-neighbor classifiers have been considered in the past (see Royall [18], Bailey and Jain [2], Toussaint’s review [20], Devroye et al [4], and Dudani [5]), but these studies focus on classification not on posterior probability estimation. Also the approach presented here is different and is based on estimating the weights by a maximum likelihood appro...

2 | Local estimation of posterior class probabilities to minimize classification errors
- Guerrero-Curieses, Cid-Sueiro, et al.
- 2004

Citation Context: ...est-neighbors. A posterior probability estimate is also obtained this way. Other, non-K-nearest-neighbor methods for estimating posterior probabilities have also been developed (see Guerrero-Curieses [10] and the references therein). In the developed approach we propose a different approach from those discussed. Our method produces different weights for the K-nearest neighbors for each different probl...

2 | An efficient estimator of pattern recognition system error probability, Pattern Recognition
- Kittler, PA
- 1981

Citation Context: ...ne-out-estimator and bootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long history in the pattern c...

2 | On the Posterior-Probability Estimate of the Error Rate of Nonparametric Classification Rules
- Lugosi, Pawlak
- 1994

Citation Context: ...ootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long history in the pattern classification field an...

1 | The robust estimation of classification error rates
- Kroke
- 1986

Citation Context: ...-estimator and bootstrap approaches. This is because the posterior probability estimator has a much lower variance than all these other methods (see the reviews and analyses of [9], [11], [12], [14], [15], [16], and [17]). In this paper we propose a new posterior probability estimate based on observing the K-nearest neighbors. The K-nearest-neighbor method has had a long history in the pattern classif...

1 | A class of nonparametric estimators of a smooth regression function
- Royall
- 1966

Citation Context: ...er we propose an estimator that weighs the different neighbors differently in the posterior probability computation. Weighted nearest-neighbor classifiers have been considered in the past (see Royall [18], Bailey and Jain [2], Toussaint’s review [20], Devroye et al [4], and Dudani [5]), but these studies focus on classification not on posterior probability estimation. Also the approach presented here ...