## A k-nearest neighbor classification rule based on Dempster-Shafer Theory (1995)

Venue: IEEE Transactions on Systems, Man and Cybernetics

Citations: 20 (9 self)

### BibTeX

```bibtex
@ARTICLE{Denœux95ak-nearest,
  author  = {Thierry Denœux},
  title   = {A k-nearest neighbor classification rule based on Dempster-Shafer Theory},
  journal = {IEEE Transactions on Systems, Man and Cybernetics},
  year    = {1995},
  volume  = {25},
  pages   = {804--813}
}
```

### Abstract

In this paper, the problem of classifying an unseen pattern on the basis of its nearest neighbors in a recorded data set is addressed from the point of view of Dempster-Shafer theory. Each neighbor of a sample to be classified is considered as an item of evidence that supports certain hypotheses regarding the class membership of that pattern. The degree of support is defined as a function of the distance between the two vectors. The evidence of the k nearest neighbors is then pooled by means of Dempster's rule of combination. This approach provides a global treatment of such issues as ambiguity and distance rejection, and imperfect knowledge regarding the class membership of training patterns. The effectiveness of this classification scheme as compared to the voting and distance-weighted k-NN procedures is demonstrated using several sets of simulated and real-world data.
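
The rule the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's exact formulation: each neighbor contributes a simple mass function that commits alpha * exp(-gamma * d**2) of belief to its own class and the remainder to the whole frame Θ, and the k mass functions are pooled with Dempster's rule. The parameter values `alpha` and `gamma` are hypothetical defaults, not the paper's tuned values.

```python
import numpy as np

def evidential_knn(X_train, y_train, x, k=5, alpha=0.95, gamma=1.0):
    """Sketch of an evidential k-NN classification rule: each of the k
    nearest neighbors induces a simple mass function on the frame of classes,
    and the masses are pooled with Dempster's rule of combination."""
    n_classes = int(np.max(y_train)) + 1
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]              # indices of the k nearest neighbors
    mass = np.zeros(n_classes)           # mass committed to each singleton class
    mass_theta = 1.0                     # mass left on the whole frame (ignorance)
    for i in idx:
        s = alpha * np.exp(-gamma * d[i] ** 2)   # support decays with distance
        m = np.zeros(n_classes)
        m[y_train[i]] = s                # simple support for the neighbor's class
        m_theta = 1.0 - s
        # Dempster's rule for two masses whose focal elements are singletons and Θ
        new_mass = mass * m + mass * m_theta + mass_theta * m
        new_theta = mass_theta * m_theta
        total = new_mass.sum() + new_theta       # = 1 - conflict
        mass, mass_theta = new_mass / total, new_theta / total
    return mass, mass_theta
```

The residual mass on Θ acts as an explicit "don't know" component: a pattern far from every training sample keeps most of its mass on the frame, which is what makes the distance-reject behavior discussed in the paper possible.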

### Citations

2249 citations
A mathematical theory of evidence
- Shafer
- 1976
Citation Context: ...ibed, and experimental results are presented. 2 Dempster-Shafer theory Let Θ be a finite set of mutually exclusive and exhaustive hypotheses about some problem domain, called the frame of discernment [19]. It is assumed that one's total belief induced by a body of evidence concerning Θ can be partitioned into various portions, each one assigned to a subset of Θ. A basic probability assignment (BPA) is a fu...
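
The combination rule that pools such basic probability assignments can be illustrated generically. A small sketch, assuming focal elements are represented as Python frozensets mapping to their masses:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two BPAs, each given as a dict
    mapping frozenset (focal element) -> mass. Masses of intersecting focal
    elements multiply; mass assigned to the empty intersection is conflict,
    removed by renormalization."""
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}
```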

898 citations
Nearest neighbor pattern classification
- Cover, Hart
- 1967
Citation Context: ...the voting k-nearest neighbor (k-NN) rule. According to this rule, an unclassified sample is assigned to the class represented by a majority of its k nearest neighbors in the training set. Cover and Hart [4] have provided a statistical justification of this procedure by showing that, as the number N of samples and k both tend to infinity in such a manner that k/N → 0, the error rate of the k-NN rule appr...
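
The voting rule analyzed by Cover and Hart is simple to state in code; a minimal sketch, with `train` assumed to be a list of (vector, label) pairs:

```python
from collections import Counter
import math

def voting_knn(train, x, k):
    """Plain voting k-NN: assign x to the class represented by a majority
    of its k nearest neighbors in the training set (ties broken by Counter
    ordering, which is one of several possible tie-breaking conventions)."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```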

741 citations
UCI repository of machine learning databases, http://www.ics.uci.edu/~mlearn/MLRepository.html
- Murphy, Aha
- 1992
Citation Context: ...ranging from 1 to 30. The third task investigated concerns the classification of radar returns from the ionosphere obtained by a radar system consisting of a phased array of 16 high-frequency antennas [17, 20]. The targets were free electrons in the ionosphere. Radar returns were manually classified as “good” or “bad” depending on whether or not they showed evidence of some type of structure in the ionosph...

281 citations
Discriminatory analysis, nonparametric discrimination
- Fix, Hodges
- 1951
Citation Context: ...classified samples, called the training set, and to classify each new pattern using the evidence of nearby sample observations. One such non-parametric procedure has been introduced by Fix and Hodges [11], and has since become well known in the Pattern Recognition literature as the voting k-nearest neighbor (k-NN) rule. According to this rule, an unclassified sample is assigned to the class represente...

95 citations
The distance-weighted k-nearest neighbor rule
- Dudani
- 1976
Citation Context: ...d(k) the corresponding distances arranged in increasing order, it is intuitively appealing to give the label of x(i) a greater importance than to the label of x(j) whenever d(i) < d(j). Dudani [10] has proposed to assign to the i-th nearest neighbor x(i) a weight w(i) defined as: w(i) = (d(k) − d(i)) / (d(k) − d(1)) if d(k) ≠ d(1) (1), and w(i) = 1 if d(k) = d(1) (2). The unknown pattern x is then assigned t...
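
Equations (1)-(2) quoted in this context translate directly into a small helper; a sketch, assuming the input distances are those of the k nearest neighbors:

```python
def dudani_weights(dists):
    """Dudani's distance weights for the k neighbor distances, sorted so that
    d(1) <= ... <= d(k): w(i) = (d(k) - d(i)) / (d(k) - d(1)), with w(i) = 1
    for all i in the degenerate case d(k) == d(1). The nearest neighbor gets
    weight 1, the k-th gets weight 0."""
    d = sorted(dists)
    dk, d1 = d[-1], d[0]
    if dk == d1:
        return [1.0] * len(d)
    return [(dk - di) / (dk - d1) for di in d]
```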

75 citations
On optimum recognition error and reject tradeoff
- Chow
- 1970
Citation Context: ...information has been gathered in the training set, and should therefore be rejected. Dubuisson and Masson [9] call this decision distance reject, as opposed to the ambiguity reject introduced by Chow [3], for which an implementation in a k-NN rule has been proposed by Hellman [12]. Dasarathy [5] has proposed a k-NN rule where a distance reject option is made possible by the introduction of the conc...
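
The two reject options contrasted here can be illustrated on top of a plain voting k-NN decision; the threshold parameters and string return codes below are purely illustrative, not any of the cited papers' formulations:

```python
from collections import Counter

def knn_with_rejects(neighbors, d_reject, a_reject):
    """Toy k-NN decision with both reject options. `neighbors` is a list of
    (distance, label) pairs for the k nearest neighbors.
    - distance reject: even the nearest neighbor is farther than d_reject,
      so the pattern likely belongs to a class absent from the training set;
    - ambiguity reject (Chow-style): the majority share falls below a_reject,
      so no class is supported clearly enough to decide."""
    if min(d for d, _ in neighbors) > d_reject:
        return "distance-reject"
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    if count / len(neighbors) < a_reject:
        return "ambiguity-reject"
    return label
```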

64 citations
Dynamic error propagation networks
- Robinson
- 1989
Citation Context: ...are presented in Table 1 and Figures 5 to 8. The second data set is composed of real-world data obtained by recording examples of the eleven steady-state vowels of English spoken by fifteen speakers [8, 18]. Words containing each of these vowels were uttered once by the fifteen speakers. Four male and four female speakers were used to build a training set, and the other four male and three female speaker...

55 citations
Classification of Radar Returns from the Ionosphere Using Neural Networks
- Sigillito, Wing, et al.
- 1989
Citation Context: ...ranging from 1 to 30. The third task investigated concerns the classification of radar returns from the ionosphere obtained by a radar system consisting of a phased array of 16 high-frequency antennas [17, 20]. The targets were free electrons in the ionosphere. Radar returns were manually classified as “good” or “bad” depending on whether or not they showed evidence of some type of structure in the ionosph...

52 citations
Approximations for efficient computation in the theory of evidence
- Tessem
- 1993
Citation Context: ...s,i are simple support functions as considered in Section 3.1, the increase is exponential in the worst general case. However, more computationally efficient approximation methods such as those proposed in [21] could be used for very large numbers of classes. 4 Experiments The approach described in this paper has been successfully tested on several classification problems. Before presenting the results of ...

37 citations
Nearest Neighbor Norms: NN Pattern Classification Techniques
- Dasarathy
- 1991
Citation Context: ...error rates than those obtained using the voting k-NN procedure for at least one particular data set. However, several researchers, repeating Dudani’s experiments, reached less optimistic conclusions [1, 16, 6]. In particular, Bailey and Jain [1] showed that the distance-weighted k-NN rule is not necessarily better than the majority rule for small sample size if ties are broken in a judicious manner. These...

33 citations
A statistical decision rule with incomplete knowledge about classes
- Dubuisson, Masson
- 1993
Citation Context: ...all the classes cannot be assumed to be represented in the training set, as is often the case in some application areas such as target recognition in noncooperative environments [5] or diagnostic problems [9]. In such situations, it may be wise to consider that a point that is far away from any previously observed pattern most probably belongs to an unknown class for which no information has been gathered...

26 citations
A note on distance-weighted k-Nearest Neighbor rules
- Bailey, Jain
- 1978
Citation Context: ...error rates than those obtained using the voting k-NN procedure for at least one particular data set. However, several researchers, repeating Dudani’s experiments, reached less optimistic conclusions [1, 16, 6]. In particular, Bailey and Jain [1] showed that the distance-weighted k-NN rule is not necessarily better than the majority rule for small sample size if ties are broken in a judicious manner. These...

26 citations
Nosing around the neighborhood: A new system structure and classification rule for recognition in partially exposed environments
- Dasarathy
- 1980
Citation Context: ...ts a serious drawback when all the classes cannot be assumed to be represented in the training set, as is often the case in some application areas such as target recognition in noncooperative environments [5] or diagnostic problems [9]. In such situations, it may be wise to consider that a point that is far away from any previously observed pattern most probably belongs to an unknown class for which no in...

25 citations
Speaker Normalisation for Automatic Speech Recognition
- Deterding
- 1989
Citation Context: ...are presented in Table 1 and Figures 5 to 8. The second data set is composed of real-world data obtained by recording examples of the eleven steady-state vowels of English spoken by fifteen speakers [8, 18]. Words containing each of these vowels were uttered once by the fifteen speakers. Four male and four female speakers were used to build a training set, and the other four male and three female speaker...

25 citations
The nearest neighbor classification rule with a reject option
- Hellman
- 1970
Citation Context: ...cted. Dubuisson and Masson [9] call this decision distance reject, as opposed to the ambiguity reject introduced by Chow [3], for which an implementation in a k-NN rule has been proposed by Hellman [12]. Dasarathy [5] has proposed a k-NN rule where a distance reject option is made possible by the introduction of the concept of an acceptable neighbor, defined as a neighbor whose distance to the patte...

17 citations
A re-examination of the distance-weighted k-nearest neighbor classification rule
- MacLeod, Luk, et al.
- 1987
Citation Context: ...nner. These authors also showed that, in the infinite sample case (N → ∞), the error rate of the traditional unweighted k-NN rule is better than that of any weighted k-NN rule. However, MacLeod et al. [15] presented arguments showing that this conclusion may not apply if the training set is finite. They also proposed a simple extension of Dudani’s rule allowing for a more effective use of the kth neigh...

14 citations
A reappraisal of distance-weighted k-nearest neighbor classification for pattern recognition with missing data
- Morin, Raeside
- 1981
Citation Context: ...error rates than those obtained using the voting k-NN procedure for at least one particular data set. However, several researchers, repeating Dudani’s experiments, reached less optimistic conclusions [1, 16, 6]. In particular, Bailey and Jain [1] showed that the distance-weighted k-NN rule is not necessarily better than the majority rule for small sample size if ties are broken in a judicious manner. These...

11 citations
Decision making with imprecise probabilities: Dempster-Shafer theory and applications. Water Resources Research 28
- Caselton, Luo
- 1992
Citation Context: ...if Bel1 is vacuous, then Bel1 ⊕ Bel2 = Bel2; if Bel1 is Bayesian, and if Bel1 ⊕ Bel2 exists, then it is also Bayesian. The D-S formalism must also be considered in the perspective of decision analysis [2]. As explained above, under D-S theory, a body of evidence about some set of hypotheses Θ does not in general provide a unique probability distribution, but only a set of compatible probabilities bounded...
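
The belief/plausibility bounds mentioned at the end of this context are straightforward to compute from a BPA; a sketch, with focal elements represented as frozensets:

```python
def belief_plausibility(m, A):
    """Belief and plausibility of a set A under a BPA m (dict mapping
    frozenset -> mass): Bel(A) sums the masses of focal elements contained
    in A, Pl(A) sums the masses of focal elements intersecting A. Every
    probability measure compatible with m satisfies Bel(A) <= P(A) <= Pl(A)."""
    bel = sum(v for B, v in m.items() if B <= A)
    pl = sum(v for B, v in m.items() if B & A)
    return bel, pl
```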

6 citations
A learning scheme for a fuzzy k-NN rule
- Jozwik
- 1983
Citation Context: ...presentative of the clusters [14]. Fuzzy set theory offers a convenient formalism for handling imprecision and uncertainty in a decision process, and several fuzzy k-NN procedures have been proposed [13, 14]. In this approach, the degree of membership of a training vector x to each of M classes is specified by a number ui with the following properties: ui ∈ [0, 1] (3), and the sum of ui over i = 1, ..., M equals 1 (4). The membership co...

6 citations
A fuzzy k-nearest neighbor algorithm
- Keller, Gray, et al.
- 1985
Citation Context: ...ome degree of “typicality” depending on their distance to class centers, and that atypical vectors should be given less weight in the decision than those that are truly representative of the clusters [14]. Fuzzy set theory offers a convenient formalism for handling imprecision and uncertainty in a decision process, and several fuzzy k-NN procedures have been proposed [13, 14]. In this approach, the d...
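
The fuzzy k-NN idea sketched in these two contexts can be illustrated as follows. This is a generic inverse-distance-weighted membership update in the spirit of fuzzy k-NN procedures, not any cited paper's exact formulation; the fuzzifier `m` and the numerical floor on distances are illustrative choices.

```python
import numpy as np

def fuzzy_knn_memberships(u_train, dists, m=2.0):
    """Class memberships of an unseen pattern as a distance-weighted average
    of its k neighbors' membership vectors. Each row of u_train is one
    neighbor's membership vector (entries in [0, 1], summing to 1, as in the
    properties (3)-(4) quoted above); dists are the neighbor distances;
    weights are 1/d^(2/(m-1)) with fuzzifier m > 1."""
    d = np.asarray(dists, dtype=float)
    w = 1.0 / np.maximum(d, 1e-12) ** (2.0 / (m - 1.0))  # floor avoids /0
    u = (w[:, None] * np.asarray(u_train, dtype=float)).sum(axis=0) / w.sum()
    return u  # again lies in [0, 1] per class and sums to 1
```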