Results 11–20 of 102
Beyond traditional kernels: Classification in two dissimilarity-based representation spaces
 IEEE Trans. Syst., Man, Cybern., Part C: Appl. Rev.
, 2008
Cited by 14 (1 self)
Abstract—Proximity captures the degree of similarity between examples and is thereby fundamental in learning. Learning from pairwise proximity data usually relies on either kernel methods for specifically designed kernels or the nearest neighbor (NN) rule. Kernel methods are powerful, but often cannot handle arbitrary proximities without necessary corrections. The NN rule can work well in such cases, but suffers from local decisions. The aim of this paper is to provide an indispensable explanation and insights about two simple yet powerful alternatives when neither conventional kernel methods nor the NN rule can perform best. These strategies use two proximity-based representation spaces (RSs) in which accurate classifiers are trained on all training objects and demand comparisons to a small set of prototypes. They can handle all meaningful dissimilarity measures, including non-Euclidean and non-metric ones. Practical examples illustrate that these RSs can be highly advantageous in supervised learning. Simple classifiers built there tend to outperform the NN rule. Moreover, computational complexity may be controlled. Consequently, these approaches offer an appealing alternative to learn from proximity data for which kernel methods cannot directly be applied, are too costly or impractical, while the NN rule leads to noisy results. Index Terms—Classifier design and evaluation, indefinite kernels, similarity measures, statistical learning.
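The dissimilarity-representation idea summarized above can be sketched in a few lines: each object is described by its vector of dissimilarities to a small prototype set, and an ordinary classifier is trained in that space. This is only an illustrative sketch, not the paper's implementation; the synthetic data, the prototype choice, and the least-squares linear classifier are all assumptions made here for demonstration.

```python
import numpy as np

def dissimilarity_space(X, prototypes, measure):
    """Map each row of X to its vector of dissimilarities to the prototypes."""
    return np.array([[measure(x, p) for p in prototypes] for x in X])

# A non-Euclidean measure (here L1) is handled exactly like a Euclidean one.
l1 = lambda a, b: float(np.abs(a - b).sum())

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(30, 2))   # class 0 samples
X1 = rng.normal(3.0, 1.0, size=(30, 2))   # class 1 samples
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

prototypes = X[[0, 15, 30, 45]]           # small, arbitrary prototype set
D = dissimilarity_space(X, prototypes, l1)

# Stand-in classifier: least-squares linear discriminant in the RS.
A = np.hstack([D, np.ones((len(D), 1))])
w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
pred = (A @ w > 0).astype(int)
accuracy = float((pred == y).mean())
```

Note that the classifier never sees the original features, only distances to the four prototypes, which is what keeps the test-time cost proportional to the prototype set size.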
Large margin nearest neighbor classifiers
 IEEE Transactions on Neural Networks
, 2005
Cited by 13 (0 self)
Abstract—The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data. Index Terms—Feature relevance, margin, nearest neighbor classification, support vector machines (SVMs).
IKNN: Informative K-nearest neighbor pattern classification
 in: 11th European Conference on Principles and Practice of Knowledge Discovery in Databases
, 2007
Cited by 12 (1 self)
Abstract. The K-nearest neighbor (KNN) decision rule has been a ubiquitous classification tool with good scalability. Past experience has shown that the optimal choice of K depends upon the data, making it laborious to tune the parameter for different applications. We introduce a new metric that measures the informativeness of objects to be classified. When applied as a query-based distance metric to measure the closeness between objects, two novel KNN procedures, Locally Informative-KNN (LI-KNN) and Globally Informative-KNN (GI-KNN), are proposed. By selecting a subset of the most informative objects from neighborhoods, our methods exhibit stability to the change of input parameters, number of neighbors (K) and informative points (I). Experiments on UCI benchmark data and diverse real-world data sets indicate that our approaches are application-independent and can generally outperform several popular KNN extensions, as well as SVM and Boosting methods.
Nonparametric scene parsing with adaptive feature relevance and semantic context
 in CVPR
, 2013
Boosting nearest neighbor classifiers for multiclass recognition
 in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops)
, 2005
Principal Component Analysis Based Feature Extraction, Morphological Edge Detection and Localization for Fast Iris Recognition
 Journal of Computer Science
Cited by 10 (0 self)
Abstract: This study involves iris localization based on morphology (set theory), which performs well in shape detection. Principal Component Analysis (PCA) is used for preprocessing, in which redundant and unwanted data are removed. Median filtering and adaptive thresholding are used to handle variations in lighting and noise. Features are extracted using the Wavelet Packet Transform (WPT). Finally, matching is performed using KNN. The proposed method outperforms the previous method, as shown by the results for different parameters. The proposed algorithm was tested using the CASIA iris database (V1.0 and V3.0).
A New Method of Learning Weighted Similarity Function to Improve Predictions of Nearest
, 2008
Cited by 9 (0 self)
classifier is highly dependent on the distance (or similarity) function used to find the NN of an input test pattern. In order to optimize the accuracy of the NN rule, a weighted similarity function is proposed. In this scheme, a weight is assigned to each training instance. The weights of training instances are used in the generalization phase to find the NN of an input test pattern. To specify the weights of training instances, we propose a learning algorithm that attempts to minimize the leave-one-out (LV1) error rate of the classifier on the training data. The proposed approach is assessed using a number of data sets from the UCI corpora. Simulation results show that the proposed method improves the generalization accuracy of the basic NN, and the results are comparable to or better than other methods proposed in the past to learn the distance function. Index Terms—nearest neighbor, weighted metrics, adaptive distance measure.
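The core mechanism described above, scaling each training instance's distance by a per-instance weight before taking the nearest neighbor, can be sketched as follows. The weights here are hand-picked for illustration; the paper learns them by minimizing leave-one-out error, which is not reproduced in this sketch.

```python
import numpy as np

def weighted_nn(X_train, y_train, weights, query):
    """1-NN where instance i's distance is scaled by weights[i]."""
    dist = np.linalg.norm(X_train - query, axis=1) * weights
    return int(y_train[np.argmin(dist)])

X = np.array([[0.0, 0.0],    # class 0
              [1.0, 0.0],    # class 1
              [0.9, 0.1]])   # class 1
y = np.array([0, 1, 1])
q = np.array([0.55, 0.0])

# Uniform weights reduce to the ordinary 1-NN rule.
plain = weighted_nn(X, y, np.ones(3), q)
# Up-weighting (penalizing) the class-1 instances flips the decision:
reweighted = weighted_nn(X, y, np.array([1.0, 2.0, 2.0]), q)
```

A larger weight effectively pushes an instance away from all queries, so a noisy or mislabeled training point can be neutralized without deleting it.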
An adaptive nearest neighbor classification algorithm for data streams
 In PKDD
, 2005
Cited by 9 (1 self)
Abstract. In this paper, we propose an incremental classification algorithm which uses a multiresolution data representation to find adaptive nearest neighbors of a test point. The algorithm achieves excellent performance by using small classifier ensembles where approximation error bounds are guaranteed for each ensemble size. The very low update cost of our incremental classifier makes it highly suitable for data stream applications. Tests performed on both synthetic and real-life data indicate that our new classifier outperforms existing algorithms for data streams in terms of accuracy and computational costs.
Class Conditional Nearest Neighbor and Large Margin Instance Selection
, 2010
Cited by 8 (0 self)
The one nearest neighbor (1-NN) rule uses instance proximity followed by class labeling information for classifying new instances. This paper presents a framework for studying properties of the training set related to proximity and labeling information, in order to improve the performance of the 1-NN rule. To this aim, a so-called class conditional nearest neighbor (c.c.n.n.) relation is introduced, consisting of those pairs of training instances (a, b) such that b is the nearest neighbor of a among the instances (excluding a) in one of the classes of the training set. A graph-based representation of c.c.n.n. is used for a comparative analysis of c.c.n.n. and of other interesting proximity-based concepts. In particular, a scoring function on instances is introduced, which measures the effect of removing one instance on the hypothesis-margin of other instances. This scoring function is employed to develop an effective large margin instance selection algorithm, which is empirically demonstrated to improve the storage and accuracy performance of the 1-NN rule on artificial and real-life data sets.
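The c.c.n.n. relation itself is straightforward to compute directly from its definition: for every instance a and every class c, record the nearest neighbor of a among the instances of c (excluding a itself). The brute-force sketch below is illustrative only and ignores the paper's graph representation and scoring function.

```python
import numpy as np

def ccnn_pairs(X, y):
    """All pairs (a, b): b is a's nearest neighbor within one class, a excluded."""
    pairs = set()
    for a in range(len(X)):
        for c in np.unique(y):
            idx = [i for i in range(len(X)) if y[i] == c and i != a]
            dists = [np.linalg.norm(X[a] - X[i]) for i in idx]
            pairs.add((a, idx[int(np.argmin(dists))]))
    return pairs

# Tiny 1-D example: two instances per class, classes around 0 and 1.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
pairs = ccnn_pairs(X, y)
```

With k classes, every instance contributes exactly k pairs (one per class), so here the relation has 4 × 2 = 8 pairs; for instance, (0, 1) is instance 0's within-class link and (0, 2) its link into class 1.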
Adaptive local linear regression with application to printer color management
 IEEE Trans. on Image Processing
Cited by 8 (4 self)
Abstract—Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples (neighbors). Usually, the number of neighbors used in estimation is fixed to a global "optimal" value, chosen by cross validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without the need for cross validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in
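In one dimension the "enclosing" condition reduces to the neighbors bracketing the query, which gives a compact way to illustrate the adaptive-k idea. This toy sketch is a 1-D analogue only, assumed for illustration; the paper's enclosing k-NN works with convex hulls in higher dimensions.

```python
import numpy as np

def enclosing_knn(x_train, q):
    """Grow k until the chosen neighbors bracket the query (1-D analogue
    of the convex hull containing the test point)."""
    order = np.argsort(np.abs(x_train - q))     # neighbors by distance to q
    for k in range(2, len(x_train) + 1):
        nb = x_train[order[:k]]
        if nb.min() <= q <= nb.max():           # q lies inside the neighbor span
            return order[:k]
    return order  # query outside the data range: fall back to all points

x = np.array([0.0, 1.0, 2.0, 5.0])
idx = enclosing_knn(x, 1.8)                     # stops at k = 2: {1.0, 2.0} bracket 1.8
```

Because k stops growing as soon as the query is enclosed, dense regions get small neighborhoods and sparse regions large ones, with no cross validation over k.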