Results 1–8 of 8
Efficient pairwise classification
ECML 2007, LNCS (LNAI), 2007
Cited by 28 (10 self)
Abstract. Pairwise classification is a class binarization procedure that converts a multiclass problem into a series of two-class problems, one problem for each pair of classes. While it can be shown that for training, this procedure is more efficient than the more commonly used one-against-all approach, it still has to evaluate a quadratic number of classifiers when computing the predicted class for a given example. In this paper, we propose a method that allows a faster computation of the predicted class when weighted or unweighted voting is used for combining the predictions of the individual classifiers. While its worst-case complexity is still quadratic in the number of classes, we show that even in the case of completely random base classifiers, our method still outperforms the conventional pairwise classifier. For the more practical case of well-trained base classifiers, its asymptotic computational complexity seems to be almost linear.
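The quadratic baseline this paper improves on (evaluating every pairwise classifier and combining the outcomes by weighted voting) can be sketched as follows; the `pairwise_score` callable is a hypothetical stand-in for a trained base classifier:

```python
from itertools import combinations

def weighted_voting(classes, pairwise_score):
    """Weighted voting over all pairwise classifiers: classifier (i, j)
    contributes its confidence p that i beats j to class i, and 1 - p to
    class j. Evaluating all k*(k-1)/2 pairs is the quadratic cost the
    paper's method reduces in practice."""
    votes = {c: 0.0 for c in classes}
    for i, j in combinations(classes, 2):
        p = pairwise_score(i, j)
        votes[i] += p
        votes[j] += 1.0 - p
    return max(votes, key=votes.get)
```

For k classes this loop runs k(k-1)/2 times, which is exactly the quadratic prediction cost discussed in the abstract.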
Combining Predictions in Pairwise Classification: An Optimal Adaptive Voting Strategy and Its Relation to Weighted Voting
To appear in Pattern Recognition
, 2009
Cited by 13 (0 self)
Weighted voting is the commonly used strategy for combining predictions in pairwise classification. Even though it shows good classification performance in practice, it is often criticized for lacking a sound theoretical justification. In this paper, we study the problem of combining predictions within a formal framework of label ranking and, under some model assumptions, derive a generalized voting strategy in which predictions are properly adapted according to the strengths of the corresponding base classifiers. We call this strategy adaptive voting and show that it is optimal in the sense of yielding a MAP prediction of the class label of a test instance. Moreover, we offer a theoretical justification for weighted voting by showing that it yields a good approximation of the optimal adaptive voting prediction. This result is further corroborated by empirical evidence from experiments with real and synthetic data sets showing that, even though adaptive voting is sometimes able to achieve consistent improvements, weighted voting is in general quite competitive, all the more in cases where the aforementioned model assumptions underlying adaptive voting are not met. In this sense, weighted voting appears to be a more robust aggregation strategy.
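The idea of adapting votes to classifier strength can be illustrated with a minimal sketch. Here `prefers` and `accuracy` are hypothetical callables, and the weighting rule below is a simplified stand-in for the idea of strength-adapted voting, not the paper's exact MAP derivation:

```python
from itertools import combinations

def adaptive_voting(classes, prefers, accuracy):
    """Strength-adapted voting sketch: each pairwise classifier's vote
    is weighted by its estimated accuracy, so reliable classifiers pull
    harder than weak ones. prefers(i, j) returns True if class i is
    preferred over j; accuracy(i, j) estimates that classifier's strength."""
    votes = {c: 0.0 for c in classes}
    for i, j in combinations(classes, 2):
        a = accuracy(i, j)
        if prefers(i, j):
            votes[i] += a
            votes[j] += 1.0 - a
        else:
            votes[j] += a
            votes[i] += 1.0 - a
    return max(votes, key=votes.get)
```

With a constant `accuracy` this reduces to plain unweighted voting, which matches the abstract's observation that weighted voting approximates the adaptive strategy well.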
Maximum margin training of generative kernels
, 2004
Cited by 13 (4 self)
Generative kernels, a generalised form of Fisher kernels, are a powerful form of kernel that allows the kernel parameters to be tuned to a specific task. The standard approach to training these kernels is to use maximum likelihood estimation. This paper describes a novel approach based on maximum-margin training of both the kernel parameters and a Support Vector Machine (SVM) classifier. It combines standard SVM training with a gradient-descent-based kernel parameter optimisation scheme. This allows the kernel parameters to be explicitly trained for the data set and the SVM score-space. Initial results on an artificial task and the Deterding data show that such an approach can reduce classification error rates.
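As background, a Fisher kernel scores examples by the gradient of a generative model's log-likelihood with respect to its parameters. A toy sketch for a 1-D Gaussian (standing in for the richer generative models whose parameters the paper tunes) is:

```python
def fisher_score(x, mu, var):
    """Fisher score of a 1-D Gaussian N(mu, var): the gradient of
    log N(x | mu, var) with respect to (mu, var). A toy generative
    model standing in for the HMMs/GMMs typically used in practice."""
    d_mu = (x - mu) / var
    d_var = 0.5 * ((x - mu) ** 2 / var ** 2 - 1.0 / var)
    return (d_mu, d_var)

def fisher_kernel(x, y, mu, var):
    """Inner product of the two Fisher scores (identity information
    matrix assumed, for simplicity)."""
    sx, sy = fisher_score(x, mu, var), fisher_score(y, mu, var)
    return sum(a * b for a, b in zip(sx, sy))
```

Maximum-likelihood training would fit `mu` and `var` to the data; the paper's proposal instead adjusts such parameters by gradient descent on an SVM margin objective.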
A Classifier Ensemble of Binary Classifier Ensembles
Cited by 2 (2 self)
Abstract—This paper proposes an innovative combinational algorithm to improve performance in multiclass classification domains. Since a more accurate classifier generally yields better classification performance, researchers have long tried to improve the accuracy of individual classifiers. However, tuning a single classifier to be as accurate as possible is not always the best way to obtain the best classification quality. An alternative is to use many inaccurate or weak classifiers, each specialized for a subspace of the problem space, and to take their consensus vote as the final prediction. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning. It deals especially ...
Efficient pairwise classification and ranking
Cited by 1 (1 self)
Pairwise classification is a class binarization procedure that converts a multiclass problem into a series of two-class problems, one problem for each pair of classes. While it can be shown that for training, this procedure is more efficient than the more commonly used one-against-all approach, it still has to evaluate a quadratic number of classifiers when computing the predicted class for a given example. In this paper, we propose a method that allows a faster computation of the predicted class when weighted or unweighted voting is used for combining the predictions of the individual classifiers. While its worst-case complexity is still quadratic in the number of classes, we show that even in the case of completely random base classifiers, our method still outperforms the conventional pairwise classifier. For the more practical case of well-trained base classifiers, its asymptotic computational complexity seems to be almost linear. We also propose a method for approximating the full class ranking, based on the Swiss System, a common scheme for conducting multi-round chess tournaments. Our results indicate that this adaptive scheme offers a better trade-off between approximation quality and the number of performed comparisons than alternative, fixed schemes for ordering the evaluation of the pairwise classifiers.
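The Swiss-system idea can be sketched as follows. `beats(i, j)` is a hypothetical pairwise classifier, and this simplified version pairs neighbours in the current standings without the rematch avoidance of a real tournament:

```python
def swiss_system_ranking(classes, beats, rounds):
    """Approximate a full class ranking with a Swiss-system schedule:
    each round, sort classes by current points and pair adjacent
    entries, so only about k/2 comparisons are made per round instead
    of evaluating all k*(k-1)/2 pairwise classifiers."""
    points = {c: 0.0 for c in classes}
    for _ in range(rounds):
        standings = sorted(classes, key=lambda c: -points[c])
        for i in range(0, len(standings) - 1, 2):
            a, b = standings[i], standings[i + 1]
            if beats(a, b):
                points[a] += 1.0
            else:
                points[b] += 1.0
    return sorted(classes, key=lambda c: -points[c])
```

With a small, fixed number of rounds the total comparison count is roughly linear in k, which is the trade-off against approximation quality discussed in the abstract.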
Fuzzy Pairwise Multiclass Support Vector Machines
Abstract. At first, support vector machines (SVMs) were applied to solve binary classification problems. They can also be extended to solve multicategory problems by combining binary SVM classifiers. In this paper, we propose a new fuzzy model that includes the advantages of several previously published methods while avoiding their drawbacks. For each datum, a class is rejected using the information provided by every decision function related to it. Our proposal yields membership degrees in the unit interval and, in some cases, improves the performance of the former methods in the unclassified regions.
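A min-rule sketch of fuzzy pairwise combination, assuming a hypothetical antisymmetric `decision(i, j)` function whose positive values favour class i. This illustrates the general idea of unit-interval memberships in pairwise SVM combination, not necessarily this paper's exact model:

```python
def fuzzy_membership(classes, decision):
    """Min-rule fuzzy combination: each pairwise decision value d_ij is
    squashed into [0, 1], and class i's membership degree is the minimum
    over all pairwise classifiers involving i. Unclassified regions
    (all memberships low) remain visible instead of being forced to a
    hard label."""
    def unit(d):
        # Map a raw decision value into the unit interval, clipped.
        return min(1.0, max(0.0, 0.5 * (d + 1.0)))
    return {i: min(unit(decision(i, j)) for j in classes if j != i)
            for i in classes}
```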
Invariant Object Recognition using Circular Pairwise Convolutional Networks
Abstract. Invariant object recognition has been one of the most rewarding areas of research in computer vision, as many applications need the capability of recognizing objects of interest in various environments. However, no single technique can claim to achieve this goal in all possible conditions and domains. Out of many techniques, the convolutional network has proved to be a good candidate in this area. Given large numbers of training samples of objects under various aspects of variation such as lighting, pose, and background, a convolutional network can learn to extract invariant features by itself. This comes at the price of lengthy training time. Hence, we propose a circular pairwise classification technique to shorten the training time. We compare the recognition accuracy and training time complexity of our approach against a benchmark generic object recognizer, LeNet7, which is a monolithic convolutional network.
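One plausible reading of the circular pairwise scheme, assuming it trains one binary classifier per pair formed by a class and its successor around a ring (the `circular_pairs` helper below is hypothetical, not taken from the paper):

```python
def circular_pairs(k):
    """Circular pairwise scheme (assumed reading): instead of all
    k*(k-1)/2 class pairs, enumerate one binary task per pair
    (i, (i+1) mod k), giving only k classifiers and hence a much
    shorter total training time."""
    return [(i, (i + 1) % k) for i in range(k)]
```

Under this reading the training cost grows linearly rather than quadratically in the number of classes, which would account for the reported speed-up over a monolithic network.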
Invariant Object Recognition Using Circular Pairwise Convolutional Networks
Abstract. Invariant object recognition (IOR) has been one of the most active research areas in computer vision. However, no technique is able to achieve the best performance in all possible domains. Out of many techniques, the convolutional network (CN) has proven to be a good candidate in this area. Given large numbers of training samples of objects under various aspects of variation such as lighting, pose, and background, a convolutional network can learn to extract invariant features by itself. This comes at the price of lengthy training time. Hence, we propose a circular pairwise classification technique to shorten the training time. We compare the recognition accuracy and training time complexity of our approach against a benchmark generic object recognizer, LeNet7, which is a monolithic convolutional network.