Results 11 - 20 of 69
Graded Multilabel Classification: The Ordinal Case
"... We propose a generalization of multilabel classification that we refer to as graded multilabel classification. The key idea is that, instead of requesting a yes-no answer to the question of class membership or, say, relevance of a class label for an instance, we allow for a graded membership of an i ..."
Abstract - Cited by 11 (2 self)
We propose a generalization of multilabel classification that we refer to as graded multilabel classification. The key idea is that, instead of requesting a yes-no answer to the question of class membership or, say, relevance of a class label for an instance, we allow for a graded membership of an instance, measured on an ordinal scale of membership degrees. This extension is motivated by practical applications in which a graded or partial class membership is natural. Apart from introducing the basic setting, we propose two general strategies for reducing graded multilabel problems to conventional (multilabel) classification problems. Moreover, we address the question of how to extend performance metrics commonly used in multilabel classification to the graded setting, and present first experimental results.
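
The abstract does not spell out the two reduction strategies; as one plausible illustration (in Python, with the thresholding scheme, function names, and base learner all assumed rather than taken from the paper), a graded problem can be reduced to one conventional multilabel problem per membership level:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

def fit_horizontal(X, Y_graded, num_grades):
    """Y_graded[i, j] in {0, ..., num_grades - 1} is the membership degree of
    label j for instance i. One conventional multilabel problem is created
    per threshold k: does the degree reach level k or not? (Assumes every
    thresholded label still has both classes present in the training data.)"""
    Y = np.asarray(Y_graded)
    models = []
    for k in range(1, num_grades):
        clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
        models.append(clf.fit(X, (Y >= k).astype(int)))
    return models

def predict_horizontal(models, X):
    # Summing the per-threshold binary predictions recovers an ordinal degree;
    # monotonicity across thresholds is assumed rather than enforced here.
    return sum(m.predict(X) for m in models)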
Creative Learning in School with LEGO Programmable Robotics Products
- In Proceedings of Frontiers in Education '99, IEEE CS, 1999
"... graph structure for multi-label image classification via clique generation ..."
Abstract - Cited by 9 (0 self)
Graph structure for multi-label image classification via clique generation
Multi-Label Classification with Label Constraints
"... Abstract. We extend the multi-label classification setting with constraints on labels. This leads to two new machine learning tasks: First, the label constraints must be properly integrated into the classification process to improve its performance and second, we can try to automatically derive usef ..."
Abstract - Cited by 8 (1 self)
We extend the multi-label classification setting with constraints on labels. This leads to two new machine learning tasks: first, the label constraints must be properly integrated into the classification process to improve its performance, and second, we can try to automatically derive useful constraints from data. In this paper, we experiment with two constraint-based correction approaches as a post-processing step within the ranking by pairwise comparison (RPC) framework. In addition, association rule learning is considered for the task of learning label constraints. We report on the current status of our work, together with evaluations on synthetic datasets and two real-world datasets.
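
As a hypothetical illustration of constraint-based correction applied as a post-processing step (the constraint format and the repair rule below are assumptions, not the paper's RPC-based procedure):

def apply_implications(predicted, constraints):
    """predicted: set of label indices; constraints: iterable of (a, b)
    pairs meaning 'label a implies label b'. Repeatedly adds missing
    consequents until the label set satisfies all constraints."""
    labels = set(predicted)
    changed = True
    while changed:
        changed = False
        for a, b in constraints:
            if a in labels and b not in labels:
                labels.add(b)
                changed = True
    return labels

print(apply_implications({0, 2}, [(0, 1), (1, 3)]))  # -> {0, 1, 2, 3}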
LIFT: Multi-Label Learning with Label-Specific Features
- In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI)
"... Multi-label learning deals with the problem where each training example is represented by a single instance while associated with asetofclass labels. For an unseen example, existing approaches choose to determine the membership of each possible class labeltoitbasedonidentical feature set, i.e. the v ..."
Abstract - Cited by 7 (0 self)
Multi-label learning deals with the problem where each training example is represented by a single instance while associated with a set of class labels. For an unseen example, existing approaches choose to determine the membership of each possible class label based on an identical feature set, i.e. the very instance representation of the unseen example is employed in the discrimination processes of all labels. However, this commonly-used strategy might be suboptimal, as different class labels usually carry specific characteristics of their own, and it could be beneficial to exploit different feature sets for the discrimination of different labels. Based on the above reflection, we propose a new strategy for multi-label learning by leveraging label-specific features, where a simple yet effective algorithm named LIFT is presented. Briefly, LIFT constructs features specific to each label by conducting clustering analysis on its positive and negative instances, and then performs training and testing by querying the clustering results. Extensive experiments across sixteen diversified data sets clearly validate the superiority of LIFT against other well-established multi-label learning algorithms.
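
A minimal sketch of the clustering idea the abstract describes, for a single label; the cluster counts and the base learner are chosen arbitrarily for illustration, and the real LIFT algorithm sets cluster numbers differently:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def lift_fit_one_label(X, y, n_clusters=5):
    """Cluster the positive and negative instances of one label, then use
    distances to all cluster centers as that label's specific features."""
    pos, neg = X[y == 1], X[y == 0]
    km_pos = KMeans(n_clusters=min(n_clusters, len(pos)), n_init=10).fit(pos)
    km_neg = KMeans(n_clusters=min(n_clusters, len(neg)), n_init=10).fit(neg)
    centers = np.vstack([km_pos.cluster_centers_, km_neg.cluster_centers_])

    def transform(Z):
        # label-specific features: distance of each instance to each center
        return np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)

    clf = LinearSVC().fit(transform(X), y)
    return clf, transform

# A full multi-label model repeats this per label and thresholds each
# classifier's decision_function to obtain the predicted label set.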
Multi-Label Learning with PRO Loss
"... Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desired to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however ..."
Abstract - Cited by 6 (4 self)
Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desired to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however, cannot be met because most existing methods were designed to optimize existing criteria, yet there is no criterion which encodes the aforementioned requirement. In this paper, we present a new criterion, PRO LOSS, concerning the prediction on all labels as well as the rankings of only relevant labels. We then propose ProSVM, which optimizes PRO LOSS efficiently using the alternating direction method of multipliers. We further improve its efficiency with an upper approximation that reduces the number of constraints from O(T²) to O(T), where T is the number of labels. Experiments show that our proposals are not only superior on PRO LOSS, but also highly competitive on existing evaluation criteria.
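
The exact definition of PRO LOSS is not given in the abstract; the following toy loss merely illustrates the stated spirit of penalizing label misclassification together with mis-ordered pairs among the relevant labels only:

def pro_style_loss(scores, relevant_order, irrelevant):
    """scores: real-valued prediction per label; relevant_order: relevant
    labels listed from most to least preferred; irrelevant: the rest."""
    # classification term: relevant labels should score above the threshold
    # of 0, irrelevant labels below it
    cls = sum(scores[l] <= 0 for l in relevant_order) \
        + sum(scores[l] > 0 for l in irrelevant)
    # ranking term: inverted pairs among the relevant labels only
    inv = sum(scores[a] <= scores[b]
              for i, a in enumerate(relevant_order)
              for b in relevant_order[i + 1:])
    return cls + inv

# -> 1: labels 1 and 2 are correctly predicted relevant but mis-ordered
print(pro_style_loss([2.0, 0.5, 1.5, -1.0], relevant_order=[0, 1, 2], irrelevant=[3]))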
A literature survey on algorithms for multi-label learning
, 2010
"... Multi-label Learning is a form of supervised learning where the classification algorithm is required to learn from a set of instances, each instance can belong to multiple classes and so after be able to predict a set of class labels for a new instance. This is a generalized version of most popular ..."
Abstract - Cited by 5 (0 self)
Multi-label learning is a form of supervised learning where the classification algorithm is required to learn from a set of instances, each of which can belong to multiple classes, and thereafter be able to predict a set of class labels for a new instance. This is a generalized version of the popular multi-class problem, where each instance is restricted to having only one class label. There exists a wide range of applications for multi-label prediction, such as text categorization, semantic image labeling, and gene functionality classification, and the scope and interest are increasing with modern applications. This survey paper introduces the task of multi-label prediction (classification), presents the sparse literature in this area in an organized manner, discusses different evaluation metrics, and performs a comparative analysis of the existing algorithms. The paper also relates multi-label problems to similar but different problems that are often reduced to multi-label problems in order to gain access to the wide range of multi-label algorithms.
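
Two of the evaluation metrics such surveys typically cover can be stated in a few lines; this sketch shows Hamming loss (the per-label error rate) and subset accuracy (the exact-match rate), with Y_true and Y_pred as 0/1 matrices:

import numpy as np

def hamming_loss(Y_true, Y_pred):
    # fraction of individual label predictions that are wrong
    return np.mean(Y_true != Y_pred)

def subset_accuracy(Y_true, Y_pred):
    # fraction of instances whose entire label set is predicted exactly
    return np.mean(np.all(Y_true == Y_pred, axis=1))

Y_true = np.array([[1, 0, 1], [0, 1, 0]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0]])
print(hamming_loss(Y_true, Y_pred))     # 1 wrong cell out of 6 -> ~0.167
print(subset_accuracy(Y_true, Y_pred))  # 1 exact match out of 2 -> 0.5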
Large-scale multi-label text classification: revisiting neural networks. arXiv preprint arXiv:1312.5419
, 2013
"... Abstract. Neural networks have recently been proposed for multi-label classi-fication because they are able to capture and model label dependencies in the output layer. In this work, we investigate limitations of BP-MLL, a neural net-work (NN) architecture that aims at minimizing pairwise ranking er ..."
Abstract - Cited by 5 (0 self)
Neural networks have recently been proposed for multi-label classification because they are able to capture and model label dependencies in the output layer. In this work, we investigate limitations of BP-MLL, a neural network (NN) architecture that aims at minimizing pairwise ranking error. Instead, we propose to use a comparably simple NN approach with recently proposed learning techniques for large-scale multi-label text classification tasks. In particular, we show that BP-MLL's ranking loss minimization can be efficiently and effectively replaced with the commonly used cross entropy error function, and demonstrate that several advances in neural network training that have been developed in the realm of deep learning can be effectively employed in this setting. Our experimental results show that simple NN models equipped with advanced techniques such as rectified linear units, dropout, and AdaGrad perform as well as or even outperform state-of-the-art approaches on six large-scale textual datasets with diverse characteristics.
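
A minimal PyTorch sketch of the kind of model the abstract argues for: a plain feed-forward network with rectified linear units and dropout, one logit per label trained with cross entropy instead of BP-MLL's pairwise ranking loss, and AdaGrad. The layer sizes and the dummy batch are illustrative assumptions:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(5000, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, 103),              # one logit per label
)
loss_fn = nn.BCEWithLogitsLoss()       # cross entropy over independent labels
opt = torch.optim.Adagrad(model.parameters(), lr=0.01)

x = torch.randn(32, 5000)              # dummy batch of document features
y = torch.randint(0, 2, (32, 103)).float()
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()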
A Composite Likelihood View for Multi-Label Classification
"... Given limited training samples, learning to classify multiple labels is challenging. Problem decomposition [24] is widely used in this case, where the original problem is decomposed into a set of easier-to-learn subproblems, and predictions from subproblems are combined to make the final decision. I ..."
Abstract - Cited by 4 (1 self)
Given limited training samples, learning to classify multiple labels is challenging. Problem decomposition [24] is widely used in this case, where the original problem is decomposed into a set of easier-to-learn subproblems, and predictions from subproblems are combined to make the final decision. In this paper we show the connection between composite likelihoods [17] and many multi-label decomposition methods, e.g., one-vs-all, one-vs-one, calibrated label ranking, and probabilistic classifier chains. This connection holds promise for improving problem decomposition in both the choice of subproblems and the combination of subproblem decisions. As an attempt to exploit this connection, we design a composite marginal method that improves pairwise decomposition. Pairwise label comparisons, which seem to be a natural choice for subproblems, are replaced by bivariate label densities, which are more informative and natural components in a composite likelihood. For combining subproblem decisions, we propose a new mean-field approximation that minimizes the notion of composite divergence and is potentially more robust to inaccurate estimations in subproblems. Empirical studies on five data sets show that, given limited training samples, the proposed method outperforms many alternatives.
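
A hypothetical sketch of the bivariate-decomposition idea: one four-class model per label pair over the joint values (0,0), (0,1), (1,0), (1,1), with the estimated bivariate probabilities averaged into per-label marginals. The paper's composite-likelihood and mean-field combination is more sophisticated than this simple averaging:

import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def fit_bivariate(X, Y):
    models = {}
    for i, j in combinations(range(Y.shape[1]), 2):
        joint = 2 * Y[:, i] + Y[:, j]          # encode the pair as 4 classes
        models[(i, j)] = LogisticRegression(max_iter=1000).fit(X, joint)
    return models

def predict_marginals(models, X, num_labels):
    marg = np.zeros((X.shape[0], num_labels))
    cnt = np.zeros(num_labels)
    for (i, j), m in models.items():
        p = np.zeros((X.shape[0], 4))
        p[:, m.classes_] = m.predict_proba(X)  # align to classes seen in training
        marg[:, i] += p[:, 2] + p[:, 3]        # P(y_i = 1)
        marg[:, j] += p[:, 1] + p[:, 3]        # P(y_j = 1)
        cnt[i] += 1; cnt[j] += 1
    return marg / cnt                          # average over the pairs touching each label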
HC-Search for multi-label prediction: An empirical study
- In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2014
"... Abstract Multi-label learning concerns learning multiple, overlapping, and correlated classes. In this paper, we adapt a recent structured prediction framework called HCSearch for multi-label prediction problems. One of the main advantages of this framework is that its training is sensitive to the ..."
Abstract - Cited by 4 (2 self)
Multi-label learning concerns learning multiple, overlapping, and correlated classes. In this paper, we adapt a recent structured prediction framework called HC-Search for multi-label prediction problems. One of the main advantages of this framework is that its training is sensitive to the loss function, unlike other multi-label approaches that either assume a specific loss function or require a manual adaptation to each loss function. We empirically evaluate our instantiation of the HC-Search framework along with many existing multi-label learning algorithms on a variety of benchmarks by employing diverse task loss functions. Our results demonstrate that the performance of existing algorithms tends to be very similar in most cases, and that the HC-Search approach is comparable to and often better than all the other algorithms across different loss functions.
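
A toy sketch of the search-based prediction pattern behind HC-Search: a heuristic proposes candidate label vectors (here, single flips of a thresholded base prediction) and a cost function chosen to track the task loss picks the winner. Real HC-Search learns both components from data; the cost used below is a stand-in assumption:

import numpy as np

def search_predict(scores, cost_fn, beam=3):
    base = (scores > 0).astype(int)
    candidates = [base]
    for j in np.argsort(np.abs(scores))[:beam]:  # flip the least-confident labels
        c = base.copy()
        c[j] = 1 - c[j]
        candidates.append(c)
    return min(candidates, key=cost_fn)          # cost function selects the output

# Example with a hypothetical cost that tracks a Hamming-style objective
# against per-label probabilities:
scores = np.array([1.2, -0.1, 0.3, -2.0])
probs = 1 / (1 + np.exp(-scores))
cost = lambda y: np.sum(np.abs(probs - y))
print(search_predict(scores, cost))              # -> [1 0 1 0]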