Results 1–10 of 53
Mining Multi-label Data
In Data Mining and Knowledge Discovery Handbook, 2010
Cited by 46 (4 self)
Abstract:
A large body of research in supervised learning deals with the analysis of single-label data, where training examples are associated with a single label λ from a set of disjoint labels L. However, training examples in several application domains are often associated with a set of labels Y ⊆ L. Such data are called multi-label.
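The definition lends itself to a compact encoding. Below is a minimal sketch (the label set and instances are hypothetical, chosen only for illustration) representing each label set Y ⊆ L as a 0/1 indicator vector over L:

```python
# Hypothetical label set L and two multi-label instances for illustration.
label_set = ["sports", "politics", "science"]

dataset = [
    ("doping scandal vote", {"sports", "politics"}),  # Y = {sports, politics}
    ("collider results", {"science"}),                # Y = {science}
]

def to_indicator(y, labels):
    """Encode a label set Y as a 0/1 indicator vector over the full label set L."""
    return [1 if lab in y else 0 for lab in labels]

rows = [to_indicator(y, label_set) for _, y in dataset]
print(rows)  # [[1, 1, 0], [0, 0, 1]]
```

The single-label setting is the special case where every row of this matrix sums to exactly one.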
Budgeted Social Choice: From Consensus to Personalized Decision Making
In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 2011
Cited by 12 (5 self)
Abstract:
We develop a general framework for social choice problems in which a limited number of alternatives can be recommended to an agent population. In our budgeted social choice model, this limit is determined by a budget, capturing problems that arise naturally in a variety of contexts and spanning the continuum from pure consensus decision making (i.e., standard social choice) to fully personalized recommendation. Our approach applies a form of segmentation to social choice problems, requiring the selection of diverse options tailored to different agent types, and generalizes certain multi-winner election schemes. We show that standard rank aggregation methods perform poorly and that optimization in our model is NP-complete; but we develop fast greedy algorithms with some theoretical guarantees. Experiments on real-world datasets demonstrate the effectiveness of our algorithms.
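The greedy flavor of such algorithms can be sketched concretely. The slate utility below (each agent is served by the best option in the slate, scored by Borda position) and all names are illustrative assumptions, not the paper's exact objective:

```python
def greedy_slate(rankings, budget):
    """Greedily build a slate of at most `budget` alternatives.

    Assumed utility (illustrative): each agent's satisfaction with a slate
    is the Borda score of the best option in the slate under that agent's
    ranking; the slate value is the sum over agents.
    """
    m = len(rankings[0])  # number of alternatives, indexed 0..m-1
    borda = lambda rank, a: m - 1 - rank.index(a)

    def total(slate):
        return sum(max(borda(r, a) for a in slate) for r in rankings) if slate else 0

    slate = []
    for _ in range(budget):
        # Add the alternative with the largest marginal gain.
        best = max((a for a in range(m) if a not in slate),
                   key=lambda a: total(slate + [a]))
        slate.append(best)
    return slate

# Three agents ranking alternatives 0..2 from most to least preferred.
prefs = [[0, 1, 2], [1, 0, 2], [2, 1, 0]]
print(greedy_slate(prefs, 2))  # [1, 0]
```

With budget 1 this degenerates to a consensus choice; with budget equal to the number of agent types it approaches personalized recommendation, which is the continuum the abstract describes.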
Label Ranking Algorithms: A Survey
Cited by 9 (0 self)
Abstract:
Label ranking is a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes several supervised learning problems, such as multiclass prediction, multilabel classification and hierarchical classification. Unsurprisingly, there exists a plethora of label ranking algorithms in the literature, due in part to this versatile nature of the problem. In this paper, we survey these algorithms.
The Unavailable Candidate Model: A Decision-Theoretic View of Social Choice
Cited by 8 (5 self)
Abstract:
One of the fundamental problems in the theory of social choice is aggregating the rankings of a set of agents (or voters) into a consensus ranking. Rank aggregation has found application in a variety of computational contexts. However, the goal of constructing a consensus ranking rather than, say, a single outcome (or winner) is often left unjustified, calling into question the suitability of classical rank aggregation methods. We introduce a novel model which offers a decision-theoretic motivation for constructing a consensus ranking. Our unavailable candidate model assumes that a consensus choice must be made, but that candidates may become unavailable after voters express their preferences. Roughly speaking, a consensus ranking serves as a compact, easily communicable representation of a decision policy that can be used to make choices in the face of uncertain candidate availability. We use this model to define a principled aggregation method that minimizes expected voter dissatisfaction with the chosen candidate. We give exact and approximation algorithms for computing optimal rankings and provide computational evidence for the effectiveness of a simple greedy scheme. We also describe strong connections to popular voting protocols such as the plurality rule and the Kemeny consensus, showing specifically that Kemeny produces optimal rankings in the unavailable candidate model under certain conditions.
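The model's central quantity can be sketched in a few lines, under simplifying assumptions of ours (not necessarily the paper's exact setup): every candidate is independently available with probability p, and a voter's dissatisfaction is the chosen candidate's position in her own ranking.

```python
def expected_dissatisfaction(consensus, voter_ranks, p):
    """Expected total voter dissatisfaction of a consensus ranking.

    Illustrative model: each candidate is independently available with
    probability p; the highest-ranked available candidate in `consensus`
    is chosen; a voter's dissatisfaction is that candidate's position in
    the voter's own ranking (0 = favorite).
    """
    total = 0.0
    stay = 1.0  # probability that all earlier-ranked candidates were unavailable
    for cand in consensus:
        choose = stay * p  # this candidate is the first available one
        total += choose * sum(r.index(cand) for r in voter_ranks)
        stay *= (1.0 - p)
    # Condition on at least one candidate being available.
    n = len(consensus)
    return total / (1.0 - (1.0 - p) ** n)
```

The i-th ranked candidate is chosen with probability proportional to p(1-p)^(i-1), so positions near the top of the consensus ranking dominate the expectation; a principled aggregation method would pick the ranking minimizing this quantity.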
Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning
Cited by 6 (3 self)
Abstract:
This paper makes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a “preference-based” approach to reinforcement learning is a possible extension of the type of feedback an agent may learn from. In particular, while conventional RL methods are essentially confined to dealing with numerical rewards, there are many applications in which this type of information is not naturally available, and in which only qualitative reward signals are provided instead. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. Concretely, in this paper, we build on an existing method for approximate policy iteration based on rollouts. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. The advantages of our preference-based policy iteration method are illustrated by means of two case studies.
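The qualitative-feedback idea can be sketched in miniature: actions are compared only through a preference predicate over rollout outcomes, never through a numeric reward. The interface below (`rollout`, `better`) is a hypothetical simplification of rollout-based policy iteration, not the paper's algorithm:

```python
def preferred_action(state, actions, rollout, better, trials=10):
    """Choose an action by pairwise qualitative comparison of rollouts.

    Assumed interface (illustrative): rollout(state, action) returns a
    trajectory outcome; better(a, b) is a qualitative preference, True
    if outcome a is preferred to outcome b.  No numeric reward is used.
    """
    wins = {a: 0 for a in actions}
    for a in actions:
        for b in actions:
            if a == b:
                continue
            for _ in range(trials):
                if better(rollout(state, a), rollout(state, b)):
                    wins[a] += 1
    # The pairwise win counts induce a ranking of actions from most to
    # least promising; return the top-ranked one.
    return max(actions, key=lambda a: wins[a])
```

Learning a label-ranking model from many such pairwise comparisons, rather than re-rolling at every decision, is where the preference-learning machinery the abstract mentions would come in.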
Predicting partial orders: ranking with abstention
In Machine Learning and Knowledge Discovery in Databases, 2010
Cited by 4 (0 self)
Abstract:
The prediction of structured outputs in general, and rankings in particular, has attracted considerable attention in machine learning in recent years, and different types of ranking problems have already been studied. In this paper, we propose a generalization or, say, relaxation of the standard setting, allowing a model to make predictions in the form of partial instead of total orders. We interpret such a prediction as a ranking with partial abstention: if the model is not sufficiently certain regarding the relative order of two alternatives and, therefore, cannot reliably decide whether the former should precede the latter or the other way around, it may abstain from this decision and instead declare these alternatives incomparable. We propose a general approach to ranking with partial abstention, as well as evaluation metrics for measuring the correctness and completeness of predictions. For two types of ranking problems, we show experimentally that this approach is able to achieve a reasonable trade-off between these two criteria.
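A bare-bones version of the abstention idea is thresholding pairwise preference probabilities. The `pref_prob` interface is a hypothetical stand-in for a trained pairwise model, and this sketch omits the extra work needed to guarantee the result is a genuine (transitive) partial order:

```python
def partial_order(labels, pref_prob, threshold=0.75):
    """Predict a set of ordered pairs, abstaining on uncertain ones.

    Assumed interface (illustrative): pref_prob(a, b) is the model's
    probability that label a precedes label b.  A pair is ordered only
    when the model is sufficiently certain; otherwise the two labels
    are left incomparable -- the 'partial abstention' idea.
    """
    order = []
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            p = pref_prob(a, b)
            if p >= threshold:
                order.append((a, b))       # confident: a before b
            elif p <= 1.0 - threshold:
                order.append((b, a))       # confident: b before a
            # else: abstain; a and b remain incomparable
    return order
```

Raising the threshold trades completeness (fewer ordered pairs) for correctness (fewer wrong ones), which is exactly the trade-off the abstract evaluates.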
A literature survey on algorithms for multi-label learning
2010
Cited by 4 (0 self)
Abstract:
Multi-label learning is a form of supervised learning in which the classification algorithm must learn from a set of instances, each of which can belong to multiple classes, and must then be able to predict a set of class labels for a new instance. This generalizes the popular multi-class setting, in which each instance is restricted to a single class label. Multi-label prediction has a wide range of applications, such as text categorization, semantic image labeling and gene functionality classification, and the scope and interest are increasing with modern applications. This survey paper introduces the task of multi-label prediction (classification), presents the sparse literature in this area in an organized manner, discusses different evaluation metrics and performs a comparative analysis of the existing algorithms. The paper also relates multi-label problems to similar but distinct problems that are often reduced to multi-label problems in order to gain access to the wide range of multi-label algorithms.
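Two of the evaluation metrics such surveys typically cover are easy to state directly. A minimal sketch (example data is made up for illustration):

```python
def hamming_loss(true_sets, pred_sets, label_set):
    """Average fraction of labels that are wrong per instance."""
    n, m = len(true_sets), len(label_set)
    wrong = sum(len(t.symmetric_difference(p)) for t, p in zip(true_sets, pred_sets))
    return wrong / (n * m)

def subset_accuracy(true_sets, pred_sets):
    """Fraction of instances whose predicted label set is exactly correct."""
    return sum(t == p for t, p in zip(true_sets, pred_sets)) / len(true_sets)

labels = {"a", "b", "c"}
y_true = [{"a", "b"}, {"c"}]
y_pred = [{"a"}, {"c"}]
print(hamming_loss(y_true, y_pred, labels))  # 1 wrong label out of 6 slots
print(subset_accuracy(y_true, y_pred))       # 0.5
```

Subset accuracy is the strictest metric (all-or-nothing per instance), while Hamming loss gives partial credit, which is one reason comparative analyses report several metrics side by side.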
Efficient Prediction Algorithms for Binary Decomposition Techniques
Cited by 2 (2 self)
Abstract:
Binary decomposition methods transform multi-class learning problems into a series of two-class learning problems that can be solved with simpler learning algorithms. As the number of such binary learning problems often grows super-linearly with the number of classes, we need efficient methods for computing the predictions. In this paper, we discuss an efficient algorithm that queries only a dynamically determined subset of the trained classifiers, but still predicts the same classes that would have been predicted if all classifiers had been queried. The algorithm is first derived for the simple case of pairwise classification, and then generalized to arbitrary pairwise decompositions of the learning problem in the form of ternary error-correcting output codes under a variety of different code designs and decoding strategies.
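The core saving can be sketched for the pairwise (one-vs-one) case: tally votes classifier by classifier and stop as soon as no other class can overtake the leader. The `query` interface is a hypothetical stand-in for the trained binary classifiers, and the stopping rule here is much cruder than the algorithm the paper develops:

```python
from itertools import combinations

def pairwise_predict(classes, query):
    """One-vs-one voting with an early exit (assumes >= 2 classes).

    Assumed interface (illustrative): query(a, b) returns whichever of
    the two classes the binary classifier trained on {a, b} predicts.
    Once the current leader cannot be overtaken by any other class, the
    remaining classifiers are never queried.
    """
    votes = {c: 0 for c in classes}
    remaining = {c: len(classes) - 1 for c in classes}  # unqueried pairs per class
    for a, b in combinations(classes, 2):
        votes[query(a, b)] += 1
        remaining[a] -= 1
        remaining[b] -= 1
        leader = max(classes, key=lambda c: votes[c])
        if all(votes[c] + remaining[c] <= votes[leader]
               for c in classes if c != leader):
            break  # no other class can catch up, even winning every remaining pair
    return leader
```

The prediction matches full one-vs-one voting (up to tie-breaking), but with many classes most of the quadratic number of classifiers can go unqueried.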
Active Learning Ranking from Pairwise Preferences with Almost Optimal Query Complexity
Cited by 2 (0 self)
Abstract:
Given a set V of n elements, we wish to linearly order them using pairwise preference labels which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most O(n poly(log n, 1/ε)) preference labels for a regret of ε times the optimal loss. This is strictly better, and often significantly better, than what non-adaptive sampling could achieve. Our main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: what is a provably correct way to sample preference labels?
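The loss being minimized is simple to state in code. A minimal sketch (the dictionary encoding of preference labels is our illustrative choice): the loss of a linear order is the number of labeled pairs whose preferred element comes later in the order.

```python
def disagreements(order, labels):
    """Loss of a linear order against (possibly non-transitive) pairwise labels.

    Assumed encoding (illustrative): `labels` maps an unordered pair
    frozenset({u, v}) to the preferred element.  A pair is a
    disagreement when its preferred element appears later in `order`.
    """
    pos = {v: i for i, v in enumerate(order)}
    return sum(1 for pair, winner in labels.items()
               if pos[winner] > min(pos[v] for v in pair))
```

Because the labels may be non-transitive, even the best order can have nonzero loss; the paper's contribution is achieving loss within a (1 + ε) factor of that optimum while querying only a near-linear number of the Θ(n²) possible pairs, adaptively.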