Results 1–10 of 13
Analysis of a greedy active learning strategy
2005
Cited by 79 (3 self)
Abstract
We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels which need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.
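The greedy rule analyzed in this paper queries the point whose label would most evenly split the remaining version space (generalized binary search). A minimal sketch of that idea, using a hypothetical class of 1-D threshold classifiers as the concept class (the pool, hypotheses, and target below are illustrative, not from the paper):

```python
# Greedy version-space splitting ("generalized binary search"):
# repeatedly query the unlabeled point whose label would most
# evenly split the surviving hypotheses. Threshold classifiers
# on a line serve as a hypothetical concrete concept class.

def predict(theta, x):
    return 1 if x >= theta else 0

def greedy_active_learn(pool, hypotheses, oracle):
    """Shrink the version space to a single consistent hypothesis."""
    V = list(hypotheses)
    queries = 0
    while len(V) > 1:
        # pick the query that splits V most evenly
        x = min(pool, key=lambda x: abs(
            sum(predict(h, x) for h in V) - len(V) / 2))
        y = oracle(x)                      # ask the oracle for the label
        V = [h for h in V if predict(h, x) == y]
        queries += 1
    return V[0], queries

pool = list(range(16))
hypotheses = [t + 0.5 for t in range(16)]   # thresholds between points
target = 11.5
h, n = greedy_active_learn(pool, hypotheses, lambda x: predict(target, x))
print(h, n)   # finds the target threshold in log2(16) = 4 queries
```

For thresholds on a line the even-split rule reduces to ordinary binary search, which is why the adaptive strategy needs exponentially fewer labels than passively labeling the whole pool.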
Generalization Error Bounds for Collaborative Prediction with Low-Rank Matrices
In Advances in Neural Information Processing Systems 17, 2005
Cited by 23 (2 self)
Abstract
We prove generalization error bounds for predicting entries in a partially observed matrix by approximating the observed entries with a low-rank matrix. To do so, we bound the number of sign configurations of low-rank matrices using a result about realizable oriented matroids.
Semantic feedback for hybrid recommendations in Recommendz
In Proceedings of the IEEE International Conference on e-Technology, e-Commerce, and e-Service (EEE-05), Hong Kong, 2005
Cited by 9 (2 self)
Abstract
In this paper we discuss the Recommendz recommender system. This domain-independent system combines the advantages of collaborative and content-based filtering in a novel way. By allowing users to provide feedback not only about an item as a whole, but also about the properties of an item that motivated their opinion, the system appears to achieve increased performance. The features used to describe items are specified by the users of the system rather than predetermined using manual knowledge engineering. We describe a method for combining descriptive features and simple ratings, and provide a performance analysis.
A General Dimension for Query Learning
2002
Cited by 5 (2 self)
Abstract
We introduce a new combinatorial dimension that characterizes the number of queries needed to learn, no matter what set of queries is used. This new dimension generalizes previous dimensions, providing upper and lower bounds on the query complexity for all sorts of queries, not just example-based queries as in previous works. Moreover, the new characterization is valid not only for exact learning but also for approximate learning. We present several …
Simultaneous learning and covering with adversarial noise
In ICML, 2011
Cited by 4 (2 self)
Abstract
We study simultaneous learning and covering problems: submodular set cover problems that depend on the solution to an active (query) learning problem. The goal is to jointly minimize the cost of both learning and covering. We extend recent work in this setting to allow for a limited amount of adversarial noise. Certain noisy query learning problems are a special case of our problem. Crucial to our analysis is a lemma showing that the logical OR of two submodular cover constraints can be reduced to a single submodular set cover constraint. Combined with known results, this new lemma allows arbitrary monotone circuits of submodular cover constraints to be reduced to a single constraint. As an example practical application, we present a movie recommendation website that minimizes the total cost of learning what the user wants to watch and recommending a set of movies.

1. Background. Consider a movie recommendation problem where we want to recommend to a user a small set of movies to watch. Assume first that we already have some model of the user’s taste in movies (for example, learned from the user’s ratings history or stated genre preferences). In this case, we can pose the recommendation problem as an optimization problem: using the model, we can design an objective function F(S) which measures the quality of a set of movie recommendations S ⊆ V. Our goal is then to maximize F(S) subject to a constraint on the size or cost of S (e.g. |S| ≤ k). Alternatively …
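When F is monotone submodular, the cardinality-constrained maximization sketched in the background passage is typically attacked with the classic greedy algorithm, which achieves a (1 − 1/e) approximation. A minimal sketch, where the coverage-style objective and the movie/interest data are hypothetical illustrations, not taken from the paper:

```python
# Greedy maximization of a monotone submodular function F(S)
# subject to |S| <= k, as in the recommendation framing above.

def greedy_max(V, F, k):
    """Pick up to k elements of V greedily by marginal gain in F."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for v in V - S:
            gain = F(S | {v}) - F(S)          # marginal gain of adding v
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:                      # no element improves F
            break
        S.add(best)
    return S

# Hypothetical data: each movie "covers" a set of user interests,
# and F(S) counts how many distinct interests a set S covers.
covers = {"A": {1, 2}, "B": {2, 3}, "C": {4}, "D": {1}}
F = lambda S: len(set().union(*(covers[m] for m in S)) if S else set())

S = greedy_max(set(covers), F, k=2)
print(sorted(S))  # a 2-movie set covering 3 of the 4 interests
```

Coverage functions like this F are monotone submodular (adding a movie never helps more later than it does now), which is exactly the property the greedy guarantee needs.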
Recommending informative links
In Proceedings of the IJCAI-05 Workshop on Intelligent Techniques for Web Personalization (ITWP'05), 2005
Cited by 3 (2 self)
Abstract
One class of recommenders aims to minimize the number of clicks users need to reach the information they are looking for. Recommenders typically estimate the probability that pages contain the user’s target information and provide the user with links to the most promising pages. In this paper we show that this greedy strategy can lead to recommendations that are suboptimal in terms of the number of clicks. We present a recommendation method which aims at gathering information about the user’s targets rather than guessing the target immediately. This method uses recommendations as questions and clicks on links as answers. Evaluation shows that this strategy leads to significantly shorter user sessions than the greedy strategy.
On User Recommendations Based on Multiple Cues
In WI/IAT 2003 Workshop on Applications, Products, and Services of Web-based Support Systems, 2003
Cited by 2 (1 self)
Abstract
In this paper we present an overview of a recommender system that attempts to predict user preferences based on several sources, including prior choices and selected user-defined features. By using a combination of collaborative filtering and semantic features, we hope to provide performance superior to either alone. Further, our set of semantic features is acquired and updated using a learning-based procedure that avoids the need for manual knowledge engineering. Our system is implemented in a web-based application server environment and can be used with arbitrary domains, although the test data reported here is restricted to recommendations of movies.
Output Divergence Criterion for Active Learning in Collaborative Settings
Cited by 1 (1 self)
Abstract
In this paper, we address the task of active learning for linear regression models in collaborative settings. The goal of active learning is to select training points that would allow accurate prediction of test output values. We propose a new active learning criterion that aims to directly improve the accuracy of the output value estimates by analyzing the effect of the new training points on the estimates of the output values. The advantages of the proposed method are highlighted in collaborative settings, where most of the data points are missing and the number of training data points is much smaller than the number of parameters of the model.
A general dimension for query learning
2006
Abstract
We introduce a combinatorial dimension that characterizes the number of queries needed to exactly (or approximately) learn concept classes in various models. Our general dimension provides tight upper and lower bounds on the query complexity for all sorts of queries, not only for example-based queries as in previous works. As an application we show that for learning DNF formulas, unspecified attribute value membership and equivalence queries are not more powerful than standard membership and equivalence queries. Further, in the approximate learning setting, we use the general dimension to characterize the query complexity in the statistical query as well as the learning by distances model. Moreover, we derive close bounds on the number of statistical queries needed to approximately learn DNF formulas.
Active Learning and Submodular Functions
2012
Abstract
Active learning is a machine learning setting where the learning algorithm decides what data is labeled. Submodular functions are a class of set functions for which many optimization problems have efficient exact or approximate algorithms. We examine their connections.
• We propose a new class of interactive submodular optimization problems which connect and generalize submodular optimization and active learning over a finite query set. We derive greedy algorithms with approximately optimal worst-case cost. These analyses apply to exact learning, approximate learning, learning in the presence of adversarial noise, and applications that mix learning and covering.
• We consider active learning in a batch, transductive setting where the learning algorithm selects a set of examples to be labeled at once. In this setting we derive new error bounds which use symmetric submodular functions for regularization, and we give algorithms which approximately minimize these bounds.
• We consider a repeated active learning setting where the learning algorithm solves a sequence of related learning problems. We propose an approach to this problem based on a new online prediction version of submodular set cover. A common …
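The basic primitive underlying the interactive problems listed above is submodular set cover: pay as little as possible to drive a monotone function F up to its maximum. The standard greedy algorithm (pick the element with the best gain per unit cost) is a useful mental model; the ground set, costs, and coverage function below are a hypothetical instance, not from the thesis:

```python
# Greedy submodular set cover: add the element with the best
# coverage gain per unit cost until the monotone function F
# reaches its maximum value F(V).

def greedy_set_cover(V, F, cost):
    """Return S with F(S) == F(V), built greedily by gain/cost."""
    target = F(V)
    S = set()
    while F(S) < target:
        v = max((v for v in V - S),
                key=lambda v: (F(S | {v}) - F(S)) / cost[v])
        if F(S | {v}) == F(S):      # no remaining element makes progress
            break
        S.add(v)
    return S

# Hypothetical instance: cover the universe {1, ..., 5}.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
cost = {"a": 1.0, "b": 1.0, "c": 1.0}
F = lambda S: len(set().union(*(sets[v] for v in S)) if S else set())

S = greedy_set_cover(set(sets), F, cost)
print(sorted(S))  # a cheap cover of all five universe elements
```

For integer-valued monotone submodular F this greedy rule is within a logarithmic factor of the optimal cover cost (Wolsey's classic analysis), which is the kind of guarantee the interactive variants above generalize.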