Results 1–10 of 91
Multi-Armed Bandits in Metric Spaces
 STOC '08, 2008
Abstract

Cited by 63 (8 self)
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of n trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the Lipschitz MAB problem. We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L, X) we define an isometry invariant MaxMinCOV(X) which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
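As a point of reference for the Lipschitz MAB setting, the sketch below runs standard UCB1 over a fixed uniform discretization of the strategy space [0, 1]. This is the naive baseline that adaptive approaches improve on, not the paper's algorithm; the payoff function, Gaussian noise model, and grid size in the usage note are illustrative assumptions.

```python
import math
import random

def ucb1_on_discretization(payoff, n_rounds, n_arms, seed=0):
    """UCB1 over a uniform grid of the strategy space [0, 1].

    If the mean-payoff function is L-Lipschitz, the best grid point is
    within L/(2*(n_arms-1)) of the true optimum, so the grid resolution
    trades discretization error against per-arm exploration cost."""
    rng = random.Random(seed)
    arms = [i / (n_arms - 1) for i in range(n_arms)]
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for t in range(1, n_rounds + 1):
        if t <= n_arms:                       # play each arm once first
            i = t - 1
        else:                                 # pick the highest UCB index
            i = max(range(n_arms),
                    key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        r = payoff(arms[i]) + rng.gauss(0, 0.1)   # noisy payoff observation
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]    # running average
        total += r
    best = arms[max(range(n_arms), key=lambda j: means[j])]
    return total, best
```

For example, with the 1-Lipschitz payoff `lambda x: 1 - abs(x - 0.7)` and an 11-point grid, the empirically best arm lands on the grid point nearest the true optimum 0.7.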
Pure exploration in multi-armed bandits problems
 In Proceedings of the Twentieth International Conference on Algorithmic Learning Theory (ALT 2009), 2009
Abstract

Cited by 58 (16 self)
We consider the framework of stochastic multi-armed bandit problems and study the possibilities and limitations of strategies that explore the arms sequentially. The strategies are assessed not in terms of their cumulative regrets, as is usually the case, but through quantities referred to as simple regrets. The latter are related to the (expected) gains of the decisions that the strategies would recommend for a new one-shot instance of the same multi-armed bandit problem. Here, exploration is only constrained by the number of available rounds (not necessarily known in advance), in contrast to the case when cumulative regrets are considered and exploitation needs to be performed at the same time. We start by indicating the links between simple and cumulative regrets. A small cumulative regret entails a small simple regret, but too small a cumulative regret prevents the simple regret from decreasing exponentially towards zero, its optimal distribution-dependent rate. We therefore introduce specific strategies, for which we prove both distribution-dependent and distribution-free bounds. A concluding experimental study puts these theoretical bounds in perspective and shows the interest of non-uniform exploration of the arms.
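A minimal illustration of the simple-regret objective described above, assuming Gaussian rewards and the simplest pure-exploration strategy: allocate samples uniformly in round-robin, then recommend the arm with the highest empirical mean. The paper's contribution is non-uniform allocation; this sketch only makes the evaluation criterion concrete.

```python
import random

def uniform_explore_recommend(arm_means, n_rounds, seed=0):
    """Pure-exploration sketch: sample arms round-robin, then recommend
    the empirically best arm.  The simple regret is the gap between the
    best true mean and the recommended arm's true mean (no cumulative
    reward is tracked, unlike the usual bandit objective)."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(n_rounds):
        i = t % k                              # uniform round-robin exploration
        sums[i] += rng.gauss(arm_means[i], 1.0)
        counts[i] += 1
    recommended = max(range(k), key=lambda i: sums[i] / counts[i])
    simple_regret = max(arm_means) - arm_means[recommended]
    return recommended, simple_regret
```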
An Adaptive Algorithm for Selecting Profitable Keywords for Search-Based Advertising Services
 In EC ’06: Proceedings of the 7th ACM Conference on Electronic Commerce, 2006
Abstract

Cited by 52 (0 self)
Increases in online searches have spurred the growth of search-based advertising services offered by search engines, enabling companies to promote their products to consumers based on search queries. With millions of available keywords whose click-through rates and profits are highly uncertain, identifying the most profitable set of keywords becomes challenging. We formulate a stylized model of keyword selection in search-based advertising services. Assuming known profits and unknown click-through rates, we develop an approximate adaptive algorithm that prioritizes keywords based on a prefix ordering – a sorting of keywords in descending order of expected-profit-to-cost ratio (or “bang-per-buck”). We show that the average expected profit generated by our algorithm converges to near-optimal profits, with a convergence rate that is independent of the number of keywords and scales gracefully with the problem’s parameters. By leveraging the special structure of our problem, our algorithm trades off bias for a faster convergence rate, converging very quickly but with only near-optimal profit in the limit. Extensive numerical simulations show that when the number of keywords is large, our algorithm outperforms existing methods, increasing profits by about 20% in as little as 40 periods. We also extend our algorithm to the setting where both the click-through rates and the expected profits are unknown.
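The prefix ordering the abstract refers to can be sketched as follows, assuming (hypothetically) that expected profits and costs are already known; the paper's actual algorithm estimates the click-through side adaptively. The keyword names and numbers in the usage note are made up for illustration.

```python
def prefix_by_bang_per_buck(keywords, budget):
    """Greedy prefix-ordering sketch: sort keywords by expected-profit-to-cost
    ratio ("bang-per-buck") and select the longest prefix that fits the
    budget.  `keywords` maps name -> (expected_profit, cost)."""
    ranked = sorted(keywords.items(),
                    key=lambda kv: kv[1][0] / kv[1][1],   # profit / cost
                    reverse=True)
    chosen, spent = [], 0.0
    for name, (profit, cost) in ranked:
        if spent + cost > budget:     # prefix ends at the first unaffordable keyword
            break
        chosen.append(name)
        spent += cost
    return chosen
```

With hypothetical keywords `{"shoes": (10, 2), "boots": (6, 3), "laces": (4, 4)}` and budget 5, the ratios 5.0 > 2.0 > 1.0 yield the prefix `["shoes", "boots"]`.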
Efficient bandit algorithms for online multiclass prediction
 In ICML, volume 307 of ACM International Conference Proceeding Series, 2008
Abstract

Cited by 38 (4 self)
This paper introduces the Banditron, a variant of the Perceptron [Rosenblatt, 1958], for the multiclass bandit setting. The multiclass bandit setting models a wide range of practical supervised learning applications where the learner only receives partial feedback (referred to as “bandit” feedback, in the spirit of multi-armed bandit models) with respect to the true label (e.g. in many web applications users often only provide positive “click” feedback, which does not necessarily fully disclose the true label). The Banditron can learn in a multiclass classification setting with “bandit” feedback which only reveals whether the prediction made by the algorithm was correct (but does not necessarily reveal the true label). We provide (relative) mistake bounds which show that the Banditron enjoys favorable performance, and our experiments demonstrate the practicality of the algorithm. Furthermore, this paper pays close attention to the important special case when the data is linearly separable, a problem which has been exhaustively studied in the full-information setting yet is novel in the bandit setting.
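A sketch of one round of the Banditron update as described by Kakade, Shalev-Shwartz, and Tewari: the greedy prediction is mixed with uniform exploration at rate γ, and only the correctness of the sampled label is observed. The dimensions and training loop in the test are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def banditron_step(W, x, y, gamma, rng):
    """One Banditron round.  W: (k, d) weight matrix, x: (d,) features,
    y: true label.  Only the indicator 1[sampled label == y] is used,
    mimicking bandit feedback; the true label itself is never revealed
    to the update beyond that bit."""
    k = W.shape[0]
    y_hat = int(np.argmax(W @ x))              # greedy prediction
    probs = np.full(k, gamma / k)
    probs[y_hat] += 1.0 - gamma                # exploration mixture
    y_tilde = int(rng.choice(k, p=probs))      # sampled prediction
    correct = (y_tilde == y)                   # the only feedback observed
    update = np.zeros_like(W)
    if correct:
        update[y_tilde] += x / probs[y_tilde]  # importance-weighted positive term
    update[y_hat] -= x                         # Perceptron-style negative term
    return W + update, correct
```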
Interactively Optimizing Information Retrieval Systems as a Dueling Bandits Problem
Abstract

Cited by 35 (8 self)
We present an online learning framework tailored towards real-time learning from observed user behavior in search engines and other information retrieval systems. In particular, we only require pairwise comparisons, which were shown to be reliably inferred from implicit feedback (Joachims et al., 2007; Radlinski et al., 2008b). We present an algorithm with theoretical guarantees as well as simulation results.
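The dueling-bandit idea of learning from pairwise comparisons alone can be sketched with a comparison-driven hill climber: perturb the current parameters, duel the candidate against the incumbent, and move toward the winner. This is only loosely modeled on the paper's setting; the comparison oracle, step sizes, and the quadratic utility in the test are assumptions made for illustration.

```python
import math
import random

def dueling_hill_climb(compare, dim, n_rounds, delta=0.5, eta=0.2, seed=0):
    """Optimize a parameter vector using only pairwise comparisons.
    compare(w1, w2) -> True iff w2 is preferred to w1; no numeric
    utilities are ever observed, only duel outcomes."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(n_rounds):
        u = [rng.gauss(0, 1) for _ in range(dim)]       # random direction
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        w_prime = [wi + delta * ui for wi, ui in zip(w, u)]
        if compare(w, w_prime):                          # challenger wins the duel
            w = [wi + eta * ui for wi, ui in zip(w, u)]  # small step toward it
        # otherwise keep the incumbent unchanged
    return w
```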
Algorithms for Infinitely Many-Armed Bandits
Abstract

Cited by 34 (4 self)
We consider multi-armed bandit problems where the number of arms is larger than the possible number of experiments. We make a stochastic assumption on the mean reward of a newly selected arm which characterizes its probability of being a near-optimal arm. Our assumption is weaker than in previous works. We describe algorithms based on upper confidence bounds applied to a restricted set of randomly selected arms and provide upper bounds on the resulting expected regret. We also derive a lower bound which matches (up to a logarithmic factor) the upper bound in some cases.
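A sketch of the upper-confidence-bounds-on-a-random-subset idea: draw a fixed number of arms from the (effectively infinite) reservoir, then run a UCB rule on just those arms. The reservoir distribution, subset size, and Bernoulli reward model here are illustrative; the paper's analysis concerns how the subset size should scale with the horizon.

```python
import math
import random

def ucb_on_random_subset(draw_arm_mean, n_rounds, n_selected, seed=0):
    """Infinitely many-armed bandit sketch.  `draw_arm_mean()` samples a
    fresh arm's (hidden) mean reward from the reservoir; the algorithm
    commits to `n_selected` such arms and runs UCB over them, observing
    Bernoulli rewards.  Returns the average reward per round."""
    rng = random.Random(seed)
    means = [draw_arm_mean() for _ in range(n_selected)]   # hidden from the learner
    counts = [0] * n_selected
    emp = [0.0] * n_selected
    reward = 0.0
    for t in range(1, n_rounds + 1):
        if t <= n_selected:                    # initialize each selected arm
            i = t - 1
        else:                                  # UCB index over the subset only
            i = max(range(n_selected),
                    key=lambda j: emp[j] + math.sqrt(2 * math.log(t) / counts[j]))
        r = 1.0 if rng.random() < means[i] else 0.0
        counts[i] += 1
        emp[i] += (r - emp[i]) / counts[i]
        reward += r
    return reward / n_rounds
```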
Online optimization in X-armed bandits
 In Advances in Neural Information Processing Systems 22, 2008
Abstract

Cited by 33 (6 self)
We consider a generalization of stochastic bandit problems where the set of arms, X, is allowed to be a generic topological space and the mean-payoff function is “locally Lipschitz” with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm-selection policy whose regret improves upon previous results for a large class of problems. In particular, our results imply that if X is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally Hölder with a known exponent, then the expected regret is bounded, up to a logarithmic factor, by √n, i.e., the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm for the class of problems considered.
Online Linear Optimization and Adaptive Routing
, 2006
Abstract

Cited by 31 (3 self)
This paper studies an online linear optimization problem generalizing the multi-armed bandit problem. Motivated primarily by the task of designing adaptive routing algorithms for overlay networks, we present two randomized online algorithms for selecting a sequence of routing paths in a network with unknown edge delays varying adversarially over time. In contrast with earlier work on this problem, we assume that the only feedback after choosing such a path is the total end-to-end delay of the selected path. We present two algorithms whose regret is sublinear in the number of trials and polynomial in the size of the network. The first of these algorithms generalizes to solve any online linear optimization problem, given an oracle for optimizing linear functions over the set of strategies; our work may thus be interpreted as a general-purpose reduction from offline to online linear optimization. A key element of this algorithm is the notion of a barycentric spanner, a special type of basis for the vector space of strategies which allows any feasible strategy to be expressed as a linear combination of basis vectors using bounded coefficients. We also present a second algorithm for the online shortest path problem, which solves the problem using a chain of online decision oracles, one at each node of the graph. This has several advantages over the online linear optimization approach. First, it is effective against an adaptive adversary, whereas our linear optimization algorithm assumes an oblivious adversary. Second, even in the case of an oblivious adversary, the second algorithm performs slightly better than the first, as measured by their additive regret.
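The barycentric spanner mentioned above can be constructed by a determinant-swap procedure: repeatedly replace a basis element whenever some strategy increases |det| by more than a factor C; by Cramer's rule, once no swap applies, every strategy has coefficients bounded by C in that basis. The sketch below simplifies the construction by seeding the basis with the first d strategies, which assumes they are linearly independent.

```python
import numpy as np

def barycentric_spanner(strategies, c=2.0):
    """Determinant-swap construction of a C-approximate barycentric spanner.
    Returns a (d, d) basis B drawn from the strategy set such that every
    strategy is a linear combination of the rows of B with coefficients
    in [-C, C].  `strategies` is an (m, d) array spanning R^d."""
    X = np.asarray(strategies, dtype=float)
    d = X.shape[1]
    B = X[:d].copy()          # simplifying assumption: first d rows independent
    changed = True
    while changed:
        changed = False
        for i in range(d):
            for x in X:
                trial = B.copy()
                trial[i] = x
                # swap in x if it grows the basis volume by more than factor c;
                # the volume at most doubles finitely often, so this terminates
                if abs(np.linalg.det(trial)) > c * abs(np.linalg.det(B)):
                    B = trial
                    changed = True
    return B
```

Solving B.T @ coeffs = x for any strategy x then yields coefficients bounded by C in absolute value.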
Improved Rates for the Stochastic Continuum-Armed Bandit Problem
 In 20th Conference on Learning Theory (COLT), 2007
Abstract

Cited by 31 (5 self)
Considering one-dimensional continuum-armed bandit problems, we propose an improvement of an algorithm of Kleinberg and a new set of conditions which give rise to improved rates. In particular, we introduce a novel assumption that is complementary to the previous smoothness conditions, while at the same time smoothness of the mean payoff function is required only at the maxima. Under these new assumptions, new bounds on the expected regret are derived. In particular, we show that, apart from logarithmic factors, the expected regret scales with the square root of the number of trials, provided that the mean payoff function has finitely many maxima and its second derivatives are continuous and non-vanishing at the maxima. This improves a previous result of Cope by weakening the assumptions on the function. We also derive matching lower bounds. To complement the bounds on the expected regret, we provide high-probability bounds which exhibit similar scaling.
Contextual Bandits with Similarity Information
 In 24th Annual Conference on Learning Theory, 2011
Abstract

Cited by 29 (4 self)
In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence of choices. In each round it chooses from a time-invariant set of alternatives and receives the payoff associated with this alternative. While the case of small strategy sets is by now well understood, a lot of recent work has focused on MAB problems with exponentially or infinitely large strategy sets, where one needs to assume extra structure in order to make the problem tractable. In particular, recent literature considered information on similarity between arms. We consider similarity information in the setting of contextual bandits, a natural extension of the basic MAB problem where before each round an algorithm is given the context – a hint about the payoffs in this round. Contextual bandits are directly motivated by placing advertisements on webpages, one of the crucial problems in sponsored search. A particularly simple way to represent similarity information in the contextual bandit setting is via a similarity distance between the context-arm pairs which bounds from above the difference between the respective expected payoffs.