Results 1–10 of 8,294
Online Rank Elicitation for Plackett-Luce: A Dueling Bandits Approach
"... We study the problem of online rank elicitation, assuming that rankings of a set of alternatives obey the Plackett-Luce distribution. Following the setting of the dueling bandits problem, the learner is allowed to query pairwise comparisons between alternatives, i.e., to sample pairwise marginals of ..."
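As a sketch of the model this snippet describes: under a Plackett-Luce model with positive weights, a ranking is drawn by repeatedly picking the next item with probability proportional to its weight among the items still unranked, and the pairwise marginal P(i before j) works out to w_i / (w_i + w_j). A minimal illustration (function names and the toy weights are illustrative, not taken from the paper):

```python
import random

def sample_pl_ranking(weights, rng):
    """Draw a full ranking from a Plackett-Luce model: the next item is chosen
    with probability proportional to its weight among the remaining items."""
    items = list(range(len(weights)))
    ranking = []
    while items:
        pick = rng.choices(items, weights=[weights[i] for i in items])[0]
        ranking.append(pick)
        items.remove(pick)
    return ranking

def pairwise_marginal(weights, i, j):
    """Under Plackett-Luce, P(item i is ranked before item j) = w_i / (w_i + w_j)."""
    return weights[i] / (weights[i] + weights[j])

# empirical check of the pairwise marginal on a 3-item model
rng = random.Random(1)
w = [3.0, 2.0, 1.0]
trials = 20000
wins = 0
for _ in range(trials):
    r = sample_pl_ranking(w, rng)
    if r.index(0) < r.index(1):
        wins += 1
print(wins / trials, pairwise_marginal(w, 0, 1))  # both close to 3/5
```

Sampling such pairwise comparisons is exactly the marginal information the dueling-bandits learner in the abstract is allowed to query.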
Power-law distributions in empirical data
, 2009. ISSN 0036-1445. doi:10.1137/070710111. URL http://dx.doi.org/10.1137/070710111
Cited by 589 (7 self)
Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the empirical detection and characterization of power laws is made difficult by the large fluctuations that occur in the tail of the distribution. In particular, standard methods such as least-squares fitting are known to produce systematically biased estimates of parameters for power-law distributions and should not be used in most circumstances. Here we describe statistical techniques for making accurate parameter estimates for power-law data, based on maximum likelihood methods and the Kolmogorov-Smirnov statistic. We also show how to tell whether the data follow a power-law distribution at all, defining quantitative measures that indicate when the power law is a reasonable fit to the data and when it is not. We demonstrate these methods by applying them to twenty-four real-world data sets from a range of different disciplines. Each of the data sets has been conjectured previously to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.
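The two ingredients named in this abstract are standard: for a continuous power law p(x) ∝ x^(−α) on x ≥ x_min, the maximum-likelihood exponent is α̂ = 1 + n / Σ ln(x_i / x_min), and goodness of fit is measured by the Kolmogorov-Smirnov distance between the empirical CDF of the tail and the fitted CDF F(x) = 1 − (x_min / x)^(α−1). A minimal sketch on synthetic data (function names are illustrative; this is not the authors' code):

```python
import math
import random

def fit_power_law(xs, x_min):
    """Maximum-likelihood estimate of the exponent of a continuous power law
    p(x) proportional to x^(-alpha) for x >= x_min."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def ks_distance(xs, x_min, alpha):
    """Kolmogorov-Smirnov distance between the empirical CDF of the tail and
    the fitted power-law CDF F(x) = 1 - (x_min / x)^(alpha - 1)."""
    tail = sorted(x for x in xs if x >= x_min)
    n = len(tail)
    d = 0.0
    for i, x in enumerate(tail):
        model = 1.0 - (x_min / x) ** (alpha - 1.0)
        d = max(d, abs(model - i / n), abs(model - (i + 1) / n))
    return d

# synthetic check: inverse-transform samples from a power law with alpha = 2.5,
# using x = x_min * (1 - u)^(-1 / (alpha - 1)) for uniform u
rng = random.Random(0)
x_min, alpha_true = 1.0, 2.5
data = [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(20000)]
alpha_hat = fit_power_law(data, x_min)
print(f"alpha_hat = {alpha_hat:.3f}, KS = {ks_distance(data, x_min, alpha_hat):.4f}")
```

On power-law data the recovered exponent is close to the true value and the KS distance is small; on data that merely look heavy-tailed, the KS distance stays large, which is the basis of the quantitative fit test the abstract mentions.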
Homo sacer: sovereign power and bare life
, 1998
"... Homo Sacer. Sovereign Power and Bare Life was originally published as Homo sacer. Il potere sovrano e la nuda vita, © 1995 Giulio Einaudi editore s.p.a. ..."
Cited by 285 (0 self)
On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms
, 1989
"... Short abstract, isn't it? P.A.C.S. numbers 05.20, 02.50, 87.10 1 Introduction Large Numbers "...the optimal tour displayed (see Figure 6) is the possible unique tour having one arc fixed from among 10^655 tours that are possible among 318 points and have one arc fixed. Assuming that ..."
Cited by 241 (10 self)
PAC-Bayesian Analysis of Contextual Bandits
"... We derive an instantaneous (per-round) data-dependent regret bound for stochastic multi-armed bandits with side information (also known as contextual bandits). The scaling of our regret bound with the number of states (contexts) N goes as √(N · I_ρt(S; A)), where I_ρt(S; A) is the mutual information between ..."
Cited by 6 (3 self)
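The quantity driving the bound in this snippet, the mutual information I(S; A) between states and actions under a joint distribution, is easy to compute directly: I(S; A) = Σ p(s, a) · log(p(s, a) / (p(s) p(a))). A minimal sketch (the function name and the toy distributions are illustrative, not from the paper):

```python
import math

def mutual_information(joint):
    """I(S; A) = sum over (s, a) of p(s, a) * log(p(s, a) / (p(s) * p(a))),
    in nats, for a joint distribution given as {(s, a): probability}."""
    p_s, p_a = {}, {}
    for (s, a), p in joint.items():
        p_s[s] = p_s.get(s, 0.0) + p
        p_a[a] = p_a.get(a, 0.0) + p
    return sum(p * math.log(p / (p_s[s] * p_a[a]))
               for (s, a), p in joint.items() if p > 0.0)

N = 4
# action fully determined by the state: I(S; A) = log N, its maximum
deterministic = {(s, s): 1.0 / N for s in range(N)}
# action independent of the state: I(S; A) = 0
independent = {(s, a): 1.0 / (N * N) for s in range(N) for a in range(N)}
print(mutual_information(deterministic), mutual_information(independent))
```

The two extremes illustrate why such a bound is data-dependent: a policy whose actions ignore the context pays nothing for the N states, while a fully state-dependent policy pays the worst-case log N inside the square root.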
Information Retrieval Interaction
, 1992
"... this document, text or image about?' Gradually moving from the left to the right in Figure 3.1, different understandings of this concept evolve ..."
Cited by 242 (8 self)
PAC bounds for multi-armed bandit and Markov decision processes
In Fifteenth Annual Conference on Computational Learning Theory (COLT), 2002
"... Abstract. The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O((n/ε²) log(1/δ)) times to find an ε-optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O ..."
Cited by 61 (2 self)
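To make the sample-complexity statement concrete, here is a sketch of the naive strategy the abstract contrasts against: sample every arm m = ⌈(2/ε²) ln(2n/δ)⌉ times, so that Hoeffding's inequality plus a union bound makes the empirically best arm ε-optimal with probability at least 1 − δ, at a total cost of O((n/ε²) log(n/δ)) pulls — it is the extra log n factor that the paper's improved O((n/ε²) log(1/δ)) bound removes. Function names and the toy bandit below are illustrative:

```python
import math
import random

def naive_pac_best_arm(pull, n_arms, eps, delta, rng):
    """Naive (eps, delta)-PAC arm selection: sample every arm
    m = ceil((2 / eps^2) * ln(2 * n / delta)) times and return the arm with
    the highest empirical mean.  Hoeffding + union bound guarantee the
    returned arm is eps-optimal with probability >= 1 - delta; total cost is
    n * m = O((n / eps^2) * log(n / delta)) pulls."""
    m = math.ceil((2.0 / eps ** 2) * math.log(2.0 * n_arms / delta))
    means = [sum(pull(a, rng) for _ in range(m)) / m for a in range(n_arms)]
    return max(range(n_arms), key=lambda a: means[a]), n_arms * m

# toy bandit: Bernoulli arms with known means
probs = [0.5, 0.6, 0.8, 0.75]

def pull(arm, rng):
    return 1.0 if rng.random() < probs[arm] else 0.0

rng = random.Random(0)
best, total_pulls = naive_pac_best_arm(pull, len(probs), eps=0.1,
                                       delta=0.05, rng=rng)
print(f"chosen arm = {best}, total pulls = {total_pulls}")
```

With ε = 0.1, both arm 2 (mean 0.8) and arm 3 (mean 0.75) are ε-optimal, and either is an acceptable answer under the PAC guarantee.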