Results 1–10 of 18
A new understanding of prediction markets via no-regret learning
In ACM EC, 2010
Abstract

Cited by 30 (10 self)
We explore the striking mathematical connections that exist between market scoring rules, cost-function-based prediction markets, and no-regret learning. We first show that any cost-function-based prediction market can be interpreted as an algorithm for the commonly studied problem of learning from expert advice by equating the set of outcomes on which bets are placed in the market with the set of experts in the learning setting, and equating trades made in the market with losses observed by the learning algorithm. If the loss of the market organizer is bounded, this bound can be used to derive an O(√T) regret bound for the corresponding learning algorithm. We then show that the class of markets with convex cost functions exactly corresponds to the class of Follow the Regularized Leader learning algorithms, with the choice of a cost function in the market corresponding to the choice of a regularizer in the learning problem. Finally, we show an equivalence between market scoring rules and prediction markets with convex cost functions. This implies both that any market scoring rule can be implemented as a cost-function-based market maker, and that market scoring rules can be interpreted naturally as Follow the Regularized Leader algorithms. These connections provide new insight into how it is that commonly studied markets, such as the Logarithmic Market Scoring Rule, can aggregate opinions into accurate estimates of the likelihood of future events.
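The Logarithmic Market Scoring Rule named above is itself a cost-function-based market, with cost function C(q) = b log Σᵢ exp(qᵢ/b). A minimal Python sketch of how such a market maker prices trades (the liquidity parameter `b` and all function names are illustrative, not taken from the paper):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)  # shift by the max for numerical stability
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices dC/dq_i: a softmax of q/b that sums to 1,
    so each price doubles as the market's probability estimate."""
    m = max(q)
    exps = [math.exp((qi - m) / b) for qi in q]
    z = sum(exps)
    return [e / z for e in exps]

def trade_cost(q, delta, b=100.0):
    """A trader buying the share bundle delta pays C(q + delta) - C(q)."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

Buying shares of an outcome raises its price, and the market maker's worst-case loss is bounded by b log n; a bounded organizer loss of this kind is what the abstract converts into a regret bound.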
Self-financed wagering mechanisms for forecasting
In EC
Abstract

Cited by 11 (5 self)
We examine a class of wagering mechanisms designed to elicit truthful predictions from a group of people without requiring any outside subsidy. We propose a number of desirable properties for wagering mechanisms, identifying one mechanism—weighted-score wagering—that satisfies all of the properties. Moreover, we show that a single-parameter generalization of weighted-score wagering is the only mechanism that satisfies these properties. We explore some variants of the core mechanism based on practical considerations.
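In its standard formulation, weighted-score wagering pays each bettor their wager plus their proper score minus the wager-weighted average score of the group, which is what makes the mechanism self-financed. A sketch under illustrative assumptions (binary event, Brier score normalized to [0, 1]; names are ours, and details may differ from the paper):

```python
def brier_score(p, outcome):
    """Strictly proper scoring rule normalized to [0, 1]; outcome is 0 or 1."""
    return 1.0 - (p - outcome) ** 2

def weighted_score_payoffs(reports, wagers, outcome):
    """Bettor i receives m_i * (1 + s_i - wager-weighted average score)."""
    total = sum(wagers)
    scores = [brier_score(p, outcome) for p in reports]
    avg = sum(m * s for m, s in zip(wagers, scores)) / total
    return [m * (1.0 + s - avg) for m, s in zip(wagers, scores)]
```

Summing the payoffs recovers exactly the total amount wagered, so no outside subsidy is needed: better forecasters are paid at the expense of worse ones.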
Composite Binary Losses, 2009
Abstract

Cited by 11 (8 self)
We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses, which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduce an intrinsic parametrisation of composite binary losses, and give a complete characterisation of the relationship between proper losses and “classification calibrated” losses. We also consider the question of the “best” surrogate binary loss. We introduce a precise notion of “best” and show there exist situations where two convex surrogate losses are incommensurable. We provide a complete explicit characterisation of the convexity of composite binary losses in terms of the link function and the weight function associated with the proper loss which make up the composite loss. This characterisation suggests new ways of “surrogate tuning”. Finally, in an appendix we present some new algorithm-independent results on the relationship between properness, convexity and robustness to misclassification noise for binary losses and show that all convex proper losses are non-robust to misclassification noise.
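A familiar instance of the margin-loss/composite-loss correspondence studied here: the logistic margin loss is exactly the proper log loss composed with the logit link, whose inverse is the sigmoid. A quick numerical check (function names are ours):

```python
import math

def sigmoid(v):
    """Inverse of the logit link psi(p) = log(p / (1 - p))."""
    return 1.0 / (1.0 + math.exp(-v))

def log_loss(p, y):
    """Proper loss for class probability estimation, with label y in {0, 1}."""
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def composite_loss(v, y):
    """Composite loss: proper log loss evaluated at the inverse link of score v."""
    return log_loss(sigmoid(v), y)

def logistic_margin_loss(v, y):
    """Standard margin form log(1 + exp(-y~ * v)) with y~ = 2y - 1 in {-1, +1}."""
    return math.log1p(math.exp(-(2 * y - 1) * v))
```

The logit link is the canonical link for log loss, the case in which (as the abstract notes for the general setting) the composite loss is guaranteed convex.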
Automated Market-Making in the Large: The Gates Hillman Prediction Market
Abstract

Cited by 8 (5 self)
We designed and built the Gates Hillman Prediction Market (GHPM) to predict the opening day of the Gates and Hillman Centers, the new computer science buildings at Carnegie Mellon University. The market ran for almost a year and attracted 169 active traders who placed almost 40,000 bets with an automated market maker. Ranging over 365 possible opening days, the market’s event partition size is the largest ever elicited in any prediction market by an order of magnitude. A market of this size required new advances, including a novel span-based elicitation interface. The results of the GHPM are important for two reasons. First, we uncovered two flaws of current automated market makers, spikiness and liquidity-insensitivity, and we develop the mathematical underpinnings of these flaws. Second, the market provides a valuable corpus of identity-linked trades. We use this data set to explore whether the market reacted to or anticipated official communications, how self-reported trader confidence had little relation to actual performance, and how trade frequencies suggest a power-law distribution. Most significantly, the data enabled us to evaluate two competing hypotheses about how markets aggregate information, the Marginal Trader Hypothesis and the Hayek Hypothesis; the data strongly support the former.
Surrogate Regret Bounds for Proper Losses
Abstract

Cited by 7 (1 self)
We present tight surrogate regret bounds for the class of proper (i.e., Fisher consistent) losses. The bounds generalise the margin-based bounds due to Bartlett et al. (2006). The proof uses Taylor’s theorem and leads to new representations for loss and regret and a simple proof of the integral representation of proper losses. We also present a different formulation of a duality result of Bregman divergences which leads to a simple demonstration of the convexity of composite losses using canonical link functions.
Efficient market making via convex optimization, and a connection to online learning
In ACM Transactions on Economics and Computation (to appear), 2012
Abstract

Cited by 5 (2 self)
We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the market institution’s bounded budget. Although our framework was designed with the goal of deriving efficient automated market makers for markets with very large outcome spaces, it also provides new insights into the relationship between market design and machine learning, and into the complete market setting. Using our framework, we illustrate the mathematical parallels between cost-function-based markets and online learning and establish a correspondence between cost-function-based markets and market scoring rules for complete markets.
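The conjugate-duality construction can be checked numerically in the simplest complete-market case: taking the regularizer to be scaled negative entropy over the probability simplex, the conjugate C(q) = sup_p ⟨q, p⟩ − R(p) recovers the LMSR cost function. A brute-force two-outcome sketch (the grid search and parameter values are illustrative only):

```python
import math

def neg_entropy(p, b):
    """Regularizer R(p) = b * sum_i p_i log p_i over the simplex."""
    return b * sum(pi * math.log(pi) for pi in p if pi > 0)

def conjugate_cost(q, b, grid=2000):
    """Numerically evaluate C(q) = sup_{p in simplex} <q, p> - R(p), two outcomes."""
    best = -float("inf")
    for k in range(1, grid):
        p = (k / grid, 1.0 - k / grid)
        best = max(best, q[0] * p[0] + q[1] * p[1] - neg_entropy(p, b))
    return best

def lmsr_cost(q, b):
    """Closed form of the same conjugate: C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))
```

Other choices of regularizer yield other cost-function-based market makers, mirroring the choice of regularizer in Follow the Regularized Leader.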
Composition of Markets with Conflicting Incentives
Abstract

Cited by 4 (0 self)
We study information revelation in scoring rule and prediction market mechanisms in settings in which traders have conflicting incentives due to opportunities to profit from the market operator’s subsequent actions. In our canonical model, an agent Alice is offered an incentive-compatible scoring rule to reveal her beliefs about a future event, but can also profit from misleading another trader Bob about her information and then making money off Bob’s error in a subsequent market. We show that, in any weak Perfect Bayesian Equilibrium of this sequence of two markets, Alice and Bob earn payoffs that are consistent with a minimax strategy of a related game. We can then characterize the equilibria in terms of an information channel: the outcome of the first scoring rule is as if Alice had only observed a noisy version of her initial signal, with the degree of noise indicating the adverse effect of the second market on the first. We provide a partial constructive characterization of when this channel will be noiseless. We show that our results on the canonical model yield insights into other settings of information extraction with conflicting incentives.
Eliciting Truthful Answers to Multiple-Choice Questions: Preliminary Report
Abstract

Cited by 4 (0 self)
Motivated by the prevalence of online questionnaires in electronic commerce, and of multiple-choice questions in such questionnaires, we consider the problem of eliciting truthful answers to multiple-choice questions from a knowledgeable respondent. Specifically, each question is a statement regarding an uncertain future event, and is multiple-choice: the respondent must select exactly one of the given answers. The principal offers a payment whose amount is a function of the answer selected and the true outcome (which the principal will eventually observe). This problem significantly generalizes recent work on truthful elicitation of distribution properties, which itself generalized a long line of work on elicitation of complete distributions. We provide necessary and sufficient conditions for the existence of payments that induce truthful answers, and give a characterization of those payments. We also study in greater detail the common case of questions with ordinal answers, and illustrate our results with several examples of practical interest.
Truthful Surveys
Abstract

Cited by 4 (0 self)
We consider the problem of truthfully sampling opinions of a population for statistical analysis purposes, such as estimating the population distribution of opinions. To obtain accurate results, the surveyor must incentivize individuals to report unbiased opinions. We present a rewarding scheme to elicit opinions that are representative of the population. In contrast with the related literature, we do not assume a specific information structure. In particular, our method does not rely on a common prior assumption.
Only valuable experts can be valued
In Proceedings of the 12th ACM Conference on Electronic Commerce, 2011
Abstract

Cited by 2 (0 self)
Suppose a principal Alice wishes to reduce her uncertainty regarding some future payoff. Consider a self-proclaimed expert Bob who may either be an informed expert knowing an exact (or approximate) distribution of a future random outcome that may affect Alice’s utility, or an uninformed expert who knows nothing more than Alice does. Alice would like to hire Bob and solicit his signal. Her goal is to incentivize an informed expert to accept the contract and reveal his knowledge while deterring an uninformed expert from accepting the contract altogether. The starting point of this work is a powerful negative result (Olszewski and Sandroni, 2007), which tells us that in the general case, for any contract which guarantees an informed expert some positive payoff, an uninformed expert (with no extra knowledge) has a strategy which guarantees him a positive payoff as well. In the face of this negative result, we re-examine the notion of an expert and conclude that knowing some hidden variable (i.e., the description of the aforementioned distribution) does not make Bob an expert, or at least not a “valuable expert”. The premise of our paper is that if Alice only tries to incentivize experts who are valuable to her decision making, then she can indeed screen them from uninformed experts. On a more technical level, we consider the case where Bob’s signal about the distribution of a future event cannot be an arbitrary distribution but rather comes from some subset P of all possible distributions. We give rather tight conditions on P (which relate to its convexity) under which screening is possible. We formalize our intuition that if these conditions are not met, then an expert is not guaranteed to be valuable. We give natural and arguably useful scenarios where such a restriction on the distribution arises.