Results 1 - 10 of 24
A new understanding of prediction markets via no-regret learning
 In ACM EC
, 2010
Abstract
Cited by 32 (10 self)
We explore the striking mathematical connections that exist between market scoring rules, cost function based prediction markets, and no-regret learning. We first show that any cost function based prediction market can be interpreted as an algorithm for the commonly studied problem of learning from expert advice by equating the set of outcomes on which bets are placed in the market with the set of experts in the learning setting, and equating trades made in the market with losses observed by the learning algorithm. If the loss of the market organizer is bounded, this bound can be used to derive an O(√T) regret bound for the corresponding learning algorithm. We then show that the class of markets with convex cost functions exactly corresponds to the class of Follow the Regularized Leader learning algorithms, with the choice of a cost function in the market corresponding to the choice of a regularizer in the learning problem. Finally, we show an equivalence between market scoring rules and prediction markets with convex cost functions. This implies both that any market scoring rule can be implemented as a cost function based market maker, and that market scoring rules can be interpreted naturally as Follow the Regularized Leader algorithms. These connections provide new insight into how it is that commonly studied markets, such as the Logarithmic Market Scoring Rule, can aggregate opinions into accurate estimates of the likelihood of future events.
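The Logarithmic Market Scoring Rule mentioned above makes the correspondence concrete: its cost function is a scaled log-sum-exp, and its instantaneous prices are the softmax of the outstanding share quantities, which is exactly the weight vector of an exponential-weights (entropically regularized Follow the Regularized Leader) learner with learning rate 1/b. A minimal Python sketch (function and parameter names are ours, not the paper's):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(q)  # shift for a stable log-sum-exp
    return b * (m / b + math.log(sum(math.exp((qi - m) / b) for qi in q)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: the gradient of C, a softmax of q/b.
    These are identical to exponential-weights learner weights."""
    m = max(q)
    exps = [math.exp((qi - m) / b) for qi in q]
    z = sum(exps)
    return [e / z for e in exps]
```

With no outstanding shares the prices are uniform, and buying shares of an outcome pushes its price up, mirroring how observed losses shift an expert's weight.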
Adapting to a Market Shock: Optimal Sequential Market-Making
Abstract
Cited by 16 (4 self)
We study the profit-maximization problem of a monopolistic market-maker who sets two-sided prices in an asset market. The sequential decision problem is hard to solve because the state space is a function. We demonstrate that the belief state is well approximated by a Gaussian distribution. We prove a key monotonicity property of the Gaussian state update which makes the problem tractable, yielding the first optimal sequential market-making algorithm in an established model. The algorithm leads to a surprising insight: an optimal monopolist can provide more liquidity than perfectly competitive market-makers in periods of extreme uncertainty, because a monopolist is willing to absorb initial losses in order to learn a new valuation rapidly so she can extract higher profits later.
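As background for the Gaussian belief-state approximation the abstract mentions, the standard conjugate Gaussian update is a few lines: the posterior mean is a precision-weighted average of the prior mean and the observation, and the posterior variance shrinks. This is an illustrative sketch of such an update, not the paper's algorithm:

```python
def gaussian_update(mu, var, obs, obs_var):
    """Conjugate Gaussian belief update after observing a noisy signal.

    mu, var:      prior mean and variance over the asset's value
    obs, obs_var: observed signal and its noise variance
    """
    k = var / (var + obs_var)   # gain: how much the observation moves the belief
    new_mu = mu + k * (obs - mu)
    new_var = (1.0 - k) * var   # variance always shrinks
    return new_mu, new_var
```

Note the monotonicity visible here: for fixed prior and noise, the posterior mean is increasing in the observation, the kind of structural property the paper exploits for tractability.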
An Optimization-Based Framework for Automated Market-Making
 EC'11
, 2011
Abstract
Cited by 14 (6 self)
We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the market institution’s bounded budget.
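The conjugate-duality construction referred to above can be sketched directly: the cost function is the convex conjugate C(q) = sup over x in the convex hull of ⟨q, x⟩ − R(x), and a maximizing x plays the role of the price vector. A toy Python version over a finite sample of hull points (the names and the finite-sampling simplification are ours, not the paper's):

```python
def conjugate_cost(q, hull_points, R):
    """Cost via conjugate duality over a finite sample of the convex hull.

    q:           vector of outstanding security quantities
    hull_points: finite list of candidate price vectors in the hull
    R:           convex regularizer (R = 0 recovers the support function)
    Returns (cost value, maximizing point)."""
    def objective(x):
        return sum(qi * xi for qi, xi in zip(q, x)) - R(x)
    best = max(hull_points, key=objective)
    return objective(best), best
```

With R = 0 and the hull sampled at the simplex vertices, this recovers max_i q_i, the worst-case payout of a complete market, which is why a nontrivial R is needed for bounded loss.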
Automated Market-Making in the Large: The Gates Hillman Prediction Market
Abstract
Cited by 8 (5 self)
We designed and built the Gates Hillman Prediction Market (GHPM) to predict the opening day of the Gates and Hillman Centers, the new computer science buildings at Carnegie Mellon University. The market ran for almost a year and attracted 169 active traders who placed almost 40,000 bets with an automated market maker. Ranging over 365 possible opening days, the market’s event partition size is the largest ever elicited in any prediction market by an order of magnitude. A market of this size required new advances, including a novel span-based elicitation interface. The results of the GHPM are important for two reasons. First, we uncovered two flaws of current automated market makers: spikiness and liquidity-insensitivity, and we develop the mathematical underpinnings of these flaws. Second, the market provides a valuable corpus of identity-linked trades. We use this data set to explore whether the market reacted to or anticipated official communications, how self-reported trader confidence had little relation to actual performance, and how trade frequencies suggest a power law distribution. Most significantly, the data enabled us to evaluate two competing hypotheses about how markets aggregate information, the Marginal Trader Hypothesis and the Hayek Hypothesis; the data strongly support the former.
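The liquidity-insensitivity flaw identified above is easy to reproduce for the LMSR: because its liquidity parameter b is fixed in advance, a trade of a given size moves the price by exactly the same amount in a fresh market as in one that has already absorbed heavy balanced volume. A small sketch for a two-outcome market (parameter values are illustrative):

```python
import math

def lmsr_price(q, b=10.0):
    """Price of outcome 0 in a two-outcome LMSR market with liquidity b."""
    return math.exp(q[0] / b) / (math.exp(q[0] / b) + math.exp(q[1] / b))

# Price impact of buying 5 shares of outcome 0 in a thin (untraded) market
# versus a thick market that has absorbed 100 balanced shares per side:
impact_thin = lmsr_price([5.0, 0.0]) - lmsr_price([0.0, 0.0])
impact_thick = lmsr_price([105.0, 100.0]) - lmsr_price([100.0, 100.0])
```

Because LMSR prices depend only on the difference q[0] − q[1], the two impacts coincide; market depth never grows with trading activity, which is precisely the insensitivity the GHPM exposed.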
Decision Rules and Decision Markets
Abstract
Cited by 7 (0 self)
We explore settings where a principal must make a decision about which action to take to achieve a desired outcome. The principal elicits the probability of achieving the outcome by following each action from a self-interested (but decision-agnostic) expert. We prove results about the relation between the principal’s decision rule and the scoring rules that determine the expert’s payoffs. For the most natural decision rule (where the principal takes the action with highest success probability), we prove that no symmetric scoring rule, nor any of Winkler’s asymmetric scoring rules, has desirable incentive properties. We characterize the set of differentiable scoring rules with desirable incentive properties and construct one. We then consider decision markets, where the role of a single expert is replaced by multiple agents that interact by trading with an automated market maker. We prove a surprising impossibility for this setting: an agent can always benefit from exaggerating the success probability of a suboptimal action. To mitigate this, we construct automated market makers that minimize manipulability. Finally, we consider two alternative decision market designs. We prove the first, in which all outcomes live in the same probability universe, has poor incentive properties. The second, in which the experts trade in the probability of the outcome occurring unconditionally, exhibits a new kind of no-trade phenomenon.
Efficient market making via convex optimization, and a connection to online learning
 ACM Transactions on Economics and Computation. To Appear
, 2012
Abstract
Cited by 6 (2 self)
We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the market institution’s bounded budget. Although our framework was designed with the goal of deriving efficient automated market makers for markets with very large outcome spaces, this framework also provides new insights into the relationship between market design and machine learning, and into the complete market setting. Using our framework, we illustrate the mathematical parallels between cost function based markets and online learning and establish a correspondence between cost function based markets and market scoring rules for complete markets.
Information elicitation for decision making
, 2011
Abstract
Cited by 5 (3 self)
Proper scoring rules, particularly when used as the basis for a prediction market, are powerful tools for eliciting and aggregating beliefs about events such as the likely outcome of an election or sporting event. Such scoring rules incentivize a single agent to reveal her true beliefs about the event. Othman and Sandholm [16] introduced the idea of a decision rule to examine these problems in contexts where the information being elicited is conditional on some decision alternatives. For example, “What is the probability of having ten million viewers if we choose to air new television show X? What if we choose Y?” Since only one show can actually air in a slot, only the results under the chosen alternative can ever be observed. Othman and Sandholm developed proper scoring rules (and thus decision markets) for a single, deterministic decision rule: always select the action with the greatest probability of success. In this work we significantly generalize their results, developing scoring rules for other deterministic decision rules, randomized decision rules, and situations where there may be more than two outcomes (e.g. less than a million viewers, more than one but less than ten, or more than ten million).
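The incentive property that makes a scoring rule "proper", as used above, is that an expert's expected score is maximized by reporting her true belief. The quadratic (Brier-style) rule illustrates this in a few lines; this is a generic textbook example, not one of the constructions from the paper:

```python
def quadratic_score(report, outcome):
    """Brier-style quadratic score for a binary event (higher is better).
    report: reported probability of the event; outcome: 1 if it occurred, else 0."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, true_p):
    """Expected score of a report when the event truly occurs with prob true_p."""
    return (true_p * quadratic_score(report, 1)
            + (1.0 - true_p) * quadratic_score(report, 0))
```

Differentiating the expectation gives 2(true_p − report), so the unique maximizer is report = true_p: truthful reporting strictly dominates both under- and over-statement.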
Comparing Prediction Market Structures, With an Application to Market Making
, 2010
Abstract
Cited by 5 (0 self)
Ensuring sufficient liquidity is one of the key challenges for designers of prediction markets. Various market making algorithms have been proposed in the literature and deployed in practice, but there has been little effort to evaluate their benefits and disadvantages in a systematic manner. We introduce a novel experimental design for comparing market structures in live trading that ensures fair comparison between two different microstructures with the same trading population. Participants trade on outcomes related to a two-dimensional random walk that they observe on their computer screens. They can simultaneously trade in two markets, corresponding to the independent horizontal and vertical random walks. We use this experimental design to compare the popular inventory-based logarithmic market scoring rule (LMSR) market maker and a new information-based Bayesian market maker (BMM). Our experiments reveal that BMM can offer significant benefits in terms of price stability and expected loss when controlling for liquidity; the caveat is that, unlike LMSR, BMM does not guarantee bounded loss. Our investigation also elucidates some general properties of market makers in prediction markets. In particular, there is an inherent tradeoff between adaptability to market shocks and convergence during market equilibrium.
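The bounded-loss guarantee of LMSR mentioned above (and lacked by BMM) is the classic b·ln(n) bound over n outcomes: whatever the trading history, the market maker's worst-case payout never exceeds that constant. A quick numerical check (the quantities are illustrative):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(q)
    return b * (m / b + math.log(sum(math.exp((x - m) / b) for x in q)))

def worst_case_loss(q, b, n):
    """Market maker's loss if outcome i wins is q[i] - (C(q) - C(0));
    the worst case over outcomes is provably at most b * ln(n)."""
    c0 = lmsr_cost([0.0] * n, b)
    return max(q[i] - (lmsr_cost(q, b) - c0) for i in range(n))
```

The bound holds for any trade sequence because the collected cost C(q) − C(0) always covers all but b·ln(n) of the winning payout, which is why LMSR's loss guarantee comes at the price of the fixed-liquidity behavior discussed above.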
Automated Market Makers That Enable New Settings: Extending Constant-Utility Cost Functions
Abstract
Cited by 4 (1 self)
Automated market makers are algorithmic agents that provide liquidity in electronic markets. We construct two new automated market makers that each solve an open problem of theoretical and practical interest. First, we formulate a market maker that has bounded loss over separable measure spaces. This opens up an exciting new set of domains for prediction markets, including markets on locations and markets where events correspond to the natural numbers. Second, by shifting profits into liquidity, we create a market maker that has bounded loss in addition to a bid/ask spread that gets arbitrarily small with trading volume. This market maker matches important attributes of real human market makers and suggests a path forward for integrating automated market making agents into markets with real money.
Crowd IQ: aggregating opinions to boost performance
 In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
, 2012
Abstract
Cited by 3 (2 self)
We show how the quality of decisions based on the aggregated opinions of the crowd can be conveniently studied using a sample of individual responses to a standard IQ questionnaire. We aggregated the responses to the IQ questionnaire using simple majority voting and a machine learning approach based on a probabilistic graphical model. The score for the aggregated questionnaire, Crowd IQ, serves as a quality measure of decisions based on aggregating opinions, which also allows quantifying individual and crowd performance on the same scale. We show that Crowd IQ grows quickly with the size of the crowd but saturates, and that for small homogeneous crowds the Crowd IQ significantly exceeds the IQ of even their most intelligent member. We investigate alternative ways of aggregating the responses and the impact of the aggregation method on the resulting Crowd IQ. We also discuss Contextual IQ, a method of quantifying the individual participant’s contribution to the Crowd IQ based on the Shapley value from cooperative game theory.
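Simple majority voting, the baseline aggregation method used above, can be sketched in a few lines (the helper names are ours; the paper's probabilistic graphical-model aggregator is not shown):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate one question's answers by plurality; Counter.most_common
    breaks ties by first-seen order among equal counts."""
    return Counter(answers).most_common(1)[0][0]

def crowd_answers(responses):
    """responses: one answer list per participant, all covering the same
    questions in the same order. Returns the crowd's aggregated answer sheet,
    which is then scored like an individual's to obtain a Crowd IQ."""
    return [majority_vote(question_answers) for question_answers in zip(*responses)]
```

Scoring the aggregated sheet against the questionnaire's answer key puts the crowd and its individual members on the same IQ scale, which is what allows the saturation and crowd-vs-best-member comparisons described above.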