Results 1 - 7 of 7
Evaluating and combining subjective probability estimates
Journal of Behavioral Decision Making, 1997
Abstract

Cited by 18 (4 self)
This paper concerns the evaluation and combination of subjective probability estimates for categorical events. We argue that the appropriate criterion for evaluating individual and combined estimates depends on the type of uncertainty the decision maker seeks to represent, which in turn depends on his or her model of the event space. Decision makers require accurate estimates in the presence of aleatory uncertainty about exchangeable events, diagnostic estimates given epistemic uncertainty about unique events, and some combination of the two when the events are not necessarily unique, but the best equivalence class definition for exchangeable events is not apparent. Following a brief review of the mathematical and empirical literature on combining judgments, we present an approach to the topic that derives from (1) a weak cognitive model of the individual that assumes subjective estimates are a function of underlying judgment perturbed by random error and (2) a classification of judgment contexts in terms of the underlying information structure. In support of our developments, we present new analyses of two sets of subjective probability estimates, one of exchangeable and the other of unique events. As predicted, mean estimates were more accurate than the individual values in the first case and more diagnostic in ...
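The paper's core prediction, that averaging noisy estimates of exchangeable events improves accuracy, can be illustrated with a small simulation of the error-perturbed judgment model (the noise distribution and all parameters below are illustrative assumptions, not the authors'):

```python
import random

random.seed(0)

def brier(estimates, outcomes):
    """Mean squared error between probability estimates and 0/1 outcomes."""
    return sum((e - o) ** 2 for e, o in zip(estimates, outcomes)) / len(outcomes)

n_events, n_judges = 2000, 10

# Hypothetical "true" probabilities and realized outcomes for exchangeable events.
true_p = [random.uniform(0.1, 0.9) for _ in range(n_events)]
outcomes = [1 if random.random() < p else 0 for p in true_p]

# Weak cognitive model: each judge reports the underlying judgment
# perturbed by random error, clipped back into [0, 1].
judges = [[min(1.0, max(0.0, p + random.gauss(0, 0.15))) for p in true_p]
          for _ in range(n_judges)]

# Mean (equal-weight linear pool) of the judges' estimates for each event.
mean_est = [sum(j[k] for j in judges) / n_judges for k in range(n_events)]

avg_individual_score = sum(brier(j, outcomes) for j in judges) / n_judges
print(brier(mean_est, outcomes) < avg_individual_score)  # averaging helps: True
```

Because the Brier score is convex in the estimate, the score of the mean estimate can never exceed the mean of the individual scores, and it is strictly lower whenever the judges disagree.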
Information markets vs. opinion pools: An empirical comparison
In Proceedings of the Sixth ACM Conference on Electronic Commerce (EC’05), 2005
Abstract

Cited by 14 (7 self)
In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people’s subjective probability judgments on 2003 US National Football League games and compare with the “market probabilities” given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide predictions as accurate as pooled expert assessments. In screening pooled expert predictions, we find that the arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve the accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets and the relative merits of selecting among various opinion pooling methods.
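The linear and logarithmic pooling functions compared in the study can be sketched as follows (equal weights are assumed here for illustration):

```python
import math

def linear_pool(probs, weights=None):
    """LinOP: weighted arithmetic mean of probability vectors."""
    w = weights or [1.0 / len(probs)] * len(probs)
    return [sum(wi * p[k] for wi, p in zip(w, probs))
            for k in range(len(probs[0]))]

def log_pool(probs, weights=None):
    """LogOP: normalized weighted geometric mean of probability vectors."""
    w = weights or [1.0 / len(probs)] * len(probs)
    raw = [math.prod(p[k] ** wi for wi, p in zip(w, probs))
           for k in range(len(probs[0]))]
    z = sum(raw)  # renormalize so the pooled values form a probability vector
    return [r / z for r in raw]

# Three experts' probability vectors over a binary event (win, lose):
experts = [[0.7, 0.3], [0.8, 0.2], [0.9, 0.1]]
print(linear_pool(experts))  # ≈ [0.8, 0.2]
print(log_pool(experts))     # pushes further toward the favored outcome
```

The renormalization in LogOP is what keeps its output a legal probability vector; the geometric averaging is also what drives its estimates toward the extremes, i.e., the "bolder" predictions the paper reports.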
Aggregating Learned Probabilistic Beliefs
2001
Abstract

Cited by 14 (0 self)
We consider the task of aggregating the beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework in which we assume that nature generates samples from a “true” distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts’ distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice.
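The framing above, in which the ideal aggregate is the distribution learned from the combined sample sets, can be checked directly for empirical categorical distributions: LinOP with weights proportional to sample sizes recovers exactly the combined-sample estimate. A minimal sketch (the data are made up):

```python
from collections import Counter

def empirical(sample, support):
    """Empirical (maximum-likelihood) distribution over a fixed support."""
    counts = Counter(sample)
    return [counts[x] / len(sample) for x in support]

support = ["a", "b", "c"]
subset1 = ["a", "a", "b", "c"]              # expert 1 observes 4 samples
subset2 = ["b", "b", "c", "c", "c", "a"]    # expert 2 observes 6 samples

p1, p2 = empirical(subset1, support), empirical(subset2, support)
n1, n2 = len(subset1), len(subset2)

# LinOP with weights proportional to each expert's sample size ...
linop = [(n1 * a + n2 * b) / (n1 + n2) for a, b in zip(p1, p2)]
# ... matches the distribution learned from the pooled samples.
combined = empirical(subset1 + subset2, support)
print(all(abs(x - y) < 1e-12 for x, y in zip(linop, combined)))  # True
```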
A Market Framework for Pooling Opinions
1998
Abstract

Cited by 4 (4 self)
Consider a group of Bayesians, each with a subjective probability distribution over a set of uncertain events. An opinion pool derives a single consensus distribution over the events, representative of the group as a whole. Several pooling functions have been proposed, each sensible under particular assumptions or measures. Many researchers over many years have failed to form a consensus on which method is best. We propose a market-based pooling procedure and analyze its properties. Participants bet on securities, each paying off contingent on an uncertain event, so as to maximize their own expected utilities. The consensus probability of each event is defined as the corresponding security's equilibrium price. The market framework provides explicit monetary incentives for participation and honesty, and allows agents to maintain individual rationality and limited privacy. “No arbitrage” arguments ensure that the equilibrium prices form legal probabilities. We show that, when events are ...
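For the special case of logarithmic-utility traders and a single Arrow security, the equilibrium price has a known closed form: a log-utility trader spends a fraction of wealth equal to his belief on the security, so market clearing yields the wealth-weighted linear opinion pool. A sketch of that special case only (it is not this paper's general analysis, and the numbers are illustrative):

```python
def equilibrium_price(beliefs, wealths):
    """Equilibrium price of a security paying 1 if the event occurs,
    assuming every trader has log utility: trader i spends beliefs[i] *
    wealths[i] on the event security, so the market-clearing price is
    the wealth-weighted average of beliefs (a weighted LinOP)."""
    total_wealth = sum(wealths)
    return sum(b * w for b, w in zip(beliefs, wealths)) / total_wealth

beliefs = [0.6, 0.8, 0.3]
wealths = [100.0, 50.0, 50.0]
print(equilibrium_price(beliefs, wealths))  # 0.575
```

Note that the resulting price always lies between the lowest and highest belief, so it is automatically a legal probability, consistent with the paper's no-arbitrage argument.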
Learning Performance of Prediction Markets with Kelly Bettors
2012
Abstract

Cited by 3 (0 self)
In evaluating prediction markets (and other crowd-prediction mechanisms), investigators have repeatedly observed a so-called wisdom-of-crowds effect, which can be roughly summarized as follows: the average of participants performs much better than the average participant. The market price, an average or at least an aggregate of traders’ beliefs, offers a better estimate than almost any individual trader’s opinion. In this paper, we ask a stronger question: how does the market price compare to the best trader’s belief, not just the average trader’s? We measure the market’s worst-case log regret, a notion common in machine learning theory. To arrive at a meaningful answer, we need to assume something about how traders behave. We suppose that every trader optimizes according to the Kelly criterion, a strategy that ...
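The Kelly criterion assumed here has a simple closed form for a binary security: a trader with belief b facing price q who maximizes expected log wealth stakes the fraction f* = (b - q) / (1 - q) when going long. A small sketch under that standard formulation (the function name is illustrative, not from the paper):

```python
def kelly_fraction(belief, price):
    """Fraction of wealth a log-utility (Kelly) bettor stakes on a binary
    security trading at `price`, given subjective probability `belief`
    that it pays 1. Buying a unit at price q returns 1/q if the event
    occurs; maximizing E[log wealth] gives f* = (b - q) / (1 - q).
    A negative return value means betting against the event (buying the
    complementary security at price 1 - q)."""
    b, q = belief, price
    if b > q:                      # security looks cheap: go long
        return (b - q) / (1 - q)
    return -(q - b) / q            # looks expensive: bet against

print(kelly_fraction(0.7, 0.5))   # ≈ 0.4: stake about 40% of wealth
```

A trader whose belief equals the price stakes nothing, which is why, under Kelly behavior, prices settle where the marginal trader is indifferent.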
Modeling Protein Secondary Structure by Products of Dependent Experts
Master of Mathematics in Computer Science, 2001
Abstract

Cited by 1 (0 self)
A phenomenon as complex as protein folding requires a complex model to approximate it. This thesis presents a bottom-up approach for building complex probabilistic models of protein secondary structure by incorporating multiple information sources, which we call experts. Expert opinions are represented by probability distributions over the set of possible structures. Bayesian treatment of a group of experts results in a consensus opinion that combines the experts’ probability distributions using the operators of normalized product, quotient, and exponentiation. The expression of this consensus opinion simplifies to a product of the expert opinions under two assumptions: (1) balanced training of experts, i.e., a uniform prior probability over all structures, and (2) conditional independence between expert opinions, given the structure. This research also studies how Markov chains and hidden Markov models may be used to ...
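The normalized-product consensus described above is straightforward to sketch for discrete distributions over candidate structures (the three-class example below is illustrative, not from the thesis):

```python
def product_of_experts(experts):
    """Consensus ∝ product of the experts' probabilities for each structure,
    renormalized. Valid under the two assumptions above: a uniform prior
    over structures and conditionally independent expert opinions."""
    n_structures = len(experts[0])
    raw = [1.0] * n_structures
    for dist in experts:
        for s in range(n_structures):
            raw[s] *= dist[s]
    z = sum(raw)  # renormalize so the consensus is a probability vector
    return [r / z for r in raw]

# Two experts over three candidate classes (e.g., helix, sheet, coil):
e1 = [0.6, 0.3, 0.1]
e2 = [0.5, 0.1, 0.4]
print(product_of_experts([e1, e2]))
```

Because the product multiplies agreement and suppresses disagreement, any structure that one expert rules out (probability 0) is ruled out of the consensus entirely.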
Aggregation of Expert Probability Judgments
Abstract
... probability distributions from experts in risk analysis,” Risk Analysis, 19, 187-203. This chapter is concerned with the aggregation of probability distributions in decision and risk analysis. Experts often provide valuable information regarding important uncertainties in decision and risk analyses because of the limited availability of “hard data” to use in those analyses. Multiple experts are often consulted in order to obtain as much information as possible, leading to the problem of how to combine or aggregate their information. Information may also be obtained from other sources such as forecasting techniques or scientific models. Because uncertainties are typically represented in terms of probability distributions, we consider expert and other information in terms of probability distributions. We discuss a variety of models that lead to specific combination methods. The output of these methods is a “combined probability distribution,” which can be viewed as representing a summary of the current state of information regarding the uncertainty of interest. After presenting the models and methods, we discuss empirical evidence on the performance of the methods. In the conclusion we highlight important ...
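As one concrete instance of such combination methods for continuous uncertainties: the normalized product (a logarithmic pool with unit weights) of Gaussian expert distributions is again Gaussian, with precision equal to the sum of the experts' precisions and a precision-weighted mean. A minimal sketch (the numbers are illustrative):

```python
def combine_gaussians(means, variances):
    """Normalized product of Gaussian expert densities N(mean, variance).
    The result is Gaussian with precision = sum of expert precisions and
    mean = precision-weighted average of expert means."""
    precisions = [1.0 / v for v in variances]
    tau = sum(precisions)
    mu = sum(t * m for t, m in zip(precisions, means)) / tau
    return mu, 1.0 / tau

# Two equally confident experts with different location estimates:
mu, var = combine_gaussians([10.0, 14.0], [4.0, 4.0])
print(mu, var)  # 12.0 2.0
```

Note how the combined variance (2.0) is smaller than either expert's (4.0): multiplicative pooling treats the experts as independent evidence and sharpens the aggregate, whereas a linear pool of the same two experts would instead be a wider bimodal mixture.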