Results 1–10 of 21
Eliciting Informative Feedback: The Peer-Prediction Method
Management Science, 2005
doi: 10.1287/mnsc.1050.0379
Eliciting Properties of Probability Distributions
In Proceedings of the Ninth ACM Conference on Electronic Commerce, 2008
Abstract

Cited by 18 (4 self)
We investigate the problem of incentivizing an expert to truthfully reveal probabilistic information about a random event. Probabilistic information consists of one or more properties, which are any real-valued functions of the distribution, such as the mean and variance. Not all properties can be elicited truthfully. We provide a simple characterization of elicitable properties, and describe the general form of the associated payment functions that induce truthful revelation. We then consider sets of properties, and observe that all properties can be inferred from sets of elicitable properties. This suggests the concept of elicitation complexity for a property, the size of the smallest set implying the property.
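The mean is the canonical elicitable property: the quadratic payment s(r, x) = -(r - x)^2 is maximized in expectation exactly at r = E[x]. A minimal numeric check (the distribution here is illustrative, not from the paper):

```python
# An arbitrary discrete distribution over outcomes 0..3 (our example).
outcomes = [0, 1, 2, 3]
probs = [0.1, 0.4, 0.3, 0.2]
mean = sum(o * p for o, p in zip(outcomes, probs))  # 1.6

def expected_payment(report):
    """Expected quadratic payment -(report - x)^2 under the distribution."""
    return sum(p * -(report - x) ** 2 for x, p in zip(outcomes, probs))

# Grid-search over reports; the maximizer coincides with the true mean,
# since E[-(r - x)^2] = -(r - mean)^2 - variance.
grid = [i / 100 for i in range(0, 301)]
best = max(grid, key=expected_payment)
print(best, mean)
```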
Information, Divergence and Risk for Binary Experiments
Journal of Machine Learning Research, 2009
Abstract

Cited by 17 (6 self)
We unify f-divergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROC curves and statistical information. We do this by systematically studying integral and variational representations of these various objects, and in so doing identify their primitives, all of which are related to cost-sensitive binary classification. As well as developing relationships between generative and discriminative views of learning, the new machinery leads to tight and more general surrogate regret bounds and generalised Pinsker inequalities relating f-divergences to variational divergence. The new viewpoint also illuminates existing algorithms: it provides a new derivation of Support Vector Machines in terms of divergences and relates Maximum Mean Discrepancy to Fisher Linear Discriminants.
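For a concrete handle on the objects being unified: taking f(t) = t log t, the f-divergence I_f(P||Q) = sum_x q(x) f(p(x)/q(x)) is the KL divergence, and the classical Pinsker inequality bounds the variational (total variation) divergence by sqrt(KL/2). A stdlib check on an arbitrary pair of distributions (the distributions are our example):

```python
import math

p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]

def f_divergence(p, q, f):
    """I_f(P||Q) = sum_x q(x) * f(p(x)/q(x))."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

kl = f_divergence(p, q, lambda t: t * math.log(t))        # KL divergence
tv = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))       # total variation

# Pinsker: TV <= sqrt(KL / 2)
print(tv, math.sqrt(kl / 2))
```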
Self-financed wagering mechanisms for forecasting
 EC
Abstract

Cited by 11 (5 self)
We examine a class of wagering mechanisms designed to elicit truthful predictions from a group of people without requiring any outside subsidy. We propose a number of desirable properties for wagering mechanisms, identifying one mechanism, weighted-score wagering, that satisfies all of the properties. Moreover, we show that a single-parameter generalization of weighted-score wagering is the only mechanism that satisfies these properties. We explore some variants of the core mechanism based on practical considerations.
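The self-financing constraint is easy to verify numerically. The sketch below uses a stake-weighted payoff of our own simplified form, net payoff = stake times (own score minus the stake-weighted average score), with a Brier score normalized to [0, 1]; the paper's exact parametrization may differ, but budget balance holds for this form by construction:

```python
def brier(p, outcome):
    """Normalized quadratic (Brier) score in [0, 1] for a binary event."""
    return 1 - (p - outcome) ** 2

def weighted_score_payoffs(reports, stakes, outcome):
    """Net payoff per player: stake * (own score - stake-weighted mean score)."""
    scores = [brier(p, outcome) for p in reports]
    total = sum(stakes)
    avg = sum(m * s for m, s in zip(stakes, scores)) / total
    return [m * (s - avg) for m, s in zip(stakes, scores)]

payoffs = weighted_score_payoffs([0.9, 0.4, 0.6], [10, 5, 20], outcome=1)
print(payoffs, sum(payoffs))  # net payoffs sum to zero: no outside subsidy
```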
Interpreting and Unifying Outlier Scores
Abstract

Cited by 6 (2 self)
Outlier scores provided by different outlier models differ widely in their meaning, range, and contrast, and hence are not easily comparable or interpretable. We propose a unification of outlier scores provided by various outlier models and a translation of the arbitrary “outlier factors” to values in the range [0, 1], interpretable as the probability of a data object being an outlier. As an application, we show that this unification facilitates enhanced ensembles for outlier detection.
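One normalization in this spirit is Gaussian scaling: standardize the raw scores by their sample mean and deviation, map through the Gaussian error function, and clip at zero so objects scoring below average get outlier probability 0. A sketch (the raw scores are made up; the paper also develops model-specific regularizations before this step):

```python
import math
import statistics

def gaussian_scaling(scores):
    """Map raw outlier scores to [0, 1] via the Gaussian error function."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    return [max(0.0, math.erf((s - mu) / (sigma * math.sqrt(2))))
            for s in scores]

raw = [1.2, 1.0, 1.1, 0.9, 5.0]   # last object scores high under some model
print(gaussian_scaling(raw))       # inliers map to 0, the outlier near 1
```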
Eliciting Truthful Answers to Multiple-Choice Questions: Preliminary Report
Abstract

Cited by 4 (0 self)
Motivated by the prevalence of online questionnaires in electronic commerce, and of multiple-choice questions in such questionnaires, we consider the problem of eliciting truthful answers to multiple-choice questions from a knowledgeable respondent. Specifically, each question is a statement regarding an uncertain future event, and is multiple-choice: the respondent must select exactly one of the given answers. The principal offers a payment whose amount is a function of the answer selected and the true outcome (which the principal will eventually observe). This problem significantly generalizes recent work on truthful elicitation of distribution properties, which itself generalized a long line of work on elicitation of complete distributions. We provide necessary and sufficient conditions for the existence of payments that induce truthful answers, and give a characterization of those payments. We also study in greater detail the common case of questions with ordinal answers, and illustrate our results with several examples of practical interest.
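A toy instance of such a payment (our illustration, not the paper's construction): for the question "which outcome is most likely?", paying 1 when the selected answer matches the realized outcome and 0 otherwise makes reporting the mode of one's belief optimal, since the expected payment for answer a is exactly Pr(a):

```python
def expected_payment(answer, belief):
    """Payment is 1{answer == outcome}; its expectation is belief[answer]."""
    return belief[answer]

# A hypothetical belief over three answers to "what will tomorrow bring?"
belief = {"rain": 0.5, "snow": 0.2, "sun": 0.3}
best_answer = max(belief, key=lambda a: expected_payment(a, belief))
print(best_answer)  # "rain", the most likely outcome under the belief
```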
Quantiles as optimal point predictors
Abstract

Cited by 4 (2 self)
The loss function plays a central role in the theory and practice of forecasting. If the loss is quadratic, the mean of the predictive distribution is the unique optimal point predictor. If the loss is linear, any median is an optimal point forecast. The title of the paper refers to the simple, possibly surprising fact that quantiles arise as optimal point predictors under a general class of economically relevant loss functions, to which we refer as generalized piecewise linear (GPL). The level of the quantile depends on a generic asymmetry parameter that reflects the possibly distinct costs of underprediction and overprediction. A loss function for which quantiles are optimal point predictors is necessarily GPL, similarly to the classical fact that a loss function for which the mean is optimal is necessarily of the Bregman type. We prove general versions of these results that apply on any decision-observation domain and rest on weak assumptions. The empirical relevance of the choices in the transition from the predictive distribution to the point forecast is illustrated on the Bank of England’s density forecasts of United Kingdom inflation rates, and probabilistic predictions of wind energy resources in the Pacific Northwest. Key words and phrases: asymmetric loss function; Bayes predictor; density forecast; mean; median; mode; optimal point predictor; quantile; statistical decision theory
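The canonical GPL example is the pinball loss, L_a(r, x) = a(x - r) for x >= r and (1 - a)(r - x) otherwise; its average over a sample is minimized at the empirical a-quantile. A quick check on arbitrary data (the data and level are illustrative):

```python
def pinball(report, x, alpha):
    """GPL / pinball loss: asymmetric absolute error at level alpha."""
    return alpha * (x - report) if x >= report else (1 - alpha) * (report - x)

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
alpha = 0.8

def avg_loss(r):
    return sum(pinball(r, x, alpha) for x in data) / len(data)

# Minimize the average loss over the observed values: the minimizer is
# the empirical 0.8-quantile of the data (here, 5).
best = min(data, key=avg_loss)
print(best)
```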
Truthful Surveys
Abstract

Cited by 4 (0 self)
We consider the problem of truthfully sampling opinions of a population for statistical analysis purposes, such as estimating the population distribution of opinions. To obtain accurate results, the surveyor must incentivize individuals to report unbiased opinions. We present a rewarding scheme to elicit opinions that are representative of the population. In contrast with the related literature, we do not assume a specific information structure. In particular, our method does not rely on a common prior assumption.
Elicitation and evaluation of statistical forecasts
2010
Abstract

Cited by 4 (0 self)
This paper studies mechanisms for eliciting and evaluating statistical forecasts. Nature draws a state at random from a given state space, according to some distribution p. Prior to Nature’s move, a forecaster, who knows p, provides a prediction for a given statistic of p. The mechanism defines the forecaster’s payoff as a function of the prediction and the subsequently realized state. When the statistic is continuous with a continuum of values, the payoffs that provide strict incentives to the forecaster exist if and only if the statistic partitions the set of distributions into convex subsets. When the underlying state space is finite, and the statistic takes values in a finite set, these payoffs exist if and only if the partition forms a linear cross-section of a Voronoi diagram, that is, if the partition forms a power diagram, a stronger condition than convexity. In both cases, the payoffs can be fully characterized essentially as weighted averages of base functions. Preliminary versions appear in the proceedings of the 9th and 10th ACM Conferences on Electronic Commerce.
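The convexity condition is easy to illustrate in the finite case (the distributions below are our own examples): the level sets of the mean are closed under mixtures, while those of the variance are not, which is why the mean is elicitable on its own and the variance is not:

```python
def mean(p, xs):
    """Mean of a distribution p over support xs."""
    return sum(pi * x for pi, x in zip(p, xs))

def variance(p, xs):
    m = mean(p, xs)
    return sum(pi * (x - m) ** 2 for pi, x in zip(p, xs))

xs = [0, 1, 2]
p1 = [0.5, 0.5, 0.0]    # mean 0.5, variance 0.25
p2 = [0.0, 0.5, 0.5]    # mean 1.5, variance 0.25
mix = [0.5 * a + 0.5 * b for a, b in zip(p1, p2)]

# Mean level sets are convex: mixing equal-mean distributions keeps the mean.
p3 = [0.75, 0.0, 0.25]  # mean 0.5, same as p1
mix_mean = [0.5 * a + 0.5 * b for a, b in zip(p1, p3)]
print(mean(mix_mean, xs))  # still 0.5

# Variance level sets are not: p1 and p2 both have variance 0.25,
# but their 50/50 mixture has variance 0.5.
print(variance(p1, xs), variance(p2, xs), variance(mix, xs))
```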
Combining Probability Forecasts
2008
Abstract

Cited by 3 (0 self)
Linear pooling is by far the most popular method for combining probability forecasts. However, any nontrivial weighted average of two or more distinct, calibrated probability forecasts is necessarily uncalibrated and lacks sharpness. In view of this, linear pooling requires recalibration, even in the ideal case in which the individual forecasts are calibrated. Toward this end, we propose a beta-transformed linear opinion pool (BLP) for the aggregation of probability forecasts from distinct, calibrated or uncalibrated sources. The BLP method fits an optimal nonlinearly recalibrated forecast combination, by compositing a beta transform and the traditional linear opinion pool. The technique is illustrated in a simulation example and in a case study on statistical and National Weather Service probability of precipitation forecasts.
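The loss of sharpness under linear pooling shows up already in a toy simulation (the signal model below is our own, not the paper's case study): two forecasters observe independent standard normal signals X1 and X2, the event is Y = 1{X1 + X2 > 0}, and forecaster i's calibrated probability is the normal CDF of X_i. Averaging the two pulls mass toward 1/2, so the pooled forecast has lower variance, i.e. less sharpness, than either component:

```python
import math
import random
import statistics

random.seed(42)

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p1, p2, pooled = [], [], []
for _ in range(20000):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    a, b = phi(x1), phi(x2)      # each forecaster's calibrated probability
    p1.append(a)
    p2.append(b)
    pooled.append(0.5 * (a + b))  # equal-weight linear opinion pool

# Sharpness: spread of the forecasts around the base rate 1/2.
# The pool's variance is roughly half that of either component.
print(statistics.pvariance(p1), statistics.pvariance(pooled))
```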