Results 1–10 of 25
Strictly Proper Scoring Rules, Prediction, and Estimation
, 2007
Abstract

Cited by 143 (17 self)
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical, and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation. A case study on probabilistic weather forecasts in the North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile estimation.
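The propriety property described in the abstract can be checked numerically. The sketch below uses a hypothetical three-outcome distribution and the logarithmic score; all numbers are made up for illustration.

```python
import numpy as np

# Hypothetical true distribution F over three outcomes
true_p = np.array([0.2, 0.5, 0.3])

def expected_log_score(forecast, truth):
    """Expected logarithmic score E_F[log q_Y] when Y ~ truth."""
    return float(np.sum(truth * np.log(forecast)))

honest = expected_log_score(true_p, true_p)                     # report F
hedged = expected_log_score(np.array([0.3, 0.4, 0.3]), true_p)  # report G != F
assert honest > hedged  # propriety: reporting F maximizes the expected score
```

The inequality is just Gibbs' inequality, which is what makes the logarithmic score strictly proper: any G ≠ F earns a strictly lower expected score.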
A new understanding of prediction markets via no-regret learning
 In ACM EC
, 2010
Abstract

Cited by 30 (10 self)
We explore the striking mathematical connections that exist between market scoring rules, cost function based prediction markets, and no-regret learning. We first show that any cost function based prediction market can be interpreted as an algorithm for the commonly studied problem of learning from expert advice by equating the set of outcomes on which bets are placed in the market with the set of experts in the learning setting, and equating trades made in the market with losses observed by the learning algorithm. If the loss of the market organizer is bounded, this bound can be used to derive an O(√T) regret bound for the corresponding learning algorithm. We then show that the class of markets with convex cost functions exactly corresponds to the class of Follow the Regularized Leader learning algorithms, with the choice of a cost function in the market corresponding to the choice of a regularizer in the learning problem. Finally, we show an equivalence between market scoring rules and prediction markets with convex cost functions. This implies both that any market scoring rule can be implemented as a cost function based market maker, and that market scoring rules can be interpreted naturally as Follow the Regularized Leader algorithms. These connections provide new insight into how it is that commonly studied markets, such as the Logarithmic Market Scoring Rule, can aggregate opinions into accurate estimates of the likelihood of future events.
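A minimal sketch of a cost function based market maker of the kind discussed here, instantiated with the Logarithmic Market Scoring Rule; the liquidity parameter `b` and the trade sizes are hypothetical. Prices are the gradient of the cost function, and a trader pays the change in cost.

```python
import numpy as np

b = 10.0  # hypothetical liquidity parameter

def cost(q):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = np.max(q)
    return float(m + b * np.log(np.sum(np.exp((q - m) / b))))

def prices(q):
    """Instantaneous prices: the gradient of C, a softmax of q / b."""
    z = np.exp((q - np.max(q)) / b)
    return z / z.sum()

q = np.zeros(3)                      # no outstanding shares yet
p0 = prices(q)                       # uniform prices, sum to 1
trade = np.array([5.0, 0.0, 0.0])    # trader buys 5 shares on outcome 0
payment = cost(q + trade) - cost(q)  # cost of the trade
p1 = prices(q + trade)               # price on outcome 0 has risen
```

The organizer's worst-case loss under this cost function is bounded (by b log n for n outcomes), which is the bounded-loss condition the abstract converts into a regret bound.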
Probabilistic inference for future climate using an ensemble of climate model evaluations
 Climatic Change
, 2007
Abstract

Cited by 23 (8 self)
This paper describes an approach to computing probabilistic assessments of future climate, using a climate model. It clarifies the nature of probability in this context, and illustrates the kinds of judgements that must be made in order for such a prediction to be consistent with the probability calculus. The climate model is seen as a tool for making probabilistic statements about climate itself, necessarily involving an assessment of the model’s imperfections. A climate event, such as a 2°C increase in global mean temperature, is identified with a region of ‘climate-space’, and the ensemble of model evaluations is used within a numerical integration designed to estimate the probability assigned to that region.
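The ensemble-integration idea can be sketched in a few lines. The ensemble below is simulated, not taken from any climate model: each member stands in for one model evaluation's global-mean temperature change.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ensemble of model evaluations: global-mean temperature
# changes in degrees Celsius, here drawn from a made-up distribution.
ensemble = rng.normal(loc=1.8, scale=0.6, size=10_000)

# The event "at least a 2 degree C increase" is a region of climate-space;
# its probability is estimated by the (equally weighted, for simplicity)
# fraction of ensemble members falling in that region.
p_event = float(np.mean(ensemble >= 2.0))
```

In the paper's setting the members would carry weights reflecting judgements about model imperfection, so the plain average would be replaced by a weighted one.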
Subjective Bayesian Analysis: Principle and practice
 BAYESIAN ANALYSIS
, 2006
Abstract

Cited by 20 (0 self)
We address the position of subjectivism within Bayesian statistics. We argue, first, that the subjectivist Bayes approach is the only feasible method for tackling many important practical problems. Second, we describe the essential role of the subjectivist approach in scientific analysis. Third, we consider possible modifications to the Bayesian approach from a subjectivist viewpoint. Finally, we address the issue of pragmatism in implementing the subjectivist approach.
Default estimation for low-default portfolios
 Journal of Empirical Finance
, 2009
Abstract

Cited by 6 (3 self)
The problem in default probability estimation for low-default portfolios is that there is little relevant historical data information. No amount of data processing can fix this problem. More information is required. Incorporating expert opinion formally is an attractive option.
Efficient market making via convex optimization, and a connection to online learning
 ACM Transactions on Economics and Computation. To Appear
, 2012
Abstract

Cited by 5 (2 self)
We propose a general framework for the design of securities markets over combinatorial or infinite state or outcome spaces. The framework enables the design of computationally efficient markets tailored to an arbitrary, yet relatively small, space of securities with bounded payoff. We prove that any market satisfying a set of intuitive conditions must price securities via a convex cost function, which is constructed via conjugate duality. Rather than deal with an exponentially large or infinite outcome space directly, our framework only requires optimization over a convex hull. By reducing the problem of automated market making to convex optimization, where many efficient algorithms exist, we arrive at a range of new polynomial-time pricing mechanisms for various problems. We demonstrate the advantages of this framework with the design of some particular markets. We also show that by relaxing the convex hull we can gain computational tractability without compromising the market institution’s bounded budget. Although our framework was designed with the goal of deriving efficient automated market makers for markets with very large outcome spaces, this framework also provides new insights into the relationship between market design and machine learning, and into the complete market setting. Using our framework, we illustrate the mathematical parallels between cost function based markets and online learning and establish a correspondence between cost function based markets and market scoring rules for complete markets.
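The conjugate-duality construction can be illustrated in a toy two-outcome case: maximizing ⟨q, x⟩ − R(x) over the simplex, with R a negative-entropy regularizer, recovers the LMSR cost function in closed form. The share vector and parameter below are made up; the maximization is done by brute-force grid search only because the simplex is one-dimensional here.

```python
import numpy as np

b = 1.0                     # regularization / liquidity strength (assumption)
q = np.array([0.5, -0.2])   # hypothetical outstanding shares on two outcomes

def cost_via_duality(q):
    """C(q) = max over the simplex of <q, x> - R(x), computed by a fine grid
    search over x = (x, 1 - x), with R the negative-entropy regularizer."""
    x = np.linspace(1e-9, 1 - 1e-9, 200_001)
    neg_entropy = b * (x * np.log(x) + (1 - x) * np.log(1 - x))
    return float(np.max(q[0] * x + q[1] * (1 - x) - neg_entropy))

def cost_closed_form(q):
    """The conjugate of negative entropy on the simplex: the LMSR cost."""
    m = np.max(q)
    return float(m + b * np.log(np.sum(np.exp((q - m) / b))))

assert abs(cost_via_duality(q) - cost_closed_form(q)) < 1e-6
```

For large outcome spaces the grid search is replaced by a convex program over the convex hull of payoff vectors, which is the reduction the abstract describes.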
Collective revelation: A mechanism for self-verified, weighted, and truthful predictions
 In: Proc. 10th ACM Conf. on Electronic Commerce
, 2009
Abstract

Cited by 5 (1 self)
Decision makers can benefit from the subjective judgment of experts. For example, estimates of disease prevalence are quite valuable, yet can be difficult to measure objectively. Useful features of mechanisms for aggregating expert opinions include the ability to: (1) incentivize participants to be truthful; (2) adjust for the fact that some experts are better informed than others; and (3) circumvent the need for objective, “ground truth” observations. Subsets of these properties are attainable by previous elicitation methods, including proper scoring rules, prediction markets, and the Bayesian truth serum. Our mechanism of collective revelation, however, is the first to simultaneously achieve all three. Furthermore, we introduce a general technique for constructing budget-balanced mechanisms (where no net payments are made to participants) that applies both to collective revelation and to past peer-prediction methods.
Default Estimation and Expert Information
, 2008
Abstract

Cited by 2 (1 self)
Default is a rare event, even in segments in the mid-range of a bank’s portfolio. Inference about default rates is essential for risk management and for compliance with the requirements of Basel II. Most commercial loans are in the middle-risk categories and are to unrated companies. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using a prior distribution assessed from an industry expert. The binomial model, most common in applications, is extended to allow correlated defaults. A check of robustness is illustrated with an ε-mixture of priors.
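Under the independent-defaults binomial model with a conjugate expert prior, the Bayesian update and an ε-mixture robustness check of the kind mentioned here can be sketched as follows. The counts, the prior Beta(1, 199), and the contamination weight are all hypothetical, not taken from the paper.

```python
from math import lgamma, exp

# Hypothetical data: 2 defaults in 1000 obligor-years, with an expert prior
# Beta(1, 199) encoding a prior mean default rate of 0.5%.
a0, b0 = 1.0, 199.0
k, n = 2, 1000

# Conjugate beta-binomial update
posterior_mean = (a0 + k) / (a0 + b0 + n)

def log_betafn(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(a, b):
    # Beta-binomial marginal likelihood of the data (binomial coefficient
    # omitted: it cancels in the mixture weights below).
    return log_betafn(a + k, b + n - k) - log_betafn(a, b)

# Epsilon-mixture robustness check: contaminate the expert prior with a
# flat Beta(1, 1) prior and recompute the posterior mean.
eps = 0.1
w0 = (1 - eps) * exp(log_marginal(a0, b0))
w1 = eps * exp(log_marginal(1.0, 1.0))
w0, w1 = w0 / (w0 + w1), w1 / (w0 + w1)
mixture_mean = w0 * (a0 + k) / (a0 + b0 + n) + w1 * (1.0 + k) / (2.0 + n)
```

With these made-up numbers the data favor the expert prior, so the contaminated posterior mean barely moves, which is the kind of stability the robustness check is meant to reveal. The paper's extension to correlated defaults would replace the binomial likelihood.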
Designing informative securities
 In UAI
, 2012
Abstract

Cited by 1 (1 self)
We create a formal framework for the design of informative securities in prediction markets. These securities allow a market organizer to infer the likelihood of events of interest as well as if he knew all of the traders’ private signals. We consider the design of markets that are always informative, markets that are informative for a particular signal structure of the participants, and informative markets constructed from a restricted selection of securities. We find that to achieve informativeness, it can be necessary to allow participants to express information that may not be directly of interest to the market organizer, and that understanding the participants’ signal structure is important for designing informative prediction markets.
Elicitation of Multivariate Prior Distributions: A nonparametric Bayesian approach
Abstract

Cited by 1 (0 self)
In the context of Bayesian statistical analysis, elicitation is the process of formulating a prior density f(·) about one or more uncertain quantities to represent a person’s knowledge and beliefs. Several different methods of eliciting prior distributions for one unknown parameter have been proposed. However, there are relatively few methods for specifying a multivariate prior distribution, and most apply only to specific classes of problems and/or rest on restrictive conditions, such as independence of variables. Moreover, many of these procedures require the elicitation of variances and correlations, and sometimes of hyperparameters, which are difficult for experts to specify in practice. Garthwaite, Kadane and O’Hagan (2005) discuss the different methods proposed in the literature and the difficulties of eliciting multivariate prior distributions. We describe a flexible method of eliciting multivariate prior distributions applicable to a wide class of practical problems. Our approach does not assume a parametric form for the unknown prior density f(·); instead, we use nonparametric Bayesian inference, modelling f(·) by a Gaussian process prior distribution. The expert is then asked to specify certain summaries of his/her distribution, such as the mean, mode, marginal quantiles and a small number of joint probabilities. The analyst receives that information, treating it as a data set D with which to update his/her prior beliefs to obtain the posterior distribution for f(·). Theoretical properties of joint and marginal priors are derived and numerical illustrations to demonstrate our approach are given.
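The core mechanic, a Gaussian process prior on f(·) updated with expert-supplied summaries treated as data, can be sketched in one dimension. The sketch below simplifies by treating the expert's input as direct density evaluations at a few points (in the paper the data D are summaries such as the mode and marginal quantiles); all numbers and the kernel settings are made up.

```python
import numpy as np

# Hypothetical expert "data": density evaluations at a few design points
x_d = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_d = np.array([0.05, 0.25, 0.40, 0.25, 0.05])

def kernel(a, b, length=1.0):
    """Squared-exponential covariance between point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

noise = 1e-4  # small jitter standing in for elicitation imprecision
K = kernel(x_d, x_d) + noise * np.eye(len(x_d))
x_star = np.linspace(-3.0, 3.0, 121)

# GP posterior mean for f at new points, given the expert's evaluations
alpha = np.linalg.solve(K, y_d)
f_mean = kernel(x_star, x_d) @ alpha
```

The posterior mean interpolates the expert's values smoothly between design points, which is the sense in which the GP turns a handful of summaries into a full density estimate; the paper additionally enforces coherence of the summaries and derives the posterior's theoretical properties.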