Results 1–10 of 13
Information markets vs. opinion pools: An empirical comparison
In Proceedings of the Sixth ACM Conference on Electronic Commerce (EC’05), 2005
Abstract

Cited by 22 (9 self)
In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people’s subjective probability judgments on 2003 US National Football League games and compare with the “market probabilities” given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.
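The two pooling families the abstract compares can be sketched for a single binary event. This is an illustrative sketch, not the paper's code: the function names, weights, and example probabilities are mine, and the logarithmic pool is implemented as a normalized weighted geometric mean, the standard form for binary events.

```python
import math

def linear_pool(probs, weights=None):
    """Linear opinion pool: weighted arithmetic mean of expert probabilities."""
    n = len(probs)
    weights = weights or [1.0 / n] * n
    return sum(w * p for w, p in zip(weights, probs))

def log_pool(probs, weights=None):
    """Logarithmic opinion pool for a binary event: pool p and 1-p as
    weighted geometric means, then renormalize so the result is a probability."""
    n = len(probs)
    weights = weights or [1.0 / n] * n
    yes = math.exp(sum(w * math.log(p) for w, p in zip(weights, probs)))
    no = math.exp(sum(w * math.log(1 - p) for w, p in zip(weights, probs)))
    return yes / (yes + no)

# Three hypothetical experts' probabilities that the home team wins:
experts = [0.6, 0.7, 0.8]
lin = linear_pool(experts)   # arithmetic average, 0.70
log_ = log_pool(experts)     # pushed further from 0.5, i.e. "bolder"
```

The comparison `log_ > lin` for these inputs illustrates the abstract's finding that logarithmic pools give bolder (more extreme) predictions than linear ones.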
Probabilities of judgments provided by unknown experts by using the imprecise Dirichlet model
Risk, Decision and Policy, 9(4):391–400, 2004
Abstract

Cited by 6 (5 self)
Most models for aggregating expert judgments assume that some information characterizing the experts is available. This information may be incorporated into hierarchical uncertainty models (second-order models). However, very often we know nothing about the experts, or their quality is difficult to evaluate. In that case, the beliefs assigned to the experts may span the whole interval [0,1], and the resulting assessments become non-informative. Moreover, attempts to assign weights or beliefs to the experts themselves have had little success, because expert behavior may differ across circumstances. Therefore, this paper proposes to evaluate the expert judgments rather than the experts, and studies how to assign interval probabilities to expert judgments by using the multinomial model.
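The interval probabilities the imprecise Dirichlet model (IDM) assigns can be sketched in a few lines: for n_j observations of category j out of N total, the IDM gives the interval [n_j/(N+s), (n_j+s)/(N+s)], where s is the prior-strength parameter (commonly s = 1 or s = 2). The function name and the example counts below are illustrative assumptions, not from the paper.

```python
def idm_interval(counts, j, s=2.0):
    """Interval probability of category j under the imprecise Dirichlet
    model: [n_j/(N+s), (n_j+s)/(N+s)].  With no observations (N = 0) it
    returns the vacuous interval [0, 1], matching total ignorance about
    the experts."""
    n_j, total = counts[j], sum(counts)
    return n_j / (total + s), (n_j + s) / (total + s)

# Hypothetical data: an expert's judgment was correct 7 times in 10 trials.
lo, hi = idm_interval([7, 3], j=0, s=2.0)  # (7/12, 9/12)
```

The width of the interval, s/(N+s), shrinks as more judgments are observed, which is how the model moves from ignorance toward a precise probability.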
Information Sciences and Technology
Abstract

Cited by 4 (0 self)
In almost all walks of life, predicting uncertain future events plays an essential role in decision-making processes. However, information related to future events frequently exists only as dispersed opinions, insights, and intuitions of individuals. Each individual knows only a little, but aggregating the dispersed information can contribute considerably to decision making. This is typical in many domains, including business, politics, and entertainment. How to aggregate such dispersed information for useful decision support is therefore a crucial task. Markets have shown great potential as one of the most effective mechanisms for gathering distributed information and generating accurate forecasts, often surpassing many existing methods in practice. This research studies information markets, markets that are specially designed for information aggregation and forecasting, from four different perspectives: theoretical examination, experimental evaluation, empirical analysis, and design.
RISK ANALYSIS ON THE BASIS OF JUDGMENTS SUPPLIED BY UNKNOWN EXPERTS
Abstract
The development of a system requires meeting the applicable standards of reliability and safety. Owing to the possible complexity of the system, its parameters are often determined by experts, whose judgements are usually imprecise and unreliable because of the limited precision of human assessments. Therefore, the paper proposes an approach for computing probabilities of expert judgments and for analysing the risk of the decision about whether the parameters satisfy the standards of reliability and safety. A numerical example considering a microprocessor system of central train control illustrates the proposed approach.
Utkin, L.V.: An uncertainty model of structural reliability with imprecise parameters of probability distributions
Abstract
An approach to compute bounds for the structural reliability under imprecise parameters of the stress and strength probability distributions is proposed. The approach is based on imprecise probability theory and takes into account different types of independence of the stress, the strength, and their parameters. It is shown that computation of the imprecise stress-strength model can be reduced to solving a number of linear programming problems. Special cases of exponentially distributed stress and strength are considered. Various numerical examples illustrate the approach and show the impact of the independence conditions on the imprecision of the results.
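For the exponential special case the abstract mentions, a minimal sketch is possible without the paper's general linear-programming machinery. Assuming independent exponential stress and strength with rates λ_stress and λ_strength, the reliability is R = P(strength > stress) = λ_stress/(λ_stress + λ_strength); since R is monotone in each rate, interval-valued rates bound R at the interval endpoints. All names and numbers below are illustrative.

```python
def exp_reliability(rate_stress, rate_strength):
    """P(strength > stress) for independent exponential stress and strength:
    R = lam_stress / (lam_stress + lam_strength)."""
    return rate_stress / (rate_stress + rate_strength)

def exp_reliability_bounds(rate_stress_iv, rate_strength_iv):
    """Bounds on R when each rate is only known to lie in an interval.
    R increases in the stress rate (a larger rate means a smaller mean
    stress) and decreases in the strength rate, so the extremes are
    attained at interval endpoints."""
    (s_lo, s_hi), (y_lo, y_hi) = rate_stress_iv, rate_strength_iv
    return exp_reliability(s_lo, y_hi), exp_reliability(s_hi, y_lo)

# Hypothetical interval-valued rates for stress and strength:
lo, hi = exp_reliability_bounds((1.0, 2.0), (0.1, 0.5))
```

This endpoint argument only works because of the independence assumption; the abstract's point is precisely that other dependence conditions change the bounds, which is what the linear-programming formulation handles.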
CAUTIOUS ANALYSIS OF PROJECT RISKS BY INTERVAL-VALUED INITIAL DATA, 2005
Abstract
One of the most common performance measures in the selection and management of projects is the Net Present Value (NPV). In the paper, we study the case when the initial data about the NPV parameters (cash flows and the discount rate) are represented as intervals supplied by experts. A method for computing the NPV based on random set theory is proposed, and three conditions of independence of the parameters are taken into account. Moreover, the imprecise Dirichlet model is considered for obtaining more cautious bounds for the NPV. Numerical examples illustrate the proposed approach for computing the NPV.
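A simple outer bound on the NPV from interval-valued inputs can be sketched as follows; this is not the paper's random-set construction, just the naive interval-arithmetic bound it cautiously refines. Each term c_t/(1+r)^t is minimized or maximized independently at interval endpoints, which yields valid (possibly over-wide, hence cautious) bounds. The function name and cash-flow figures are illustrative.

```python
def npv_bounds(cash_iv, rate_iv):
    """Outer bounds on NPV = sum_t c_t / (1+r)^t when each cash flow c_t
    and the discount rate r are only known to lie in intervals.
    Optimizing each term separately can only widen the bounds, so the
    result safely encloses the true NPV range."""
    r_lo, r_hi = rate_iv
    lo = hi = 0.0
    for t, (c_lo, c_hi) in enumerate(cash_iv):
        # A positive flow is worth least at the high discount rate;
        # a negative flow (an outlay) is most costly at the low rate.
        lo += c_lo / (1 + (r_hi if c_lo >= 0 else r_lo)) ** t
        hi += c_hi / (1 + (r_lo if c_hi >= 0 else r_hi)) ** t
    return lo, hi

# Hypothetical project: outlay of 100 now, three uncertain yearly inflows,
# discount rate known only to lie between 5% and 10%:
lo, hi = npv_bounds([(-100, -100), (40, 60), (40, 60), (40, 60)], (0.05, 0.10))
```

When the bounds straddle zero, as here, the interval data alone cannot decide whether the project is profitable, which is exactly the situation where the paper's more cautious (imprecise Dirichlet) bounds become relevant.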
Method for processing the unreliable expert judgments about parameters of probability ...
Abstract
"... method for processing the unreliable expert judgments about parameters of ..."
Second-order uncertainty calculations by using the imprecise Dirichlet model
Abstract
Natural extension is a powerful tool for combining expert judgments in the framework of imprecise probability theory. However, it assumes that every judgment is “true”, and this assumption leads to difficulties in many applications. Therefore, a second-order uncertainty model is considered in the paper, where the probabilities on the second-order level are obtained by using the imprecise Dirichlet model. The proposed approach is illustrated by an application and auxiliary examples.