Results 1–10 of 24
Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment
 Psychological Review
, 1983
Abstract

Cited by 239 (2 self)
Perhaps the simplest and the most basic qualitative law of probability is the conjunction rule: The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between-subjects and within-subjects comparisons. Alternative interpretations of the conjunction fallacy are discussed and attempts to combat it are explored. Uncertainty is an unavoidable aspect of the human condition. Many significant choices must be based on beliefs about the likelihood ...
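The conjunction rule stated in the abstract can be checked mechanically. The sketch below is illustrative only, with made-up numbers echoing the well-known "Linda" pattern of judging a conjunction more probable than one of its conjuncts:

```python
# Illustration of the conjunction rule: P(A and B) can never exceed
# P(A) or P(B), because the event "A and B" is a subset of each.
def violates_conjunction_rule(p_a: float, p_b: float, p_a_and_b: float) -> bool:
    """Return True if the three judgments break the conjunction rule."""
    return p_a_and_b > min(p_a, p_b)

# Fallacy: rating "A and B" (0.4) above "A" alone (0.2).
print(violates_conjunction_rule(p_a=0.2, p_b=0.9, p_a_and_b=0.4))  # True
# Coherent judgments pass the check.
print(violates_conjunction_rule(p_a=0.5, p_b=0.6, p_a_and_b=0.3))  # False
```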
Expert conciliation for multi modal person authentication systems by Bayesian statistics
, 1997
Abstract

Cited by 58 (14 self)
We present an algorithm functioning as a supervisor module in a multi-expert decision-making machine. It uses Bayes' theory to estimate the biases of individual expert opinions, which are then used to calibrate and conciliate the expert opinions into one opinion. We present a framework for simulating decision strategies using expert opinions whose properties are easily modifiable. Using real image and speech data from a person authentication system, we were able to confirm that the proposed supervisor improves the quality of individual expert decisions, reaching success rates of 99.5%.
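The abstract gives no implementation details, so the following is only a hypothetical sketch of the general idea: estimate each expert's bias from past cases, subtract it, and fuse the calibrated opinions by log-odds pooling under an independence assumption. The paper's actual supervisor may differ, and all numbers are invented:

```python
import math

# Hypothetical sketch (NOT the paper's exact algorithm): debias each expert's
# reported probability using past performance, then pool the calibrated
# opinions in log-odds space assuming the experts are independent.
def estimate_bias(reports, outcomes):
    """Mean signed error of an expert's probability reports on past cases."""
    return sum(r - o for r, o in zip(reports, outcomes)) / len(reports)

def fuse(calibrated, prior=0.5):
    """Naive-Bayes / log-odds fusion of independent calibrated opinions."""
    logit = lambda p: math.log(p / (1 - p))
    total = logit(prior) + sum(logit(p) - logit(prior) for p in calibrated)
    return 1 / (1 + math.exp(-total))

# Two experts; past reports vs. 0/1 outcomes give their bias estimates.
biases = [estimate_bias([0.7, 0.8, 0.6], [1, 1, 0]),
          estimate_bias([0.4, 0.5, 0.3], [1, 0, 1])]
opinions = [0.8, 0.6]
calibrated = [min(max(p - b, 0.01), 0.99) for p, b in zip(opinions, biases)]
print(round(fuse(calibrated), 3))
```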
Languages and Designs for Probability Judgment
, 1985
Abstract

Cited by 36 (5 self)
Theories of subjective probability are viewed as formal languages for analyzing evidence and expressing degrees of belief. This article focuses on two probability languages, the Bayesian language and the language of belief functions (Shafer, 1976). We describe and compare the semantics (i.e., the meaning of the scale) and the syntax (i.e., the formal calculus) of these languages. We also investigate some of the designs for probability judgment afforded by the two languages.
Combining probability distributions from dependent information sources
 Management Sci
, 1981
Abstract

Cited by 33 (1 self)
Aggregating disparate estimates of chance
, 2004
Abstract

Cited by 19 (4 self)
We consider a panel of experts asked to assign probabilities to events, both logically simple and complex. The events evaluated by different experts are based on overlapping sets of variables but may otherwise be distinct. The union of all the judgments will likely be probabilistically incoherent. We address the problem of revising the probability estimates of the panel so as to produce a coherent set that best represents the group's expertise.
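One minimal way to realize this kind of revision, sketched here with a toy two-variable example and a brute-force grid search rather than the authors' method, is to search over distributions on the atoms of the underlying Boolean algebra for the one whose implied event probabilities are nearest (in least squares) to the incoherent judgments:

```python
# Toy sketch of coherent revision (not the paper's algorithm): grid-search
# distributions on the four atoms of two Boolean variables A and B for the
# one whose implied event probabilities best fit the panel's judgments.
judgments = {"A": 0.3, "A&B": 0.5}          # incoherent: P(A&B) > P(A)

def implied(atoms):
    p_ab, p_anb, p_nab, p_nanb = atoms      # A&B, A&~B, ~A&B, ~A&~B
    return {"A": p_ab + p_anb, "A&B": p_ab}

def loss(atoms):
    ev = implied(atoms)
    return sum((ev[k] - v) ** 2 for k, v in judgments.items())

grid = [i * 0.02 for i in range(51)]
best = min(
    ((a, b, c, 1 - a - b - c)
     for a in grid for b in grid for c in grid
     if a + b + c <= 1 + 1e-9),
    key=loss,
)
revised = implied(best)
print({k: round(v, 2) for k, v in revised.items()})  # coherent, near judgments
```

With these judgments the nearest coherent revision sets P(A) and P(A&B) to roughly 0.4 each, restoring the conjunction rule with minimal total adjustment.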
A Theory Of Classifier Combination: The Neural Network Approach
, 1995
Abstract

Cited by 18 (0 self)
There is a trend in recent OCR development to improve system performance by combining recognition results of several complementary algorithms. This thesis examines the classifier combination problem under strict separation of the classifier and combinator design. Nothing other than the fact that every classifier has the same input and output specification is assumed about the training, design, or implementation of the classifiers. A general theory of combination should possess the following properties. It must be able to combine any type of classifier regardless of the level of information content in the outputs. In addition, a general combinator must be able to combine any mixture of classifier types and utilize all information available. Since classifier independence is difficult to achieve and to detect, it is essential for a combinator to handle correlated classifiers robustly. Although the performance of a robust (against correlation) combinator can be improved by adding classifiers indiscriminately, it is generally of interest to achieve comparable performance with the minimum number of classifiers. Therefore, the combinator should have the ability to eliminate redundant classifiers. Furthermore, it is desirable to have a complexity-control mechanism for the combinator. In the past, simplifications came from assumptions and constraints imposed by the system designers. In a general theory, there should be a mechanism to reduce solution complexity by exercising non-classifier-specific constraints. Finally, a combinator should capture classifier/image dependencies. Nearly all combination methods have ignored the fact that classifier performances (and outputs) depend on various image characteristics, and this dependency is manifested in classifier output patterns in relation to input imag...
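Measurement-level combination of the kind this framework generalizes can be as simple as the sum rule. The snippet below is a generic illustration, not the thesis's combinator: it averages per-class scores from three hypothetical classifiers that share the same output specification and picks the highest-scoring class:

```python
# Generic sum-rule fusion (illustrative only): average per-class scores
# across classifiers, then choose the class with the highest fused score.
def combine_sum_rule(score_vectors):
    """score_vectors: one per-class score list per classifier."""
    n = len(score_vectors)
    fused = [sum(scores) / n for scores in zip(*score_vectors)]
    return fused.index(max(fused)), fused

# Three hypothetical OCR classifiers scoring the classes ('0', '1', '2').
clf_outputs = [[0.6, 0.3, 0.1],
               [0.5, 0.4, 0.1],
               [0.2, 0.7, 0.1]]
label, fused = combine_sum_rule(clf_outputs)
print(label, [round(s, 2) for s in fused])
```

Note how the fused decision (class 1) disagrees with two of the three individual top choices, which is exactly why robustness to correlated classifiers matters.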
Default estimation for low-default portfolios
 Journal of Empirical Finance
, 2009
Abstract

Cited by 6 (3 self)
The problem in default probability estimation for low-default portfolios is that there is little relevant historical data. No amount of data processing can fix this problem; more information is required. Incorporating expert opinion formally is an attractive option.
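One standard way to formalize "incorporating expert opinion" in this setting, shown here only as an illustrative sketch with invented numbers rather than the paper's model, is to encode the opinion as a Beta prior on the default probability and update it with the scarce observed data:

```python
# Hedged sketch: Beta-Binomial updating of an expert prior on the default
# rate. prior_strength plays the role of pseudo-observations backing the
# expert's opinion; all figures below are made up for illustration.
def posterior_default_rate(prior_mean, prior_strength, defaults, n_obligors):
    """Posterior mean of the default probability under a Beta prior."""
    alpha = prior_mean * prior_strength + defaults
    beta = (1 - prior_mean) * prior_strength + (n_obligors - defaults)
    return alpha / (alpha + beta)

# Expert believes ~0.2% default rate, worth ~500 pseudo-observations;
# the portfolio shows 0 defaults among 300 obligors.
print(round(posterior_default_rate(0.002, 500, 0, 300), 5))
```

The point of the construction is visible in the example: zero observed defaults pull the estimate below the prior mean but cannot drive it to zero, because the expert's pseudo-observations remain in the posterior.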
Eliminating Incoherence from Subjective Estimates of Chance
 In: Proceedings of the 8th International Conference on the Principles of Knowledge Representation and Reasoning (KR
, 2002
Abstract

Cited by 6 (4 self)
Human judgment is an essential source of Bayesian probabilities but is plagued by incoherence when complex or conditional events are involved. We consider a method for adjusting estimates of chance over Boolean events so as to render them probabilistically coherent. The method works by searching for a sparse distribution that approximates a target set of judgments. (We show that sparse distributions suffice for this purpose.) The feasibility of our method was tested by randomly generating sets of coherent and incoherent estimates of chance over 30 to 50 variables.
Coherent probability from incoherent judgment
 Journal of Experimental Psychology: Applied
, 2001
Abstract

Cited by 5 (3 self)
People often have knowledge about the chances of events but are unable to express their knowledge in the form of coherent probabilities. This study proposed to correct incoherent judgment via an optimization procedure that seeks the (coherent) probability distribution nearest to a judge's estimates of chance. This method was applied to the chances of simple and complex meteorological events, as estimated by college undergraduates. No judge responded coherently, but the optimization method found close (coherent) approximations to their estimates. Moreover, the approximations were reliably more accurate than the original estimates, as measured by the quadratic scoring rule. Methods for correcting incoherence facilitate the analysis of expected utility and allow human judgment to be more easily exploited in the construction of expert systems. Suppose you think the probability that the Internet will expand next year is .90. Suppose you also think the probability that the Internet will expand and PC makers will be profitable is .91. Then you have assigned a greater chance to a conjunction than to one of its conjuncts; hence, your judgments are incoherent. You may, nonetheless, prove to be more insightful than someone with ...
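The .90/.91 example and the quadratic scoring rule can be made concrete in a few lines. The "nearest coherent pair" below is a hand-worked least-squares projection for this two-judgment case, not the study's full optimization procedure, and the outcomes are invented:

```python
# Toy illustration: (1) the .90/.91 judgments are incoherent, and (2) the
# quadratic (Brier) scoring rule measures accuracy against 0/1 outcomes.
def brier_score(probs, outcomes):
    """Mean squared error of probability judgments; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

p_expand, p_expand_and_profit = 0.90, 0.91
assert p_expand_and_profit > p_expand   # conjunction rated above its conjunct

# Least-squares projection onto the coherent region (conjunction <= conjunct)
# for two judgments: set both equal to their mean.
coherent = [(p_expand + p_expand_and_profit) / 2] * 2
outcomes = [1, 0]   # suppose the Internet expanded but PC makers lost money
print(brier_score([p_expand, p_expand_and_profit], outcomes),
      brier_score(coherent, outcomes))
```

Under these hypothetical outcomes the coherentized pair scores (slightly) better than the original judgments, mirroring the study's finding that corrected estimates were reliably more accurate.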
Aggregating probabilistic forecasts from incoherent and abstaining experts
 Decision Analysis
, 2008
"... doi 10.1287/deca.1080.0119 ..."