Results 1–10 of 575
Bayesian Ignorance
, 2010
"... We quantify the effect of Bayesian ignorance by comparing the social cost obtained in a Bayesian game by agents with local views to the expected social cost of agents having global views. Both benevolent agents, whose goal is to minimize the social cost, and selfish agents, aiming at minimizing thei ..."
Abstract

Cited by 4 (2 self)
Learning Bayesian networks: The combination of knowledge and statistical data
 Machine Learning
, 1995
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simpl ..."
Abstract

Cited by 1158 (35 self)
Bayesian Model Selection in Social Research (with Discussion by Andrew Gelman & Donald B. Rubin, and Robert M. Hauser, and a Rejoinder)
 Sociological Methodology 1995, edited by Peter V. Marsden. Cambridge, Mass.: Blackwell.
, 1995
"... It is argued that Pvalues and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a singl ..."
Abstract

Cited by 585 (21 self)
 Add to MetaCart
single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection and accounting for model uncertainty is presented. Implementing this is straightforward using the simple and accurate BIC
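The BIC approximation this abstract refers to can be made concrete with a short sketch; this is a minimal illustration assuming the standard least-squares form BIC = n·log(RSS/n) + k·log(n), with all variable names and data invented for illustration, not taken from the paper:

```python
import numpy as np

def bic_linear(y, X):
    """BIC for an ordinary-least-squares fit: n*log(RSS/n) + k*log(n)
    (Gaussian likelihood with the variance profiled out; k = number
    of regression coefficients)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)        # x genuinely predicts y

X_null = np.ones((n, 1))                # intercept-only model
X_full = np.column_stack([np.ones(n), x])

# The model containing the true predictor has a far lower BIC: the
# drop in residual sum of squares dwarfs the log(n) complexity penalty.
print(bic_linear(y, X_full) < bic_linear(y, X_null))  # → True
```

Conversely, adding an irrelevant predictor typically raises BIC, because the small chance improvement in fit rarely outweighs the extra log(n) penalty term.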
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association
, 1997
"... We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem in ..."
Abstract

Cited by 325 (17 self)
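One common way to implement the model averaging this abstract describes is to weight each candidate model by exp(−BIC/2), an approximation to its posterior probability under equal prior model probabilities; the sketch below uses made-up BIC values and per-model estimates, not numbers from the paper:

```python
import math

# Hypothetical BIC values for three candidate regression models.
bics = {"m1": 210.4, "m2": 208.1, "m3": 215.9}

# Approximate posterior model probabilities under equal priors:
# p(M | data) ∝ exp(-BIC/2); subtract the best BIC for stability.
best = min(bics.values())
raw = {m: math.exp(-(b - best) / 2) for m, b in bics.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# A model-averaged estimate of a quantity of interest (here, a
# coefficient that is 0 in models excluding the variable) is the
# weighted sum of the per-model estimates.
estimates = {"m1": 0.0, "m2": 1.3, "m3": 1.1}
averaged = sum(weights[m] * estimates[m] for m in bics)
print(round(sum(weights.values()), 6))  # → 1.0
```

Averaging over models in this way propagates model uncertainty into the final estimate instead of conditioning on a single selected model.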
Discriminative probabilistic models for relational data
, 2002
"... In many supervised learning tasks, the entities to be labeled are related to each other in complex ways and their labels are not independent. For example, in hypertext classification, the labels of linked pages are highly correlated. A standard approach is to classify each entity independently, igno ..."
Abstract

Cited by 415 (12 self)
 Add to MetaCart
, ignoring the correlations between them. Recently, Probabilistic Relational Models, a relational version of Bayesian networks, were used to define a joint probabilistic model for a collection of related entities. In this paper, we present an alternative framework that builds on (conditional) Markov networks
Learning Bayesian Networks With Local Structure
, 1996
"... . We examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability distributions (CPDs) that quantify these networks. This inc ..."
Abstract

Cited by 272 (12 self)
Ignoring ignorance is ignorant
, 2003
"... Abstract. 1 When prior probabilities are given as data, there is generally little objection to the use of the Bayes formula or Bayesian networks. On the other hand, when prior probabilities are lacking, Bayesians have the tendency to ignore their ignorance and to make the priors up out of thin air. ..."
Abstract

Cited by 3 (2 self)
Representing partial ignorance
 IEEE Trans. on Systems, Man and Cybernetics
, 1996
"... Ignorance is precious, for once lost it can never be regained. This paper advocates the use of nonpurely probabilistic approaches to higherorder uncertainty. One of the major arguments of Bayesian probability proponents is that representing uncertainty is always decisiondriven and as a consequenc ..."
Abstract

Cited by 36 (11 self)
Bayesian Factor Regression Models in the "Large p, Small n" Paradigm
 Bayesian Statistics
, 2003
"... TOR REGRESSION MODELS 1.1 SVD Regression Begin with the linear model y = X# + # where y is the nvector of responses, X is the n p matrix of predictors, # is the pvector regression parameter, and # , # I) is the nvector error term. Of key interest are cases when p >> n, when X is & ..."
Abstract

Cited by 184 (16 self)
 Add to MetaCart
;loadings" matrix, subject to AA # = I and F # F = D where D is the diagonal matrix of k positive singular values, arranged in decreasing order. This reduced form assumes factors with zero singular values have been ignored without loss; k with equality only if all singular values are positive. Now
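The reduced SVD form quoted in this abstract can be reproduced in a few lines; the sketch below assumes one common convention, F = U·diag(s) for the factor matrix and A = Vt (k × p, rows orthonormal) for the loadings, which may differ from the paper's exact scaling of F and A:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 10, 50                       # "large p, small n": p >> n
X = rng.normal(size=(n, p))

# Reduced SVD: X = U diag(s) Vt with at most min(n, p) terms.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = int(np.sum(s > 1e-10))          # number of positive singular values

F = U[:, :k] * s[:k]                # n x k factor matrix
A = Vt[:k, :]                       # k x p loadings; rows orthonormal, so A A' = I

# Dropping zero singular values loses nothing: X is reproduced
# exactly, so regression on X can be recast as regression on F.
print(np.allclose(F @ A, X))            # → True
print(np.allclose(A @ A.T, np.eye(k)))  # → True
```

For a generic Gaussian X all min(n, p) singular values are positive, so here k = n = 10, matching the "equality only if all singular values are positive" condition in the abstract.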
When Ignorance is Bliss
 UAI 2004
, 2004
"... It is commonlyaccepted wisdom that more information is better, and that information should never be ignored. Here we argue, using both a Bayesian and a nonBayesian analysis, that in some situations you are better off ignoring information if your uncertainty is represented by a set of probability m ..."
Abstract

Cited by 13 (4 self)
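The "set of probability measures" representation this abstract mentions can be illustrated by computing lower and upper expectations over a small credal set; all states, gambles, and probabilities below are invented for illustration, not taken from the paper:

```python
# Uncertainty as a set of probability measures (a "credal set"):
# the lower expectation of a gamble is its worst-case expected
# value over the set, the upper expectation its best case.
states = ["s1", "s2", "s3"]
gamble = {"s1": 10.0, "s2": -5.0, "s3": 2.0}

credal_set = [
    {"s1": 0.5, "s2": 0.3, "s3": 0.2},
    {"s1": 0.2, "s2": 0.5, "s3": 0.3},
    {"s1": 0.3, "s2": 0.3, "s3": 0.4},
]

def expectation(p, f):
    return sum(p[s] * f[s] for s in states)

lower = min(expectation(p, gamble) for p in credal_set)
upper = max(expectation(p, gamble) for p in credal_set)
print(round(lower, 2), round(upper, 2))  # → 0.1 3.9
```

With a single probability measure the two values coincide; the gap between lower and upper expectation is what makes "ignoring information" a meaningful option in this setting.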