Results 1–8 of 8
The Posterior Probability of Bayes Nets with Strong Dependences. Soft Computing, 1999.
Cited by 14 (1 self)
Stochastic independence is an idealized relationship located at one end of a continuum of values measuring degrees of dependence. When modeling real-world systems, we are often not interested in the distinction between exact independence and any degree of dependence, but between weak, ignorable dependence and strong, substantial dependence. Good models map significant deviance from independence and neglect approximate independence or dependence weaker than a noise threshold. This intuition is applied to learning the structure of Bayes nets from data. We determine the conditional posterior probabilities of structures given that the degree of dependence at each of their nodes exceeds a critical noise level. Deviance from independence is measured by mutual information. Arc probabilities are determined by whether the amount of mutual information the neighbors contribute to a node is greater than a critical minimum deviance from independence. A χ² approximation for the probability density function of mutual info...
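The thresholding idea in this abstract can be sketched with a standard identity: under independence, the G-statistic 2N·I (twice the sample size times the empirical mutual information in nats) is approximately χ²-distributed with (r−1)(c−1) degrees of freedom. The contingency tables, function names, and the hard-coded df = 1 critical value below are illustrative assumptions, not the paper's actual procedure:

```python
import math

def mutual_information(table):
    """Empirical mutual information (in nats) of a 2-D contingency table."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    mi = 0.0
    for i, row in enumerate(table):
        for j, nij in enumerate(row):
            if nij > 0:
                mi += (nij / n) * math.log(nij * n / (row_tot[i] * col_tot[j]))
    return mi

def dependence_exceeds_noise(table, critical=3.841):
    """Keep an arc only when 2*N*MI exceeds the chi-squared critical
    value (3.841 for df = 1 at the 5% level)."""
    n = sum(sum(row) for row in table)
    return 2 * n * mutual_information(table) > critical

strong = [[40, 10], [10, 40]]   # clearly dependent counts
weak   = [[26, 24], [24, 26]]   # near-independent counts
print(dependence_exceeds_noise(strong))  # prints True
print(dependence_exceeds_noise(weak))    # prints False
```

In this toy setup the first table's dependence clears the noise threshold while the second table's small deviation from independence is neglected, mirroring the "weak ignorable vs. strong substantial" distinction.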
Bayesian Analysis of Random Event Generator Data, 1990.
Cited by 6 (1 self)
Data from experiments that use random event generators are usually analyzed by classical (frequentist) statistical tests, which summarize the statistical significance of the test statistic as a p-value. However, classical statistical tests are frequently inappropriate for these data, and the resulting p-values can grossly overestimate the significance of the result. Bayesian analysis shows that a small p-value may not provide credible evidence that an anomalous phenomenon exists. An easily applied alternative methodology is described and applied to an example from the literature. Introduction: In recent years a new type of experiment using a random event generator (REG) has become popular in parapsychological research (Schmidt, 1970; Jahn, Dunne, & Nelson, 1987). This methodology is a modern refinement of the VERITAC technology of Smith, Daglen, Hill, & Mott-Smith (1963), which itself embodies features of Tyrrell's (1936) experiments. The technique is based on an electronic device driv...
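The p-value vs. Bayes-factor contrast the abstract describes can be sketched with a binomial setup of the kind REG experiments produce. The hit counts are hypothetical, and the uniform-prior Bayes factor is a textbook Jeffreys–Lindley-style comparison, not the paper's actual methodology:

```python
import math

def log_binom(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def two_sided_p(n, k):
    """Two-sided binomial p-value against p = 1/2 (sum of both tails
    at least as extreme as the observed count)."""
    d = abs(k - n / 2)
    lo, hi = int(n / 2 - d), int(math.ceil(n / 2 + d))
    p = 0.0
    for j in range(n + 1):
        if j <= lo or j >= hi:
            p += math.exp(log_binom(n, j) + n * math.log(0.5))
    return min(p, 1.0)

def bf01_uniform(n, k):
    """Bayes factor for H0: p = 1/2 vs H1: p ~ Uniform(0, 1).
    The marginal likelihood under H1 is 1/(n+1), so
    BF01 = C(n, k) * 0.5**n * (n + 1)."""
    return math.exp(log_binom(n, k) + n * math.log(0.5)) * (n + 1)

n, k = 10000, 5100   # hypothetical run: 100 "hits" above chance
print(two_sided_p(n, k))   # below 0.05, "significant" classically
print(bf01_uniform(n, k))  # well above 1, i.e. evidence FAVORS chance
```

With these numbers the classical test rejects chance at the 5% level, yet the Bayes factor favors the chance hypothesis, illustrating how a small p-value can overstate the evidence for an anomalous effect.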
Cognitive Constructivism, Eigen–Solutions, and Sharp Statistical Hypotheses. Third Conference on the Foundations of Information Science (FIS 2005), 2005.
Cited by 4 (4 self)
Abstract: In this paper, epistemological, ontological, and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with Decision Theory and Falsificationism, the traditional epistemological settings for orthodox Bayesian and frequentist statistics.
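The FBST mentioned here measures evidence for a sharp hypothesis via the e-value: one minus the posterior mass of the "tangential set" of parameter points whose posterior density exceeds the density at the hypothesized value. A rough numerical sketch for a sharp binomial hypothesis follows; the uniform prior, grid integration, and data are illustrative assumptions, not the authors' implementation:

```python
import math

def beta_logpdf(theta, a, b):
    """Log density of the Beta(a, b) distribution."""
    return ((a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def fbst_ev(k, n, theta0=0.5, grid=20001):
    """e-value supporting the sharp hypothesis theta = theta0:
    1 minus the posterior mass where the posterior density exceeds
    its value at theta0 (Beta posterior from a uniform prior)."""
    a, b = k + 1, n - k + 1
    ref = beta_logpdf(theta0, a, b)
    h = 1.0 / (grid - 1)
    mass = 0.0
    for i in range(1, grid - 1):      # skip endpoints (density 0 there)
        lp = beta_logpdf(i * h, a, b)
        if lp > ref:
            mass += math.exp(lp) * h
    return 1.0 - mass

print(fbst_ev(50, 100))   # data agree with theta0 = 0.5: e-value near 1
print(fbst_ev(70, 100))   # data far from theta0: e-value near 0
```

Unlike a p-value, the e-value is a genuine posterior statement about the sharp hypothesis, which is what makes it attractive in the constructivist reading the paper develops.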
Methods and Criteria for Model Selection
Model selection is an important part of any statistical analysis, and indeed is central to the pursuit of science in general. Many authors have examined this question, from both frequentist and Bayesian perspectives, and many tools for selecting the “best model” have been suggested in the literature. This paper considers the various proposals from a Bayesian decision-theoretic perspective.
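One of the simplest tools in the family this paper surveys is BIC, which penalizes fit by model complexity and approximates the log marginal likelihood. A toy sketch, with deterministic hypothetical data rather than anything from the paper:

```python
import math

# Toy data: a clear linear trend plus a small bounded wiggle.
xs = [i / 10 for i in range(50)]
ys = [1.0 + 0.8 * x + 0.1 * math.sin(7 * x) for x in xs]
n = len(xs)

def rss_constant(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_linear(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def bic(rss, n, k):
    """Gaussian BIC: n*log(RSS/n) + k*log(n); lower is better."""
    return n * math.log(rss / n) + k * math.log(n)

bic_const = bic(rss_constant(ys), n, k=2)     # mean + noise variance
bic_lin = bic(rss_linear(xs, ys), n, k=3)     # intercept, slope, variance
print(bic_lin < bic_const)  # prints True: the trend model wins
```

The decision-theoretic perspective the paper takes goes further than a single score like this, but BIC illustrates the basic trade-off every such criterion encodes: fit against complexity.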
Constructive Probability
In a series of papers published in the 1960s, A. P. Dempster developed a generalization of the Bayesian theory of statistical inference. In A Mathematical Theory of Evidence, published in 1976, I advocated extending Dempster’s work to a general theory of probability judgement. The central idea of this new general theory is that we might decompose our evidence into intuitively independent components, make probability judgements based on each component, and then extend, adapt, and combine these judgements using formal rules. In this way we might be able to construct numerical degrees of belief based on total evidence that is too complicated or confusing to deal with holistically. The systems of numerical degrees of belief that the theory helps us construct are called belief functions. Belief functions have a certain structure, but they are not, in general, additive like Bayesian probability distributions: a belief function Bel may assign a proposition A and its negation ¬A degrees of belief Bel(A) and Bel(¬A) that add to less than one. The theory of belief functions should be sharply distinguished from the ideas on “upper and lower probabilities” that have been developed by I. J. Good [11], C. A. B. Smith [28], and, more recently, Peter Williams [30, 31]. It is true that the theory’s degrees of belief Bel(A) have some properties in common with these authors’ lower probabilities P∗(A). And it is also true that Dempster, in his writing, used the vocabulary of upper and lower probabilities. But the conceptual structure of the theory of belief functions is quite different from the structure underlying Good, Smith, and Williams’ work.
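The subadditivity the abstract describes, and the combination of independent evidence components by formal rules, can be illustrated with a toy belief-function calculation using Dempster's rule. The frame of discernment and mass assignments are invented for illustration:

```python
from itertools import product

# Frame of discernment and two mass functions over subsets (frozensets),
# each representing one independent component of evidence.
OMEGA = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, OMEGA: 0.4}
m2 = {frozenset({"a", "b"}): 0.7, OMEGA: 0.3}

def bel(m, A):
    """Belief: total mass committed to subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Plausibility: total mass of focal elements compatible with A."""
    return sum(v for B, v in m.items() if B & A)

def dempster(m1, m2):
    """Dempster's rule: intersect focal elements, discard conflicting
    mass, and renormalize."""
    raw, conflict = {}, 0.0
    for (A, u), (B, v) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + u * v
        else:
            conflict += u * v
    return {C: w / (1.0 - conflict) for C, w in raw.items()}

m12 = dempster(m1, m2)
A = frozenset({"a"})
notA = OMEGA - A
print(bel(m12, A) + bel(m12, notA))  # prints 0.6: adds to less than one
```

The combined Bel(A) and Bel(¬A) sum to less than one, exactly the non-additive behavior that separates belief functions from Bayesian probability distributions.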
Bayesian model selection approaches to MIDAS regression.
Summary. We describe Bayesian models for economic and financial time series that use regressors sampled at finer frequencies than the outcome of interest. The models are developed within the framework of dynamic linear models, which provide a great level of flexibility and direct interpretation of results. The problem of collinearity of intra-period observations is solved using model selection and model averaging approaches which, within a Bayesian framework, automatically adjust for multiple comparisons and allow us to accurately account for all uncertainty when predicting future observations. We also introduce novel formulations for the prior distribution on model space that allow us to include additional information in a flexible manner. We illustrate our approach by predicting the gross domestic product of the United States using the term structure of interest rates.
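The model-averaging idea can be sketched with BIC-based weights as a crude stand-in for the posterior model probabilities the paper computes; the candidate regressors, data, and model set below are hypothetical, and the paper's dynamic linear model machinery is not reproduced:

```python
import math

# Hypothetical outcome driven by the first of two candidate regressors.
x1 = [i / 12 for i in range(60)]
x2 = [math.cos(3 * i) for i in range(60)]
y = [0.5 + 1.2 * a + 0.05 * math.sin(5 * a) for a in x1]
n = len(y)

def rss_with(xs):
    """RSS of a simple regression of y on xs (intercept-only if None)."""
    if xs is None:
        m = sum(y) / n
        return sum((v - m) ** 2 for v in y)
    mx, my = sum(xs) / n, sum(y) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (v - my) for x, v in zip(xs, y)) / sxx
    a = my - b * mx
    return sum((v - a - b * x) ** 2 for x, v in zip(xs, y))

models = {"intercept": (None, 2), "x1": (x1, 3), "x2": (x2, 3)}
bics = {name: n * math.log(rss_with(xs) / n) + k * math.log(n)
        for name, (xs, k) in models.items()}

# BIC approximation to posterior model probabilities (equal priors):
# w_m is proportional to exp(-BIC_m / 2).
best = min(bics.values())
w = {name: math.exp(-(b - best) / 2) for name, b in bics.items()}
tot = sum(w.values())
weights = {name: v / tot for name, v in w.items()}
print(max(weights, key=weights.get))  # prints x1
```

An averaged forecast would weight each model's prediction by these probabilities, which is how the Bayesian framework spreads forecast uncertainty across models instead of betting everything on a single selection.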
Testing Conditional Asset Pricing Models Using a Markov Chain Monte Carlo Approach. www.finance.unisg.ch, May 2006.
We use Markov Chain Monte Carlo (MCMC) methods for the parameter estimation and the testing of conditional asset pricing models. In contrast to traditional approaches, the approach is truly conditional because the assumption that time variation in betas is driven by a set of conditioning variables is not necessary. Moreover, the approach has exact finite sample properties and accounts for errors-in-variables. Using S&P 500 panel data, we analyze the empirical performance of the CAPM and the Fama and French (1993) three-factor model. We find that time variation of betas in the CAPM and the time variation of the coefficients for the size factor (SMB) and the distress factor (HML) in the three-factor model improve the empirical performance. Therefore, our findings are consistent with time variation of firm-specific exposure to market risk, systematic credit risk, and systematic size effects. However, a Bayesian model comparison trading off goodness of fit and model complexity indicates that the conditional CAPM performs best, followed by the conditional three-factor model, the unconditional CAPM, and the unconditional three-factor model. JEL: G12
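The MCMC machinery behind such estimation can be illustrated with a minimal random-walk Metropolis sampler for a single regression slope, far simpler than the paper's conditional asset pricing setup; the data, known noise scale, and tuning constants are all invented for the sketch:

```python
import math
import random

random.seed(0)

# Toy data: y = beta * x + eps with beta = 2, noise scale sigma = 1,
# using a bounded deterministic wiggle in place of random noise.
xs = [(i - 25) / 5 for i in range(50)]
ys = [2.0 * x + math.sin(3 * x) for x in xs]

def log_post(beta, sigma=1.0):
    """Gaussian log-likelihood with a flat prior on beta."""
    return -0.5 * sum((y - beta * x) ** 2 for x, y in zip(xs, ys)) / sigma ** 2

# Random-walk Metropolis: propose a small step, accept with the
# Metropolis probability min(1, exp(lp_prop - lp)).
beta, lp = 0.0, log_post(0.0)
draws = []
for it in range(6000):
    prop = beta + random.gauss(0.0, 0.1)
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        beta, lp = prop, lp_prop
    if it >= 1000:                 # discard burn-in
        draws.append(beta)

est = sum(draws) / len(draws)
print(round(est, 2))  # posterior mean close to the true slope of 2
```

The same recipe, with a latent time-varying beta per period, is what lets MCMC estimate conditional betas without tying their dynamics to prespecified conditioning variables.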