Results

**11 - 12** of **12**

### Statistical Inference For A Computational Model Of Cognition

1998


Abstract

this paper. Ultimately, we shall be concerned with measuring the evidence in favor of the hypothesis that changing a single parameter in the net suffices to explain the effect of amphetamines. We shall assess the evidence in favor of this hypothesis in two ways. First, we use Bayes factors (Jeffreys 1961, Kass and Raftery 1995). Our reasons for using Bayes factors are that (i) they provide a measure of evidence both for and against the null hypothesis (in contrast to p-values, which measure only evidence against the null); and (ii) the model does not satisfy the usual regularity conditions, so frequentist tests, which are generally based on asymptotic theory, will not be reliable. This is not to say that Bayes factors are without problems of their own; we discuss these points further in Section 3. Second, we examine the posterior distribution of a key parameter in the model that corresponds to the question of scientific interest. To evaluate the Bayes factor, we need to simulate from the posterior distribution of the parameters of the net. To do so, we use Markov chain Monte Carlo (MCMC). One complication is that the likelihood function cannot be evaluated explicitly but must itself be approximated by simulation, as in Diggle and Gratton (1984), for example. This introduces noise into the MCMC. In Section 4, we propose a method for correcting for this noise. In Section 5, we present the results of our methods as applied to our particular data set. We close with some final remarks in Section 6.

2. THE MODEL AND THE EXPERIMENT

2.1. The Eriksen Task. In a variation of the Eriksen task (Eriksen and Eriksen 1974), subjects identify a target letter (S or H) at the center of a five-letter stimulus array. In the compatible condition, the surrounding letters are identica...
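The computational wrinkle the abstract describes — a likelihood that cannot be evaluated explicitly and must be approximated by simulation, injecting noise into the MCMC — can be illustrated with a minimal sketch. This is not the authors' network model or their noise correction from Section 4; it is a generic Metropolis-Hastings chain on a toy Normal(θ, 1) model whose likelihood is estimated by kernel smoothing of simulated draws, in the spirit of Diggle and Gratton (1984). The flat prior, kernel bandwidth, and simulation count are all assumptions for the toy:

```python
import math
import random


def loglik_hat(theta, data, n_sim=400, rng=random):
    """Simulation-based estimate of the log-likelihood of Normal(theta, 1).

    We pretend the density is unavailable and estimate it by kernel
    smoothing of draws simulated from the model, so the estimate is noisy.
    """
    sims = [rng.gauss(theta, 1.0) for _ in range(n_sim)]
    h = 0.5  # kernel bandwidth (an assumption for this toy example)
    ll = 0.0
    for y in data:
        dens = sum(math.exp(-0.5 * ((y - s) / h) ** 2) for s in sims)
        dens /= n_sim * h * math.sqrt(2 * math.pi)
        ll += math.log(max(dens, 1e-300))
    return ll


def mh(data, n_iter=2000, step=0.3, seed=1):
    """Random-walk Metropolis-Hastings under a flat prior, where each
    acceptance ratio uses the *estimated* (noisy) log-likelihood."""
    rng = random.Random(seed)
    theta = 0.0
    ll = loglik_hat(theta, data, rng=rng)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = loglik_hat(prop, data, rng=rng)
        # Accept with probability min(1, exp(ll_prop - ll)); the retained
        # estimate ll is itself noisy, which is the complication the paper
        # addresses.
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples
```

Run on data drawn around 1.5, the chain's samples concentrate near the sample mean, but the simulation noise in `loglik_hat` makes acceptance decisions erratic — exactly the effect the paper's Section 4 sets out to correct.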

### Accounting for Model Uncertainty via Trans-dimensional Genetic Algorithms


Abstract

We develop trans-dimensional genetic algorithms for regression models to explore large model spaces. Our algorithms can be used in two ways. The first is to search for the best model according to a criterion such as AIC or BIC. The second is to explore the model space, searching for the most probable models and estimating their posterior probabilities. This is accomplished by embedding genetic operators in a reversible jump Markov chain Monte Carlo algorithm running several chains in the model space. As these chains run simultaneously and learn from each other via the genetic operators, our algorithm efficiently explores the large model space and easily escapes the regions of local maxima that are common in the presence of highly correlated regressors. We illustrate the power of our trans-dimensional genetic algorithms with applications to two real data sets.
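The first use case — searching for the best regression model by a criterion such as BIC — can be sketched with a plain genetic algorithm over variable-inclusion indicators. This is only a stdlib illustration of that search mode, not the paper's method: it omits the reversible jump MCMC component and posterior-probability estimation entirely, and the crossover/mutation rates and toy data are assumptions:

```python
import math
import random


def ols_rss(X_cols, y):
    """Residual sum of squares for least squares of y on the given columns
    (plus an intercept), via normal equations and Gaussian elimination."""
    n = len(y)
    cols = [[1.0] * n] + X_cols
    k = len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    c = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return sum((y[t] - sum(b[i] * cols[i][t] for i in range(k))) ** 2
               for t in range(n))


def bic(model, X, y):
    """BIC of the subset model given by the boolean inclusion vector."""
    n = len(y)
    rss = ols_rss([X[j] for j in range(len(X)) if model[j]], y)
    k = sum(model) + 1  # coefficients plus intercept
    return n * math.log(max(rss, 1e-12) / n) + k * math.log(n)


def ga_search(X, y, pop=20, gens=30, seed=0):
    """Genetic search over inclusion vectors: truncation selection,
    one-point crossover, and bit-flip mutation; returns the best model seen."""
    rng = random.Random(seed)
    p = len(X)
    popn = [[rng.random() < 0.5 for _ in range(p)] for _ in range(pop)]
    best = min(popn, key=lambda m: bic(m, X, y))[:]
    for _ in range(gens):
        scored = sorted(popn, key=lambda m: bic(m, X, y))
        if bic(scored[0], X, y) < bic(best, X, y):
            best = scored[0][:]
        parents = scored[: pop // 2]
        children = []
        while len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, p)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:          # mutation: flip one inclusion bit
                j = rng.randrange(p)
                child[j] = not child[j]
            children.append(child)
        popn = children
    return best
```

On simulated data where only two of five predictors matter, the search reliably includes those two; the paper's contribution is to run such operators across several chains within reversible jump MCMC, so that the search also yields posterior model probabilities rather than a single best model.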