Results 1–6 of 6
Assessing model mimicry using the parametric bootstrap
Journal of Mathematical Psychology, 2004
Abstract

Cited by 19 (3 self)
We present a general sampling procedure to quantify model mimicry, defined as the ability of a model to account for data generated by a competing model. This sampling procedure, called the parametric bootstrap cross-fitting method (PBCM; cf. Williams (J. R. Statist. Soc. B 32 (1970) 350; Biometrics 26 (1970) 23)), generates distributions of differences in goodness-of-fit expected under each of the competing models. In the data-informed version of the PBCM, the generating models have specific parameter values obtained by fitting the experimental data under consideration. The data-informed difference distributions can be compared to the observed difference in goodness-of-fit to allow a quantification of model adequacy. In the data-uninformed version of the PBCM, the generating models have a relatively broad range of parameter values based on prior knowledge. Application of both the data-informed and the data-uninformed PBCM is illustrated with several examples. © 2003 Elsevier Inc. All rights reserved.
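The data-informed PBCM described in this abstract can be sketched in a few lines. The two competing models below (exponential vs. half-normal) and all function names are illustrative choices of this sketch, not from the paper, and goodness-of-fit is assumed to be measured by negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy competing models for positive data. Each fit_* returns
# (MLE parameter, negative log-likelihood at the MLE).
def fit_exponential(x):
    scale = x.mean()
    nll = len(x) * np.log(scale) + x.sum() / scale
    return scale, nll

def fit_halfnormal(x):
    sigma = np.sqrt((x ** 2).mean())
    nll = len(x) * (np.log(sigma) + 0.5 * np.log(np.pi / 2)) \
        + (x ** 2).sum() / (2 * sigma ** 2)
    return sigma, nll

def pbcm_informed(data, n_boot=500):
    """Data-informed PBCM: fit both models to the data, generate bootstrap
    samples from each fitted model, cross-fit both models to every sample,
    and record the difference in goodness-of-fit under each generator."""
    scale, _ = fit_exponential(data)
    sigma, _ = fit_halfnormal(data)
    n = len(data)
    diff_under_A, diff_under_B = [], []
    for _ in range(n_boot):
        xA = rng.exponential(scale, n)          # generated by model A
        xB = np.abs(rng.normal(0.0, sigma, n))  # generated by model B
        diff_under_A.append(fit_exponential(xA)[1] - fit_halfnormal(xA)[1])
        diff_under_B.append(fit_exponential(xB)[1] - fit_halfnormal(xB)[1])
    return np.array(diff_under_A), np.array(diff_under_B)

data = rng.exponential(2.0, 200)
dA, dB = pbcm_informed(data)
observed = fit_exponential(data)[1] - fit_halfnormal(data)[1]
# Compare `observed` against the two difference distributions to
# quantify model adequacy, as the abstract describes.
```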
Discrepancy Risk Model Selection Test Theory For Comparing Possibly Misspecified Or Nonnested Models
Abstract

Cited by 5 (0 self)
An extension of Vuong's model selection theory, Discrepancy Risk Model Selection Test (DRMST) Theory, is developed for testing the null hypothesis that two given probability models fit some underlying data-generating process equally effectively with respect to a prespecified significance level. DRMST theory is applicable to a wide range of goodness-of-fit (i.e., discrepancy risk) functions and is applicable in situations where the models might be non-nested or misspecified. Moreover, DRMST theory is applicable to statistical environments where the observations are identically distributed but not necessarily independent. Key words: asymptotic statistical theory, model selection, hypothesis testing, model misspecification. Introduction. Let Ω be a set of probability distributions. Let the distribution generating the data be the distinguished "environmental distribution" p_e ∈ Ω. Define a "probability model" M_Θ (i.e., a "family of approximating distributions" ...
Knowledge Digraph Contribution Analysis of Protocol Data
1998
Abstract

Cited by 1 (1 self)
A knowledge digraph defines a set of semantic (or syntactic) associative relationships among propositions in a text (e.g., the conceptual graph structures of Graesser and Clark (1985) and the causal network analysis of Trabasso and van den Broek (1985)). This paper introduces the Knowledge Digraph Contribution (KDC) data analysis methodology for quantitatively measuring the degree to which a given knowledge digraph can account for the occurrence of specific sequences of propositions in recall, summarization, talk-aloud, and question-answering protocol data. KDC data analysis provides statistical tests for selecting the knowledge digraph which "best fits" a given data set. KDC data analysis also allows one to test hypotheses about the relative contributions of each member in a set of knowledge digraphs. The validity of specific knowledge digraph representational assumptions may be evaluated by comparing human protocol data with protocol data generated by sampling from the KDC distribution. Specifi...
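As a toy illustration of the general idea (not the KDC distribution itself, whose form this snippet does not specify), one might score a protocol sequence against a digraph by treating each transition as a mixture of "follow a digraph edge" and a uniform jump, then prefer the digraph with the higher log-likelihood. Everything below, including the mixture weight, is a hypothetical sketch:

```python
import math

def sequence_loglik(seq, edges, n_props, p_edge=0.8):
    """Log-likelihood of an observed proposition sequence under a digraph:
    each transition follows an outgoing edge with probability p_edge
    (uniform over successors), else jumps uniformly over all propositions."""
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        succ = [t for s, t in edges if s == a]
        p = (p_edge / len(succ) if succ and b in succ else 0.0) \
            + (1.0 - p_edge) / n_props
        ll += math.log(p)
    return ll

digraph_a = {(0, 1), (1, 2), (2, 3)}
digraph_b = {(0, 2), (2, 1), (1, 3)}
recall = [0, 1, 2, 3]  # an observed proposition sequence

# The digraph with the higher log-likelihood "best fits" the protocol data.
lla = sequence_loglik(recall, digraph_a, n_props=4)
llb = sequence_loglik(recall, digraph_b, n_props=4)
```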
Goodness-of-fit and confidence intervals of approximate models
Abstract

Cited by 1 (1 self)
To test whether the model fits the data well, a goodness-of-fit (GOF) test can be used. The chi-square GOF test is often used to test the null hypothesis that a function describes the mean of the data well. The null hypothesis with this test is rejected too often, however, because the nominal significance level (usually 0.05) is exceeded. Alternatively, the level of Hotelling's test is accurate if a fixed hypothesis for the mean is available. In many situations, however, only an estimate of the mean is available, and so the level of Hotelling's test may also be incorrect. An approximate version of Hotelling's test is suggested as a GOF test. It is shown that this requires only an adjustment of the degrees of freedom of Hotelling's original test. GOF tests assume that the model is either correct or incorrect, whereas in model specification it is often assumed that the model is an approximation. Consequently, for approximate models a GOF test will mostly indicate that the model does not fit. It is therefore suggested that a measure of approximation to the true model could be used to get an indication of how bad the approximate model is. It is also shown that correct confidence intervals can be obtained when using an approximate model. The results are applied to data from the daily news memory test.
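Hotelling's original test with a fixed mean hypothesis, which this abstract takes as its starting point before adjusting the degrees of freedom, can be sketched as follows; the data and function name are illustrative choices, not the paper's:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """Hotelling's T^2 test of H0: E[X] = mu0 for i.i.d. rows of X.
    Under H0 (and normality), (n-p)/(p(n-1)) * T^2 ~ F(p, n-p)."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)           # sample covariance (p x p)
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)
    f_stat = (n - p) / (p * (n - 1)) * t2
    pval = stats.f.sf(f_stat, p, n - p)
    return t2, pval

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(40, 3))    # 40 observations, 3 variables
t2, p = hotelling_t2(X, np.zeros(3))      # fixed hypothesis mu0 = 0
```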
The Wald Test and Cramér–Rao Bound for Misspecified Models in Electromagnetic Source Analysis
Abstract

Cited by 1 (1 self)
Abstract—By using signal processing techniques, an estimate of activity in the brain can be obtained from the electro- or magnetoencephalogram (EEG or MEG). For a proper analysis, a test is required to indicate whether the model for brain activity fits. A problem in using such tests is that often not all assumptions are satisfied, such as the assumption on the number of shells in an EEG model. In such a case, a test on the number of sources (model order) might still be of interest. A detailed analysis is presented of the Wald test for these cases. One of the advantages of the Wald test is that it can be used when not all assumptions are satisfied. Two different, previously suggested Wald tests in electromagnetic source analysis (EMSA) are examined: a test on source amplitudes and a test on the closeness of source pairs. The Wald test is analytically studied in terms of alternative hypotheses that are close to the null hypothesis (local alternatives). It is shown that the Wald test is asymptotically unbiased and has the correct level and power, which makes it appropriate for use in EMSA. An accurate estimate of the Cramér–Rao bound (CRB) is required for the use of the Wald test when not all assumptions are satisfied. The sandwich CRB is used for this purpose. It is defined for nonseparable least squares with the constraints required for the Wald test on amplitudes. Simulations with EEG show that when the sensor positions, the number of shells, or the conductivity parameter is incorrect, the CRB and Wald test still perform well with a moderate number of trials. Additionally, the CRB and Wald test appear robust against an incorrect assumption on the noise covariance. A combination of incorrect sensor positions and noise covariance affects the possibility of detecting a source with small amplitude.
Index Terms—Approximate model, constrained optimization, Fisher information with constraints, model checking, parameter covariance, separable least squares, source localization.
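A minimal illustration of a Wald test driven by a sandwich covariance estimate, in the spirit of this abstract but on a toy heteroscedastic linear model rather than EEG/MEG data (the model, data, and names are all assumptions of this sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy linear model with heteroscedastic noise: the usual i.i.d.-noise
# assumption is violated, which is where a sandwich covariance helps.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = X @ beta_true + rng.normal(size=n) * (1.0 + np.abs(X[:, 1]))

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

# Sandwich covariance: (X'X)^-1 [sum_i e_i^2 x_i x_i'] (X'X)^-1
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
cov_sandwich = bread @ meat @ bread

def wald_test(theta_hat, theta0, cov):
    """Wald test of H0: theta = theta0 with a supplied parameter
    covariance estimate; W ~ chi^2(dim theta) asymptotically under H0."""
    d = theta_hat - theta0
    w = d @ np.linalg.solve(cov, d)
    return w, stats.chi2.sf(w, df=len(d))

w, p = wald_test(beta_hat, beta_true, cov_sandwich)
```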
Statistical Tests for Comparing Possibly Misspecified and Nonnested Models
Journal of Mathematical Psychology, 2000
Abstract
A Model Selection Criterion (MSC) involves selecting the model with the best "estimated goodness-of-fit" to the data-generating process. Following the method of Vuong (1989, Econometrica, 57, 307–333), a large-sample Model Selection Statistical Test (MST) is introduced that can be used in conjunction with most existing MSC procedures to decide if the estimated goodness-of-fit for one model is significantly different from the estimated goodness-of-fit for another model. The MST extends the classical generalized likelihood ratio test, is valid in the presence of model misspecification, and is applicable to situations involving non-nested probability models. Simulation studies designed to illustrate the concept of the MST and its conservative decision rule (relative to the MSC method) are also presented. An important problem in model selection is concerned with identifying t...
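A Vuong-style large-sample test of equal fit can be sketched directly from per-observation log-likelihoods of two fitted models; the normal-vs-Laplace comparison below is an illustrative toy of this reviewer's choosing, not the paper's simulations:

```python
import numpy as np
from scipy import stats

def vuong_test(ll_a, ll_b):
    """Vuong-style test from per-observation log-likelihoods of two
    fitted, non-nested models; H0: both fit the data equally well.
    The statistic is asymptotically standard normal under H0."""
    d = ll_a - ll_b
    z = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
    return z, 2.0 * stats.norm.sf(abs(z))

# Toy comparison: normal vs. Laplace fits to normally distributed data.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 300)
ll_norm = stats.norm.logpdf(x, x.mean(), x.std())       # normal MLE fit
loc = np.median(x)
ll_lap = stats.laplace.logpdf(x, loc, np.abs(x - loc).mean())  # Laplace MLE fit
z, p = vuong_test(ll_norm, ll_lap)
# z > 0 favours the normal model, z < 0 the Laplace model.
```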