Results 1 - 2 of 2
Model selection in electromagnetic source analysis with an application to VEF’s
 IEEE Transactions on Biomedical Engineering
, 2002
Cited by 7 (4 self)
Abstract — In electromagnetic source analysis it is necessary to determine how many sources are required to describe the EEG or MEG adequately. Model selection procedures (MSPs, or goodness-of-fit procedures) give an estimate of the required number of sources. Existing and new MSPs are evaluated in different source and noise settings: two sources that are close or distant, and noise that is uncorrelated or correlated. The commonly used MSP residual variance proves ineffective: it often selects too many sources. Alternatives such as the adjusted Hotelling's test, the Bayes information criterion, and the Wald test on source amplitudes are effective. The adjusted Hotelling's test is recommended if a conservative approach is taken; MSPs such as the Bayes information criterion or the Wald test on source amplitudes are recommended if a more liberal approach is desirable. The MSPs are applied to empirical data (visual evoked fields).
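The abstract recommends the Bayes information criterion (BIC) as one effective model selection procedure: fit models of increasing order, score each by goodness of fit penalized by parameter count, and keep the order with the best score. A minimal sketch of that idea, under simplifying assumptions not taken from the paper (Gaussian noise, a one-dimensional polynomial stand-in for the dipole-source models, and the common RSS-based BIC form):

```python
import numpy as np

def bic(rss, n, k):
    # Gaussian-noise BIC up to a constant: n*ln(RSS/n) + k*ln(n); lower is better.
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = np.linspace(-1.0, 1.0, n)
# Synthetic data whose true model order is 2 (a quadratic) plus noise.
y = 1.5 * x - 0.8 * x**2 + rng.normal(scale=0.1, size=n)

scores = {}
for order in range(1, 6):            # candidate model orders
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    scores[order] = bic(rss, n, order + 1)  # order+1 free parameters incl. intercept

best = min(scores, key=scores.get)   # BIC selects the true order, 2
```

The ln(n) penalty is what keeps BIC from behaving like residual variance, which the abstract criticizes: RSS alone always improves with extra parameters, so it "often selects too many sources", while BIC charges each added parameter enough to reject negligible improvements.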
2005b. Goodness-of-fit and confidence intervals of approximate models
 Journal of Mathematical Psychology
Cited by 2 (1 self)
If the model for the data is, strictly speaking, incorrect, then how can one test whether the model fits? Standard goodness-of-fit (GOF) tests rely on models being strictly correct or incorrect, but in practice the correct model cannot be assumed to be available. It would still be of interest to determine how good or how bad the approximation is. But how can this be achieved? And if a model is judged a good approximation, and hence a good explanation of the data, how can reliable confidence intervals be constructed? In this paper an attempt is made to answer these questions. Several GOF tests and methods of constructing confidence intervals are evaluated both in a simulation and with real data from the internet-based daily news memory test.
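The two ingredients the abstract pairs, a GOF test against a hypothesized model and a confidence interval that does not lean on that model being exactly right, can be sketched as follows. This is an illustrative stand-in, not the paper's procedure: the N(0,1) hypothesis, the slightly misspecified data, and the bootstrap settings are all assumptions chosen for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Data from a model that is *approximately* but not exactly the hypothesis:
# true mean is 0.1, while we test the fit of N(0, 1).
data = rng.normal(loc=0.1, scale=1.0, size=500)

# Classical GOF: one-sample Kolmogorov-Smirnov test against N(0, 1).
# A small p-value flags misfit, but says nothing about *how* bad the
# approximation is.
stat, p = stats.kstest(data, "norm")

# Percentile bootstrap CI for the mean: resample the data itself, so the
# interval does not assume the hypothesized model is correct.
boot_means = np.array(
    [rng.choice(data, size=data.size).mean() for _ in range(2000)]
)
lo, hi = np.quantile(boot_means, [0.025, 0.975])
```

The contrast mirrors the abstract's point: the KS test only delivers a binary correct/incorrect verdict about N(0,1), whereas the resampling interval quantifies the parameter under the model actually generating the data, even when that model is only approximate.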