Results 1–10 of 266
Model selection and model averaging in phylogenetics: Advantages of the AIC and Bayesian approaches over likelihood ratio tests. Syst. Biol., 2004.
Cited by 180 (5 self)
Abstract.—Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001). [AIC; Bayes factors; BIC; likelihood ratio tests; model averaging; model uncertainty; model selection; multimodel inference.] It is clear that models of nucleotide substitution (henceforth models of evolution) play a significant role …
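The AIC-based model averaging this abstract describes rests on Akaike weights. A minimal sketch under invented inputs (the log-likelihoods, parameter counts, and the averaged parameter `theta_hat` are hypothetical, not values from the paper):

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """Compute AIC values, AIC differences, and Akaike weights
    for a set of candidate models."""
    ll = np.asarray(log_likelihoods, dtype=float)
    k = np.asarray(n_params, dtype=float)
    aic = -2.0 * ll + 2.0 * k          # AIC = -2 ln L + 2k
    delta = aic - aic.min()            # differences from the best model
    w = np.exp(-0.5 * delta)
    w /= w.sum()                       # normalized Akaike weights
    return aic, delta, w

# Three hypothetical substitution models: (max log-likelihood, #parameters)
aic, delta, w = akaike_weights([-2500.0, -2495.0, -2494.0], [1, 5, 9])

# Model-averaged estimate of some rate parameter, weighting each model's
# estimate by its Akaike weight (estimates invented for illustration):
theta_hat = np.array([0.30, 0.35, 0.36])
theta_avg = float(np.dot(w, theta_hat))
```

The weights sum to one and can be read as the relative support each model receives from the data, which is what allows parameter estimates to be averaged across the whole candidate set rather than conditioned on a single "best" model.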
Robust parameter estimation in computer vision. SIAM Review, 1999.
Cited by 131 (10 self)
Abstract. Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, when estimating the parameters of a single population, these techniques should effectively ignore measurements from other populations, treating them as outliers. Two frequently used techniques are least-median of …
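The least-median-of-squares estimator mentioned at the end of this abstract is commonly approximated by random sampling. A hedged sketch of that idea for a 1-D line fit (the data set and the simple two-point sampling scheme are invented for illustration; this is not the paper's own algorithm):

```python
import numpy as np

def least_median_of_squares_line(x, y, n_trials=500, seed=None):
    """Fit y = a*x + b by least median of squares: repeatedly fit a line
    to a random pair of points and keep the fit whose squared residuals
    have the smallest median (a common random-sampling approximation)."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.inf)
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                       # vertical pair: skip
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best[2]:
            best = (a, b, med)
    return best

# Inliers on y = 2x + 1 plus a few gross outliers; the median of squared
# residuals ignores the outliers as long as inliers form a majority.
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0
y[3], y[11], y[17] = 50.0, -40.0, 90.0     # gross measurement errors
a, b, med = least_median_of_squares_line(x, y, seed=0)
```

Because the median is unaffected by up to half the residuals, the recovered line is exact here despite three gross outliers, which is precisely the breakdown-point advantage over ordinary least squares.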
Geometric Motion Segmentation and Model Selection. Phil. Trans. Royal Society of London A, 1998.
Cited by 109 (2 self)
In this paper we place the three problems into a common statistical framework, investigating the use of information criteria and robust mixture models as a principled way to perform motion segmentation of images. The final result is a general, fully automatic clustering algorithm that works in the presence of noise and outliers.
Comparing Dynamic Causal Models. NeuroImage, 2004.
Cited by 86 (33 self)
This article describes the use of Bayes factors for comparing Dynamic Causal Models (DCMs). DCMs are used to make inferences about effective connectivity from functional Magnetic Resonance Imaging (fMRI) data. These inferences, however, are contingent upon assumptions about model structure, that is, the connectivity pattern between the regions included in the model. Given the current lack of detailed knowledge on anatomical connectivity in the human brain, there are often considerable degrees of freedom when defining the connectional structure of DCMs. In addition, many plausible scientific hypotheses may exist about which connections are changed by experimental manipulation, and a formal procedure for directly comparing these competing hypotheses is highly desirable. In this article, we show how Bayes factors can be used to guide choices about model structure, both with regard to the intrinsic connectivity pattern and the contextual modulation of individual connections. The combined use of Bayes factors and DCM thus allows one to evaluate competing scientific theories about the architecture of large-scale neural networks and the neuronal interactions that mediate perception and cognition.
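Comparing two models by Bayes factor reduces to a difference of log model evidences. A minimal sketch (the evidence values are hypothetical, not results from the paper; the "strong evidence" reading follows the conventional Kass–Raftery-style scale):

```python
import math

def log_bayes_factor(log_evidence_1, log_evidence_2):
    """Log Bayes factor: ln BF12 = ln p(y | m1) - ln p(y | m2)."""
    return log_evidence_1 - log_evidence_2

# Hypothetical log evidences for two candidate DCM connectivity structures
ln_bf = log_bayes_factor(-340.2, -343.9)
bf = math.exp(ln_bf)   # BF > 20 is conventionally read as strong evidence for m1
```

Working on the log scale avoids underflow, since model evidences for real fMRI data sets are astronomically small probabilities.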
Akaike’s information criterion and recent developments in information complexity. Journal of Mathematical Psychology.
Cited by 67 (5 self)
… criterion (AIC). Then, we present some recent developments on a new entropic or information complexity (ICOMP) criterion of Bozdogan (1988a, 1988b, 1990, 1994d, 1996, 1998a, 1998b) for model selection. A rationale for ICOMP as a model selection criterion is that it combines a badness-of-fit term (such as minus twice the maximum log likelihood) with a measure of model complexity, differently than AIC or its variants, by taking into account the interdependencies of the parameter estimates as well as the dependencies of the model residuals. We operationalize the general form of ICOMP based on the quantification of the concept of overall model complexity in terms of the estimated inverse-Fisher information matrix. This approach results in an approximation to the sum of two Kullback-Leibler distances. Using the correlational form of the complexity, we further provide yet another form of ICOMP to take into account the interdependencies (i.e., correlations) among the parameter estimates of the model. Later, we illustrate the practical utility and the importance of this new model selection criterion by providing several …
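As a rough sketch of the idea, assuming the commonly quoted form ICOMP(IFIM) = −2 ln L + 2·C1(F̂⁻¹), where C1 is Bozdogan's entropic complexity of the estimated covariance of the parameter estimates (exact constants and variants differ across his papers, so treat this as illustrative):

```python
import numpy as np

def c1_complexity(cov):
    """Entropic complexity of a positive-definite covariance matrix:
    C1(S) = (s/2) * ln(trace(S)/s) - (1/2) * ln det(S), with s = dim(S).
    C1 = 0 when the estimates are uncorrelated with equal variances."""
    s = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)   # assumes cov is positive definite
    return 0.5 * s * np.log(np.trace(cov) / s) - 0.5 * logdet

def icomp(log_lik, est_cov):
    """Hypothesized form: ICOMP = -2 ln L + 2 * C1(inverse-Fisher estimate)."""
    return -2.0 * log_lik + 2.0 * c1_complexity(np.asarray(est_cov, float))

# For uncorrelated, equal-variance estimates the penalty vanishes and
# ICOMP reduces to -2 ln L; correlated estimates increase the penalty.
flat = icomp(-100.0, np.eye(3))
corr = icomp(-100.0, np.array([[1.0, 0.8, 0.0],
                               [0.8, 1.0, 0.0],
                               [0.0, 0.0, 1.0]]))
```

This is the property the abstract emphasizes: unlike AIC's flat 2k penalty, the complexity term responds to how interdependent the parameter estimates are, not merely to how many there are.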
Random-effects analysis, 2004.
Cited by 64 (4 self)
… of the structural measures of flexibility and agility using a measurement-theoretical framework
Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research, 2003.
Cited by 59 (0 self)
For structural equation models, a huge variety of fit indices has been developed. These indices, however, can point to conflicting conclusions about the extent to which a model actually matches the observed data. The present article provides some guidelines that should help applied researchers to evaluate the adequacy of a given structural equation model. First, as goodness-of-fit measures depend on the method used for parameter estimation, maximum likelihood (ML) and weighted least squares (WLS) methods are introduced in the context of structural equation modeling. Then, the most common goodness-of-fit indices are discussed and some recommendations for practitioners are given. Finally, we generated an artificial data set according to a "true" model and analyzed two misspecified and two correctly specified models as examples of poor model fit, adequate fit, and good fit.
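Two of the descriptive indices this literature relies on can be computed directly from model and baseline chi-square statistics. A sketch using the conventional formulas (the chi-square values, degrees of freedom, and sample size below are hypothetical):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))); smaller is better."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index from the target model (m) and the
    baseline/independence model (b); closer to 1 is better."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Hypothetical fit results: target model vs. independence baseline, n = 300
r_val = rmsea(52.3, 24, 300)        # ≈ 0.063
c_val = cfi(52.3, 24, 480.0, 36)    # ≈ 0.936
```

Against commonly cited heuristics (RMSEA below roughly .06, CFI above roughly .95), these two indices would already point to slightly conflicting conclusions for the same model, which is exactly the problem the article's guidelines address.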
An assessment of information criteria for motion model selection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Puerto Rico, 1997.
Distributional assumptions of growth mixture models: Implications for overextraction of latent trajectory classes. Psychological Methods, 2003.
Cited by 53 (7 self)
Growth mixture models are often used to determine if subgroups exist within the population that follow qualitatively distinct developmental trajectories. However, statistical theory developed for finite normal mixture models suggests that latent trajectory classes can be estimated even in the absence of population heterogeneity if the distribution of the repeated measures is nonnormal. By drawing on this theory, this article demonstrates that multiple trajectory classes can be estimated and appear optimal for nonnormal data even when only 1 group exists in the population. Further, the within-class parameter estimates obtained from these models are largely uninterpretable. Significant predictive relationships may be obscured or spurious relationships identified. The implications of these results for applied research are highlighted, and future directions for quantitative developments are suggested. Over the last decade, random coefficient growth modeling has become a centerpiece of longitudinal data analysis. These models have been adopted enthusiastically by applied psychological researchers in part because they provide a more dynamic analysis of repeated measures data than do many traditional techniques. However, these methods are not ideally suited for testing theories that posit the existence of qualitatively different developmental pathways, that is, theories in which distinct developmental pathways are thought to hold within subpopulations. One widely cited theory of this type is Moffitt’s (1993) distinction between “life-course persistent” and “adolescent-limited” antisocial behavior trajectories. Moffitt’s theory is prototypical of other developmental taxonomies that have been proposed in such diverse areas as developmental psychopathology (Schulenberg, …
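The overextraction phenomenon this abstract describes is easy to reproduce in miniature: fit Gaussian mixtures with one and two components to a single skewed population and compare BICs. A self-contained sketch (plain 1-D EM written for illustration; the lognormal data are simulated here, not taken from the article):

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200, seed=0):
    """Plain EM for a k-component 1-D Gaussian mixture; returns its BIC.
    Minimal sketch for illustration, not a production fitter."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, size=k, replace=False)   # init means at data points
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances (with a variance floor)
        nk = r.sum(axis=0)
        pi, mu = nk / n, (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
            / np.sqrt(2 * np.pi * var))
    loglik = np.log(dens.sum(axis=1)).sum()
    n_params = 3 * k - 1                        # k means, k variances, k-1 weights
    return n_params * np.log(n) - 2 * loglik    # BIC (lower is better)

# One skewed (lognormal) population: no true subgroups exist, yet BIC
# prefers two Gaussian components over one -- apparent "latent classes".
rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.8, size=2000)
bic1, bic2 = fit_gmm_1d(x, 1), fit_gmm_1d(x, 2)
```

The two extracted "classes" here are artifacts of skewness, not real subpopulations, which mirrors the article's warning about reading substantive meaning into within-class estimates.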
Key Concepts in Model Selection: Performance and Generalizability. Journal of Mathematical Psychology, 2000.
Cited by 49 (13 self)
… methods of model selection, and how do they work? Which methods perform better than others, and in what circumstances? These questions rest on a number of key concepts in a relatively underdeveloped field. The aim of this essay is to explain some background concepts, highlight some of the results in this special issue, and to add my own. The standard methods of model selection include classical hypothesis testing, maximum likelihood, Bayes method, minimum description length, cross-validation and Akaike’s information criterion. They all provide an implementation of Occam’s razor, in which parsimony or simplicity is balanced against goodness-of-fit. These methods primarily take account of the sampling errors in parameter estimation, although their relative success at this task depends on the circumstances. However, the aim of model selection should also include the ability of a model to generalize to predictions in a different domain. Errors of extrapolation, or generalization, are different from errors of parameter estimation. So, it seems that simplicity and parsimony may be an additional factor in managing these errors, in which case the standard methods of model selection are incomplete implementations of Occam’s razor.

1. WHAT IS MODEL SELECTION? William of Ockham (1285–1347/49) will always be remembered for his famous postulation of Ockham’s razor (also spelled “Occam”), which states that entities are not to be multiplied beyond necessity. In a similar vein, Sir Isaac Newton’s first rule of hypothesizing instructs us that we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. While they …

(This paper is derived from a presentation at the Methods of Model Selection symposium at Indiana University.)
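The fit-versus-parsimony balance that all of these criteria implement can be seen in a small numerical example. A sketch comparing Gaussian AIC and BIC over nested fits (the residual sums of squares and sample size below are invented for illustration):

```python
import math

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Gaussian BIC up to an additive constant: n*ln(RSS/n) + k*ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical nested fits: each extra parameter lowers RSS, but with
# diminishing returns; n = 100 observations. Maps k -> RSS.
n = 100
fits = {1: 250.0, 2: 120.0, 3: 118.0, 4: 117.5}
aics = {k: aic(r, n, k) for k, r in fits.items()}
bics = {k: bic(r, n, k) for k, r in fits.items()}
best_aic = min(aics, key=aics.get)
best_bic = min(bics, key=bics.get)
# Both criteria select k = 2: the large RSS drop justifies the second
# parameter, while the marginal later drops do not repay their penalty.
```

The penalty terms (2k versus k·ln n) are exactly where Occam's razor enters: goodness-of-fit alone would always favor the largest model, so simplicity must be priced in explicitly.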