Results 1–10 of 45
The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis
 Psychological Methods
, 1996
Abstract

Cited by 42 (1 self)
Monte Carlo computer simulations were used to investigate the performance of three χ² test statistics in confirmatory factor analysis (CFA). Normal theory maximum likelihood χ² (ML), Browne's asymptotic distribution-free χ² (ADF), and the Satorra-Bentler rescaled χ² (SB) were examined under varying conditions of sample size, model specification, and multivariate distribution. For properly specified models, ML and SB showed no evidence of bias under normal distributions across all sample sizes, whereas ADF was biased at all but the largest sample sizes. ML was increasingly inflated with increasing nonnormality, but both SB (at all sample sizes) and ADF (only at large sample sizes) showed no evidence of bias. For misspecified models, ML was again inflated with increasing nonnormality, but both SB and ADF were underestimated with increasing nonnormality. It appears that the power of the SB and ADF test statistics to detect a model misspecification is attenuated given nonnormally distributed data. Confirmatory factor analysis (CFA) has become an increasingly popular method of investigating the structure of data sets in psychology. In contrast to traditional exploratory factor analysis, which does not place strong a priori restrictions on the structure of the model being tested, CFA requires the investigator to specify both the number of factors
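The SB statistic corrects the ML χ² by dividing it by a scaling factor that reflects multivariate kurtosis. Estimating that factor requires the model's weight matrices, so the minimal sketch below takes the factor as a given input (all values are hypothetical):

```python
# Sketch of the Satorra-Bentler rescaling step: the ML chi-square is
# divided by a scaling correction factor c estimated from the data's
# multivariate kurtosis. Here c is assumed given, not estimated.

def satorra_bentler_rescale(t_ml, c):
    """Return the SB rescaled test statistic T_SB = T_ML / c."""
    if c <= 0:
        raise ValueError("scaling factor must be positive")
    return t_ml / c

# Under nonnormality c > 1, pulling the inflated ML statistic down:
t_sb = satorra_bentler_rescale(150.0, 1.25)  # -> 120.0
```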
Personality similarity in twins reared apart and together
 Journal of Personality and Social Psychology
, 1988
Abstract

Cited by 37 (1 self)
We administered the Multidimensional Personality Questionnaire (MPQ) to 217 monozygotic and 114 dizygotic reared-together adult twin pairs and 44 monozygotic and 27 dizygotic reared-apart adult twin pairs. A four-parameter biometric model (incorporating genetic, additive versus nonadditive, shared family-environment, and unshared environment components) and five reduced models were fitted through maximum-likelihood techniques to data obtained with the 11 primary MPQ scales and its 3 higher order scales. Solely environmental models did not fit any of the scales. Although the other reduced models, including the simple additive model, did fit many of the scales, only the full model provided a satisfactory fit for all scales. Heritabilities estimated by the full model ranged from .39 to .58. Consistent with previous reports, but contrary to widely held beliefs, the overall contribution of a common family-environment component was small and negligible for all but 2 of the 14 personality measures. Evidence of significant nonadditive genetic effects, possibly emergenic (epistatic) in nature, was obtained for 3 of the measures. Until recently, almost all knowledge regarding environmental and genetic causal influences on stable personality traits has come from studies of twins reared together. The findings have
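The paper fits its four-parameter biometric model by maximum likelihood; a far simpler back-of-envelope decomposition in the same spirit is Falconer's method, which uses only the MZ and DZ twin correlations. The sketch below is that illustrative substitute, not the model fitted in the paper, and the correlations are hypothetical:

```python
# Falconer-style variance decomposition from twin correlations
# (an illustrative simplification, not the four-parameter ML model
# described in the abstract).

def falconer_ace(r_mz, r_dz):
    """Estimate additive-genetic (a2), shared-environment (c2) and
    unshared-environment (e2) variance components."""
    a2 = 2.0 * (r_mz - r_dz)   # heritability
    c2 = 2.0 * r_dz - r_mz     # shared family environment
    e2 = 1.0 - r_mz            # unshared environment + error
    return a2, c2, e2

# r_MZ = 0.50, r_DZ = 0.25 implies h2 = 0.50 and no shared-environment
# effect, the qualitative pattern the abstract reports for most scales:
a2, c2, e2 = falconer_ace(0.50, 0.25)  # -> (0.5, 0.0, 0.5)
```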
Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance
 Psychological Bulletin
, 1989
Abstract

Cited by 32 (2 self)
Addresses issues related to partial measurement invariance using a tutorial approach based on the LISREL confirmatory factor analytic model. Specifically, we demonstrate procedures for (a) using "sensitivity analyses" to establish stable and substantively well-fitting baseline models, (b) determining partially invariant measurement parameters, and (c) testing for the invariance of factor covariance and mean structures, given partial measurement invariance. We also show, explicitly, the transformation of parameters from an all-X to an all-Y model specification, for purposes of testing mean structures. These procedures are illustrated with multidimensional self-concept data from low (n = 248) and high (n = 582) academically tracked high school adolescents. An important assumption in testing for mean differences is that the measurement (Drasgow & Kanfer, 1985; Labouvie,
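Invariance constraints of this kind are typically judged by a chi-square difference test between nested models. A minimal sketch of that comparison follows; to keep it dependency-free, the critical value is supplied by the caller rather than computed, and all fit values are hypothetical:

```python
# Likelihood-ratio (chi-square difference) test between two nested
# models, as used when testing invariance constraints.

def chi_square_difference(chi2_constrained, df_constrained,
                          chi2_free, df_free, critical_value):
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    reject = d_chi2 > critical_value  # constraints worsen fit significantly
    return d_chi2, d_df, reject

# Hypothetical fits: constraining loadings adds 5 df and 18.2 chi-square;
# the .05 critical value for 5 df is about 11.07, so full invariance
# is rejected and one would fall back to partial invariance:
result = chi_square_difference(130.2, 55, 112.0, 50, 11.07)
```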
Bayesian model selection in structural equation models
, 1993
Abstract

Cited by 29 (10 self)
A Bayesian approach to model selection for structural equation models is outlined. This enables us to compare individual models, nested or non-nested, and also to search through the (perhaps vast) set of possible models for the best ones. The approach selects several models rather than just one, when appropriate, and so enables us to take account, both informally and formally, of uncertainty about model structure when making inferences about quantities of interest. The approach tends to select simpler models than strategies based on multiple P-value-based tests. It may thus help to overcome the criticism of structural
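A common approximate-Bayes device for SEM comparison is a Raftery-style BIC computed from the model chi-square, BIC = χ² − df·ln(N), with lower values preferred. The sketch below uses that approximation (not necessarily the paper's exact criterion), with hypothetical model names and fit values:

```python
import math

# Raftery-style BIC approximation for comparing structural equation
# models: BIC = chi2 - df * ln(N); lower (more negative) is better.
# Model names and fit values are hypothetical.

def bic(chi2, df, n):
    return chi2 - df * math.log(n)

def select_model(fits, n):
    """fits: dict name -> (chi2, df). Return the name with lowest BIC."""
    return min(fits, key=lambda name: bic(*fits[name], n))

fits = {"one_factor": (210.0, 35), "two_factor": (48.0, 34)}
best = select_model(fits, n=500)  # -> "two_factor"
```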
The Theoretical Status of Latent Variables
 Psychological Review
, 2003
Abstract

Cited by 25 (3 self)
This article examines the theoretical status of latent variables as used in modern test theory models. First, it is argued that a consistent interpretation of such models requires a realist ontology for latent variables. Second, the relation between latent variables and their indicators is discussed. It is maintained that this relation can be interpreted as a causal one but that in measurement models for interindividual differences the relation does not apply to the level of the individual person. To substantiate intraindividual causal conclusions, one must explicitly represent individual level processes in the measurement model. Several research strategies that may be useful in this respect are discussed, and a typology of constructs is proposed on the basis of this analysis. The need to link individual processes to latent variable models for interindividual differences is emphasized. Consider the following sentence: “Einstein would not have been able to come up with his e = mc² had he not possessed such an extraordinary intelligence.” What does this sentence express? It relates observable behavior (Einstein’s writing e = mc²) to an unobservable attribute (his extraordinary intelligence), and it does so by assigning to the unobservable attribute a causal role in
Measuring the Flow Construct in Online Environments: a Structural Modeling Approach
 Marketing Science
, 1998
Abstract

Cited by 18 (2 self)
This is a working paper (elab.vanderbilt.edu). Though marketers have made great strides in understanding the Internet, they still understand little about what makes for a compelling consumer experience online. Recently, the flow construct has been proposed as important for understanding consumer behavior on the World Wide Web. Although widely studied over the past twenty years, quantitative modeling efforts of the flow construct have been neither systematic nor comprehensive. In large part, these efforts have been hampered by considerable confusion regarding the exact conceptual definition of flow. Lacking precise definition, it has been difficult to measure flow empirically, let alone apply the concept in practice. Following the conceptual model of flow proposed by Hoffman and Novak (1996), we conceptualize flow as a complex multidimensional construct characterized by directed relationships among a set of unidimensional constructs, most of which have previously been incorporated in various definitions of flow. In a quantitative modeling framework, we use data collected from a large-sample Web-based consumer survey to measure this set of constructs, and fit a series of structural equation models that test Hoffman and Novak’s (1996) theory. The conceptual model is largely supported and the improved fit offered by the revised model provides additional insights into the antecedents and consequences of flow. A key insight from the paper is that the degree to which the online experience is compelling can be defined and measured. As such, our flow model may be useful both theoretically and in practice as marketers strive to decipher the secrets of commercial success in interactive online environments.
Sample Size for Multiple Regression: Obtaining Regression Coefficients That Are Accurate, Not Simply Significant
Abstract

Cited by 12 (8 self)
An approach to sample size planning for multiple regression is presented that emphasizes accuracy in parameter estimation (AIPE). The AIPE approach yields precise estimates of population parameters by providing necessary sample sizes in order for the likely widths of confidence intervals to be sufficiently narrow. One AIPE method yields a sample size such that the expected width of the confidence interval around the standardized population regression coefficient is equal to the width specified. An enhanced formulation ensures, with some stipulated probability, that the width of the confidence interval will be no larger than the width specified. Issues involving standardized regression coefficients and random predictors are discussed, as are the philosophical differences between AIPE and the power analytic approaches to sample size planning. Sample size estimation from a power analytic perspective is often performed by mindful researchers in order to have a reasonable probability of obtaining parameter estimates that are statistically significant. In general, the social sciences have slowly become more aware of the problems associated with underpowered studies and their corresponding Type II errors, which can yield misleading results in a given
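The core AIPE computation is to find the smallest n whose expected confidence interval width meets a stated target. The sketch below uses a normal approximation (z = 1.96 instead of t) and a simplified standard-error formula for a standardized coefficient; both are simplifying assumptions, not the authors' exact method:

```python
import math

# AIPE-style sample size search: the smallest n whose expected CI
# width for a standardized regression coefficient is at most the
# target width. Rough sketch: z approximation, simplified SE formula.

def aipe_n(target_width, r2_model, r2_predictor, p, z=1.96):
    """r2_model: model R^2; r2_predictor: R^2 of the predictor of
    interest on the other p-1 predictors; p: number of predictors."""
    n = p + 2
    while True:
        se = math.sqrt((1 - r2_model) /
                       ((1 - r2_predictor) * (n - p - 1)))
        if 2 * z * se <= target_width:
            return n
        n += 1

# e.g. 5 predictors, modest multicollinearity, target full width 0.20:
n = aipe_n(0.20, r2_model=0.40, r2_predictor=0.20, p=5)
```

Note the contrast with power analysis: the target here is interval width, so the required n does not depend on whether the coefficient is expected to differ from zero.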
Latent variable interaction and quadratic effect estimation: A two-step technique using structural equation analysis (http://www.wright.edu/robert.ping/)
 Psychological Bulletin
, 1996
Abstract

Cited by 11 (0 self)
The author proposes an alternative estimation technique for latent variable interactions and quadratics. Available techniques for specifying these variables in structural equation models require adding variables or constraint equations that can produce specification tedium and errors or estimation difficulties. The proposed technique avoids these difficulties and may be useful for EQS, LISREL 7, and LISREL 8 users. First, measurement parameters for indicator loadings and errors of linear latent variables are estimated in a measurement model that excludes the interaction and quadratic variables. Next, these estimates are used to calculate values for the indicator loadings and error variances of the interaction and quadratic latent variables. Then, these calculated values are specified as constants in the structural model containing the interaction and quadratic variables. Interaction and quadratic effects are routinely reported for categorical independent variables (i.e., in analysis of variance) frequently to aid in the interpretation of significant main effects. However, interaction and quadratic effects are less frequently reported for continuous independent variables. Researchers have called for the inclusion of interaction and quadratic variables in models with continuous independent
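The second step of a two-step scheme like this reduces to closed-form arithmetic on the first-step estimates. The sketch below computes the fixed loading and error variance for a single product indicator x·z, assuming mean-centered, normally distributed indicators (a standard assumption for these formulas); all input values are hypothetical first-step estimates:

```python
# Second step of a Ping-style two-step estimation: given first-step
# measurement estimates for linear indicators x and z, compute the
# fixed loading and error variance of the product indicator x*z.
# Assumes mean-centered, normally distributed indicators.

def product_indicator_params(lam_x, lam_z, var_xi, var_zeta,
                             theta_x, theta_z):
    loading = lam_x * lam_z
    error_var = (lam_x**2 * var_xi * theta_z
                 + lam_z**2 * var_zeta * theta_x
                 + theta_x * theta_z)
    return loading, error_var

# Hypothetical first-step estimates (unit-variance latent variables):
loading, error_var = product_indicator_params(
    lam_x=0.9, lam_z=0.8, var_xi=1.0, var_zeta=1.0,
    theta_x=0.19, theta_z=0.36)
```

Both returned values would then be fixed as constants in the structural model, which is what lets the interaction term be estimated without extra constraint equations.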
The model-size effect on traditional and modified tests of covariance structures
 Structural Equation Modeling
, 2007
Abstract

Cited by 5 (4 self)
According to Kenny and McCoach (2003), chi-square tests of structural equation models produce inflated Type I error rates when the degrees of freedom increase. So far, the amount of this bias in large models has not been quantified. In a Monte Carlo study of confirmatory factor models with a range of 48 to 960 degrees of freedom, it was found that the traditional maximum likelihood ratio statistic, T_ML, overestimates nominal Type I error rates by up to 70% under conditions of multivariate normality. Some alternative statistics for the correction of model-size effects were also investigated: the scaled Satorra–Bentler statistic, T_SC; the adjusted Satorra–
Copula structure analysis based on robust and extreme dependence measures
, 2006
Abstract

Cited by 2 (2 self)
In this paper we extend the standard approach of correlation structure analysis in order to reduce the dimension of high-dimensional statistical data. The classical assumption of a linear model for the distribution of a random vector is replaced by the weaker assumption of a model for the copula. For elliptical copulae a 'correlation-like' structure remains, but different margins and nonexistence of moments are possible. Moreover, elliptical copulae also allow for a 'copula structure analysis' of dependence in extremes. After introducing the new concepts and deriving some theoretical results we observe in a simulation study the performance of the estimators: the theoretical asymptotic behavior of the statistics can be observed even for a sample of only 100 observations. Finally, we test our method on real financial data and explain differences between our copula-based approach and the classical approach. Our new method yields a considerable dimension reduction in nonlinear models as well.
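The 'correlation-like' structure for elliptical copulae rests on a robust dependence measure: the copula correlation parameter can be recovered from Kendall's tau via ρ = sin(πτ/2), without requiring any moments of the margins to exist. A minimal sketch of that mapping:

```python
import math

# Robust dependence estimation for elliptical copulas: the copula
# correlation parameter is recovered from Kendall's tau via
# rho = sin(pi * tau / 2), which is what makes a 'correlation-like'
# structure analysis possible even when moments do not exist.

def tau_to_rho(tau):
    if not -1.0 <= tau <= 1.0:
        raise ValueError("Kendall's tau must lie in [-1, 1]")
    return math.sin(math.pi * tau / 2.0)

rho = tau_to_rho(0.5)  # about 0.7071
```

In practice tau would first be estimated from ranked data, making the resulting correlation matrix robust to heavy tails and outliers.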