
## Structural equation modeling in practice: a review and recommended two-step approach. (1988)

Venue: Psychological Bulletin

Citations: 1823 (3 self)

### Citations

1482 |
Convergent and discriminant validation by the multitrait-multimethod matrix.
- Campbell, Fiske
- 1959
Citation Context ... mean that it is the preferred way to accomplish the model-building task. In this article, we contend that there is much to gain in theory testing and the assessment of construct validity from separate estimation (and respecification) of the measurement model prior to the simultaneous estimation of the measurement and structural submodels. The measurement model in conjunction with the structural model enables a comprehensive, confirmatory assessment of construct validity (Bentler, 1978). The measurement model provides a confirmatory assessment of convergent validity and discriminant validity (Campbell & Fiske, 1959). Given acceptable convergent and discriminant validities, the test of the structural model then constitutes a confirmatory assessment of nomological validity (Campbell, 1960; Cronbach & Meehl, 1955). The organization of the article is as follows: As background to the two-step approach, we begin with a section in which we discuss the distinction between exploratory and confirmatory analysis, the distinction between complementary modeling approaches for theory testing versus predictive application, and some developments in estimation methods. Following this, we present the confirmatory measurem... |

1141 |
Psychometric Theory (2nd ed.).
- Nunnally
- 1978
Citation Context ...verse pattern of large positive residuals will be observed with the indicators of this factor (representing underfitting). As another example, indicators that are multidimensional tend to have large normalized residuals (the result of either underfitting or overfitting) with indicators of more than one factor, which often represents the only large normalized residual for each of these other indicators. Useful adjuncts to the pattern of residuals are similarity (or proportionality) coefficients (Anderson & Gerbing, 1982; Hunter, 1973) and multiple-groups analysis (cf. Anderson & Gerbing, 1982; Nunnally, 1978), each of which can readily be computed with the ITAN program (Gerbing & Hunter, 1987). A similarity coefficient, u_ij, for any two indicators, x_i and x_j, can be defined for a set of q indicators as u_ij = Σ_k r_ik r_jk / [(Σ_k r_ik²)(Σ_k r_jk²)]^(1/2), (8) The value of this index ranges from -1.0 to +1.0, with values greater in magnitude indicating greater internal and external consistency for the two indicators. Thus, similarity coefficients are useful because they efficiently summarize the internal and external consistency of the indicators with one another. Alternate indicators of the same underlying factor, therefore, should have simi... |
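The similarity coefficient described in this excerpt can be sketched as the cosine between two indicators' correlation profiles, which is the proportionality form given by Hunter (1973). This is an illustrative implementation only; the function name and toy correlation matrix are ours, not the ITAN program's.

```python
import numpy as np

def similarity_coefficients(R):
    """Similarity (proportionality) coefficients u_ij for a q x q
    correlation matrix R: the cosine between the correlation
    profiles (rows) of each pair of indicators."""
    R = np.asarray(R, dtype=float)
    norms = np.sqrt((R ** 2).sum(axis=1))   # length of each row vector
    U = (R @ R.T) / np.outer(norms, norms)  # cosine of row-vector pairs
    return U

# Hypothetical example: x1 and x2 indicate one factor, x3 another.
R = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
U = similarity_coefficients(R)
```

As the excerpt notes, alternate indicators of the same factor should show similarity coefficients near 1.0 with each other (here u_12 exceeds u_13).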

755 |
Construct validity in psychological tests.
- Cronbach, Meehl
- 1955
Citation Context ...te estimation (and respecification) of the measurement model prior to the simultaneous estimation of the measurement and structural submodels. The measurement model in conjunction with the structural model enables a comprehensive, confirmatory assessment of construct validity (Bentler, 1978). The measurement model provides a confirmatory assessment of convergent validity and discriminant validity (Campbell & Fiske, 1959). Given acceptable convergent and discriminant validities, the test of the structural model then constitutes a confirmatory assessment of nomological validity (Campbell, 1960; Cronbach & Meehl, 1955). The organization of the article is as follows: As background to the two-step approach, we begin with a section in which we discuss the distinction between exploratory and confirmatory analysis, the distinction between complementary modeling approaches for theory testing versus predictive application, and some developments in estimation methods. Following this, we present the confirmatory measurement model; discuss the need for unidimensional measurement; and then consider the areas of specification, assessment of fit, and respecification in turn. In the next section, after briefly reviewing ... |

480 |
Elements of Econometrics,
- Kmenta
- 1986
Citation Context ...mainder of this article, a confirmatory two-step approach to theory testing and development using ML or GLS methods. Estimation Methods Since the inception of contemporary structural equation methodology in the middle 1960s (Bock & Bargmann, 1966; Joreskog, 1966, 1967), maximum likelihood has been the predominant estimation method. Under the assumption of a multivariate normal distribution of the observed variables, maximum likelihood estimators have the desirable asymptotic, or large-sample, properties of being unbiased, consistent, and efficient (Kmenta, 1971). Moreover, significance testing of the individual parameters is possible because estimates of the asymptotic standard errors of the parameter estimates can be obtained. Significance testing of overall model fit also is possible because the fit function is asymptotically distributed as chi-square, adjusted by a constant multiplier. Although maximum likelihood parameter estimates in at least moderately sized samples appear to be robust against a moderate violation of multivariate normality (Browne, 1984; Tanaka, 1984), the problem is that the asymptotic standard errors and overall chi-square tes... |

424 |
A leisurely look at the bootstrap, the jackknife, and cross-validation.
- Efron, Gong
- 1983
Citation Context ... useful in accurately predicting individuals' standings on the components. Some shortcomings of the PLS approach also need to be mentioned. Neither an assumption of nor an assessment of unidimensional measurement (discussed in the next section) is made under a PLS approach. Therefore, the theoretical meaning imputed to the latent variables can be problematic. Furthermore, because it is a limited-information estimation method, PLS parameter estimates are not as efficient as full-information estimates (Fornell & Bookstein, 1982; Joreskog & Wold, 1982), and jackknife or bootstrap procedures (cf. Efron & Gong, 1983) are required to obtain estimates of the standard errors of the parameter estimates (Dijkstra, 1983). And no overall test of model fit is available. Finally, PLS estimates will be asymptotically correct only under the joint conditions of consistency (sample size becomes large) and consistency at large (the number of indicators per latent variable becomes large; Joreskog & Wold, 1982). In practice, the correlations between the latent variables will tend to be underestimated, whereas the correlations of the observed measures with their respective latent variables will tend to be overestimated (D... |
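The bootstrap procedure mentioned in this excerpt (cf. Efron & Gong, 1983) can be sketched generically: resample cases with replacement and take the standard deviation of the resampled statistics as the standard-error estimate. The choice of statistic (a sample correlation) and all names below are purely illustrative, not the PLS-specific procedure of Dijkstra (1983).

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(x, y, stat, n_boot=2000):
    """Nonparametric bootstrap standard error of stat(x, y):
    resample cases with replacement, recompute the statistic,
    and return the SD of the bootstrap replicates."""
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample case indices
        reps[b] = stat(x[idx], y[idx])
    return reps.std(ddof=1)

# Example: standard error of a sample correlation (simulated data)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)
r = np.corrcoef(x, y)[0, 1]
se = bootstrap_se(x, y, lambda a, b: np.corrcoef(a, b)[0, 1])
```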

270 |
Factor analysis as a statistical method
- Lawley, Maxwell
- 1963
Citation Context ...nted variance components. Because of this assumption, the amount of variance explained in the set of observed measures is not of primary concern. Reflecting this, full-information methods provide parameter estimates that best explain the observed covariances. Two further relative strengths of full-information approaches are that they provide the most efficient parameter estimates (Joreskog & Wold, 1982) and an overall test of model fit. Because of the underlying assumption of random error and measure specificity, however, there is inherent indeterminacy in the estimation of factor scores (cf. Lawley & Maxwell, 1971; McDonald & Mulaik, 1979; Steiger, 1979). This is not a concern in theory testing, whereas in predictive applications this will likely result in some loss of predictive accuracy. For application and prediction, a PLS approach has relative strength. Under this approach, one can assume that all observed measure variance is useful variance to be explained. That is, under a principal-component model, no random error variance or measure-specific variance (i.e., unique variance) is assumed. Parameters are estimated so as to maximize the variance explained in either the set of observed measures (refl... |

239 |
Significance of tests and goodness-of-fit in the analysis of covariance structures.
- Bentler, Bonett
- 1980
Citation Context ...pondingly compromises the ability to make meaningful, causal inferences about the relations of the constructs to one another. As a final comparative advantage, separate assessments of the measurement model and the structural model preclude having good fit of one model compensate for (and potentially mask) poor fit of the other, which can occur with a one-step approach. Additional Considerations in Structural Model Interpretation Practical versus statistical significance. To this point, we have considered significance only from the perspective of formal, statistical tests. As has been noted by Bentler and Bonett (1980) and others (e.g., Joreskog, 1974), however, the value of the chi-square likelihood ratio statistic is directly dependent on sample size. Because of this, with large sample sizes, significant values can be obtained even though there are only trivial discrepancies between a model and the data. Similarly, with large sample sizes, a significant value for an SCDT may be obtained even when there is only a trivial difference between two nested structural models' explanations of the estimated construct covariances. Therefore, an indication of goodness of fit from a practical standpoint, such as that p... |
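The sample-size dependence noted in this excerpt follows directly from the form of the ML test statistic, (N − 1) times the minimized fit function: holding a small, fixed discrepancy constant, the same model is retained at N = 100 but rejected at N = 5000. The fit-function value and degrees of freedom below are hypothetical numbers chosen for illustration.

```python
from scipy.stats import chi2

# Chi-square statistic = (N - 1) * F_min, with F_min the minimized
# fit function. Hold a trivial discrepancy fixed and vary N:
F_min, df = 0.02, 10   # hypothetical discrepancy and model df
for N in (100, 500, 5000):
    stat = (N - 1) * F_min
    p = chi2.sf(stat, df)
    print(f"N={N:5d}  chi2={stat:6.1f}  p={p:.4f}")
```

The same trivial discrepancy thus yields a nonsignificant test at small N and a highly significant one at large N, which motivates practical fit indices alongside the statistical test.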

227 |
A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators.
- Muthen
- 1984
Citation Context ...riances are expressed as correlations, the asymptotic standard errors and overall chi-square goodness-of-fit tests are not correct without adjustments to the estimation procedure (Bentler & Lee, 1983). A companion program to LISREL 7, PRELIS (Joreskog & Sorbom, 1987), can provide such adjustments. A second problem is the use of product-moment correlations when the observed variables cannot be regarded as continuous (cf. Babakus, Ferguson, & Joreskog, 1987). PRELIS also can account for this potential shortcoming of current usage by calculating the correct polychoric and polyserial coefficients (Muthen, 1984) and then adjusting the estimation procedure accordingly. In summary, these new estimation methods represent important theoretical advances. The degree, however, to which estimation methods that do not assume multivariate normality will supplant normal theory estimation methods in practice has yet to be determined. Many data sets may be adequately characterized by the multivariate normal, much as the univariate normal often adequately describes univariate distributions of data. And, as Bentler (1983) noted, referring to the weight matrix U, "an estimated optimal weight matrix should be adjuste... |

202 |
Asymptotically distribution-free methods for the analysis of covariance structures.
- Browne
- 1984
Citation Context ...approach. Considerations in specification, assessment of fit, and respecification of measurement models using confirmatory factor analysis are reviewed. As background to the two-step approach, the distinction between exploratory and confirmatory analysis, the distinction between complementary approaches for theory testing versus predictive application, and some developments in estimation methods also are discussed. Substantive use of structural equation modeling has been growing in psychology and the social sciences. One reason for this is that these confirmatory methods (e.g., Bentler, 1983; Browne, 1984; Joreskog, 1978) provide researchers with a comprehensive means for assessing and modifying theoretical models. As such, they offer great potential for furthering theory development. Because of their relative sophistication, however, a number of problems and pitfalls in their application can hinder this potential from being realized. The purpose of this article is to provide some guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that provides a basis for making mea... |

185 |
Cross-validation of covariance structures.
- Cudeck, Browne
- 1983

184 |
Soft modeling: The basic design and some extensions. In
- Wold
- 1982
Citation Context ...le drawn from the population to which the results are to be generalized. This cross-validation would be accomplished by specifying the same model with freely estimated parameters or, in what represents the quintessential confirmatory analysis, the same model with the parameter estimates constrained to the previously estimated values. Complementary Approaches for Theory Testing Versus Predictive Application A fundamental distinction can be made between the use of structural equation modeling for theory testing and development versus predictive application (Fornell & Bookstein, 1982; Joreskog & Wold, 1982). This distinction and its implications concern a basic choice of estimation method and underlying model. For clarity, we can characterize this choice as one between a full-information (ML or GLS) estimation approach (e.g., Bentler, 1983; Joreskog, 1978) in conjunction with the common factor model (Harman, 1976) and a partial least squares (PLS) estimation approach (e.g., Wold, 1982) in conjunction with the principal-component model (Harman, 1976). For theory testing and development, the ML or GLS approach has several relative strengths. Under the common factor model, observed measures are ass... |

169 |
Representing and testing organizational theories: A holistic construal.
- Bagozzi, Phillips
- 1982
Citation Context ...efficient on its posited underlying construct factor is significant (greater than twice its standard error). Discriminant validity can be assessed for two estimated constructs by constraining the estimated correlation parameter (φ) between them to 1.0 and then performing a chi-square difference test on the values obtained for the constrained and unconstrained models (Joreskog, 1971). "A significantly lower χ² value for the model in which the trait correlations are not constrained to unity would indicate that the traits are not perfectly correlated and that discriminant validity is achieved" (Bagozzi & Phillips, 1982, p. 476). Although this is a necessary condition for demonstrating discriminant validity, the practical significance of this difference will depend on the research setting. This test should be performed for one pair of factors at a time, rather than as a simultaneous test of all pairs of interest.2 The reason for this is that a nonsignificant value for one pair of factors can be obfuscated by being tested with several pairs that have significant values. A complementary assessment of discriminant validity is to determine whether the confidence interval (±two standard errors) around the correla... |
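The chi-square difference test described in this excerpt can be sketched as follows. Only the arithmetic of the test is shown, not the estimation of the two nested models; the fit statistics passed in are hypothetical values for illustration.

```python
from scipy.stats import chi2

def chi_square_difference(chi2_constrained, df_constrained,
                          chi2_free, df_free):
    """Chi-square difference test for two nested models: a
    significantly lower chi-square for the model with the factor
    correlation freely estimated (vs. fixed at 1.0) supports
    discriminant validity."""
    d_stat = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    p = chi2.sf(d_stat, d_df)   # upper-tail probability
    return d_stat, d_df, p

# Hypothetical fit values: constraining phi to 1.0 costs one df.
d_stat, d_df, p = chi_square_difference(58.3, 9, 41.1, 8)
```

A small p here would indicate that freeing the correlation significantly improves fit, i.e., the two constructs are distinguishable; as the excerpt stresses, the test should be run one pair of factors at a time.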

164 |
LISREL VI: Analysis of linear structural relationships by maximum likelihood, instrumental variables, and least squares methods.
- Joreskog, Sorbom
- 1986
Citation Context ...n methods in practice, weighing the trade-offs between the reasonableness of an underlying normal theory assumption and the limitations of arbitrary theory methods (e.g., constraints on model size and the need for larger sample sizes, which we discuss later in the next section). Confirmatory Measurement Models A confirmatory factor analysis model, or confirmatory measurement model, specifies the posited relations of the observed variables to the underlying constructs, with the constructs allowed to intercorrelate freely. Using the LISREL program notation, this model can be given directly from Joreskog and Sorbom (1984, pp. 1.9-10) as x = Λξ + δ, (3) where x is a vector of q observed measures, ξ is a vector of n underlying factors such that n < q, Λ is a q × n matrix of pattern coefficients or factor loadings relating the observed measures to the underlying construct factors, and δ is a vector of q variables that represents random measurement error and measure specificity. It is assumed for this model that E(ξδ′) = 0. The variance-covariance matrix for x, defined as Σ, is Σ = ΛΦΛ′ + Θ_δ, (4) where Φ is the n × n covariance matrix of ξ and Θ_δ is the diagonal q × q covariance matrix of δ. Need for Unidimensional Measurement Ac... |
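The confirmatory measurement model x = Λξ + δ implies the covariance structure Σ = ΛΦΛ′ + Θ_δ, which can be computed numerically as a minimal sketch. The loadings, factor correlation, and unique variances below are hypothetical values chosen so the implied variances are 1.0.

```python
import numpy as np

# Implied covariance matrix Sigma = Lambda @ Phi @ Lambda.T + Theta_delta
# for a hypothetical model: q = 4 indicators, n = 2 correlated factors.
Lambda = np.array([[0.8, 0.0],   # x1 loads on factor 1
                   [0.7, 0.0],   # x2 loads on factor 1
                   [0.0, 0.9],   # x3 loads on factor 2
                   [0.0, 0.6]])  # x4 loads on factor 2
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])     # factor (co)variances
Theta = np.diag([0.36, 0.51, 0.19, 0.64])  # unique variances (diagonal)

Sigma = Lambda @ Phi @ Lambda.T + Theta
```

Within-factor covariances are products of loadings (e.g., 0.8 × 0.7 = 0.56), while between-factor covariances are attenuated by the factor correlation (e.g., 0.8 × 0.3 × 0.9 = 0.216).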

146 |
Methodology review: Assessing unidimensionality of tests and items.
- Hattie
- 1985
Citation Context ...matrix for x, defined as Σ, is Σ = ΛΦΛ′ + Θ_δ, (4) where Φ is the n × n covariance matrix of ξ and Θ_δ is the diagonal q × q covariance matrix of δ. Need for Unidimensional Measurement Achieving unidimensional measurement (cf. Anderson & Gerbing, 1982; Hunter & Gerbing, 1982) is a crucial undertaking in theory testing and development. A necessary condition for assigning meaning to estimated constructs is that the measures that are posited as alternate indicators of each construct must be acceptably unidimensional. That is, each set of alternate indicators has only one underlying trait or construct in common (Hattie, 1985; McDonald, 1981). Two criteria, each representing necessary conditions, are used in assessing unidimensionality: internal consistency and external consistency. The internal consistency criterion can be presented in the following fundamental equation (Hart & Spearman, 1913, p. 58; Spearman, 1914, p. 107): ρ_ac/ρ_ad = ρ_bc/ρ_bd, (5) where a, b, c, and d are measures of the same construct, ξ. This equality should hold to within sampling error (Spearman & Holzinger, 1924), and at least four measures of a construct are needed for an assessment. A related equation is the product rule for internal consis... |
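The internal consistency criterion in this excerpt can equivalently be checked as a vanishing tetrad, since ρ_ac/ρ_ad = ρ_bc/ρ_bd rearranges to ρ_ac ρ_bd − ρ_ad ρ_bc = 0. The sketch below verifies this for correlations generated by a single common factor; the loadings are hypothetical, and in practice the equality holds only to within sampling error.

```python
import numpy as np

def tetrad(R, a, b, c, d):
    """Internal-consistency check for four indicators a, b, c, d of
    one construct: under a single common factor the tetrad
    rho_ac * rho_bd - rho_ad * rho_bc should be approximately 0."""
    return R[a, c] * R[b, d] - R[a, d] * R[b, c]

# Correlations implied by one factor with loadings l_i (r_ij = l_i * l_j):
l = np.array([0.9, 0.8, 0.7, 0.6])
R = np.outer(l, l)
np.fill_diagonal(R, 1.0)

t = tetrad(R, 0, 1, 2, 3)   # ~0 for a unidimensional indicator set
```

A tetrad far from zero (relative to its sampling error) flags a violation of internal consistency for that set of indicators.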

114 |
Statistical analysis of sets of congeneric tests.
- Joreskog
- 1971
Citation Context ... a predicted covariance matrix for any specified model and set of parameter estimates. In building measurement models, multiple-indicator measurement models (Anderson & Gerbing, 1982; Hunter & Gerbing, 1982) are preferred because they allow the most unambiguous assignment of meaning to the estimated constructs. The reason for this is that with multiple-indicator measurement models, each estimated construct is defined by at least two measures, and each measure is intended as an estimate of only one construct. Unidimensional measures of this type have been referred to as congeneric measurements (Joreskog, 1971). By contrast, measurement models that contain correlated measurement errors or that have indicators that load on more than one estimated construct do not represent unidimensional construct measurement (Gerbing & Anderson, 1984). As a result, assignment of meaning to such estimated constructs can be problematic (cf. Bagozzi, 1983; Fornell, 1983; Gerbing & Anderson, 1984). Some dissent, however, exists about the application of the confirmatory factor analysis model for assessing unidimensionality. Cattell (1973, 1978) has argued that individual measures or items, like real-life behaviors, tend t... |

113 |
The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis.
- Anderson, Gerbing
- 1984
Citation Context ...er conservative alternative to consider is to set θ_δ for the single indicator at the smallest value found for the other, estimated error variances (θ_δ). Although this value is still arbitrary, it has the advantage of being based on information specific to the given research context. That is, this indicator shares a respondent sample and survey instrument with the other indicators. Sample size needed. Because full-information estimation methods depend on large-sample properties, a natural concern is the sample size needed to obtain meaningful parameter estimates. In a recent Monte Carlo study, Anderson and Gerbing (1984) and Gerbing and Anderson (1985) have investigated ML estimation for a number of sample sizes and a variety of confirmatory factor models in which the normal theory assumption was fully met. The results of this study were that although the bias in parameter estimates is of no practical significance for sample sizes as low as 50, for a given sample, the deviations of the parameter estimates from their respective population values can be quite large. Whereas this does not present a problem in statistical inference, because the standard errors computed by the LISREL program are adjusted according... |

107 |
The scientific use of factor analysis in behavioral and life sciences.
- Cattell
- 1978

93 |
Some methods for respecifying measurement models to obtain unidimensional construct measurement.
- Anderson, Gerbing
- 1982
Citation Context ...cation, however, a number of problems and pitfalls in their application can hinder this potential from being realized. The purpose of this article is to provide some guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that provides a basis for making meaningful inferences about theoretical constructs and their interrelations, as well as avoiding some specious inferences. The model-building task can be thought of as the analysis of two conceptually distinct models (Anderson & Gerbing, 1982; Joreskog & Sorbom, 1984). A confirmatory measurement, or factor analysis, model specifies the relations of the observed measures to their posited underlying constructs, with the constructs allowed to intercorrelate freely. A confirmatory structural model then specifies the causal relations of the constructs to one another, as posited by some theory. With full-information estimation methods, such as those provided in the EQS (Bentler, 1985) or LISREL (Joreskog & Sorbom, 1984) programs, the measurement and structural submodels can be estimated simultaneously. The ability to do this in a one-st... |

93 |
Specification searches in covariance structure modeling.
- MacCallum
- 1986
Citation Context ...sal paths are specified as zero and there is acceptable fit, one can advance qualified causal interpretations. The SCDT comparison of Mc − Mt provides further understanding of the explanatory ability afforded by the theoretical model of interest and, irrespective of the outcome of the Mt − Ms comparison, would be considered next. Bagozzi (1984) recently noted the need to consider rival hypotheses in theory construction and stressed that whenever possible, these rival explanations should be tested within the same study. Apart from this but again stressing the need to assess alternative models, MacCallum (1986) concluded from his research on specification searches that "investigators should not interpret a nonsignificant chi-square as a signal to stop a specification search" (p. 118). SCDTs are particularly well-suited for accomplishing these comparisons between alternative theoretical models. Consider first the upper branch of the decision tree in Figure 1, that is, the null hypothesis that Mt − Ms = 0 is not rejected. Given this, when both the Mc − Mt and the Mc − Ms comparisons also are not significant, Mc would be accepted because it is the most parsimonious structural model of the three hypothe... |

82 |
Modern factor analysis (3rd ed.).
- Harman
- 1976
Citation Context ...iously estimated values. Complementary Approaches for Theory Testing Versus Predictive Application A fundamental distinction can be made between the use of structural equation modeling for theory testing and development versus predictive application (Fornell & Bookstein, 1982; Joreskog & Wold, 1982). This distinction and its implications concern a basic choice of estimation method and underlying model. For clarity, we can characterize this choice as one between a full-information (ML or GLS) estimation approach (e.g., Bentler, 1983; Joreskog, 1978) in conjunction with the common factor model (Harman, 1976) and a partial least squares (PLS) estimation approach (e.g., Wold, 1982) in conjunction with the principal-component model (Harman, 1976). For theory testing and development, the ML or GLS approach has several relative strengths. Under the common factor model, observed measures are assumed to have random error variance and measure-specific variance components (referred to together as uniqueness in the factor analytic literature, e.g., Harman, 1976) that are not of theoretical interest. This unwanted part of the observed measures is excluded from the definition of the latent constructs and is ... |

72 |
Generalized least squares estimators in the analysis of covariance structures.
- Browne
- 1974
Citation Context ...on of Equation 2 are minimized through iterative algorithms (cf. Bentler, 1986b). The specific GLS method of estimation is specified by the value of U in Equation 2. Specifying U as I implies that minimizing F is the minimization of the sum of squared residuals, that is, ordinary, or "unweighted," least squares estimation. Alternately, when it is updated as a function of the most recent parameter estimates obtained at each iteration during the estimation process, U can be chosen so that minimizing Equation 2 is asymptotically equivalent to minimizing the likelihood fit function of Equation 1 (Browne, 1974; Lee & Jennrich, 1979). Other choices of U result in estimation procedures that do not assume multivariate normality. The most general procedure, provided by Browne (1984), yields asymptotically distribution-free (ADF) "best" generalized least squares estimates, with corresponding statistical tests that are "asymptotically insensitive to the distribution of the observations" (p. 62). These estimators are provided by the EQS program and the LISREL 7 program. The EQS program refers to these ADF GLS estimators as arbitrary distribution theory generalized least squares (AGLS; Bentler, 1985), wher... |
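The special case noted in this excerpt, that setting U = I reduces the GLS fit function to the sum of squared residuals, can be shown numerically. Equation 2 itself is not reproduced in the excerpt; the form below is the standard GLS discrepancy F = ½ tr{[(S − Σ)U]²}, given only for illustration, with hypothetical matrices.

```python
import numpy as np

def gls_fit(S, Sigma, U):
    """GLS discrepancy F = 0.5 * tr{[(S - Sigma) U]^2} (standard
    form, assumed here for illustration). With U = I this reduces
    to half the sum of squared residuals, i.e. unweighted least
    squares."""
    D = (S - Sigma) @ U
    return 0.5 * np.trace(D @ D)

# Hypothetical sample and model-implied matrices for two measures:
S = np.array([[1.00, 0.52],
              [0.52, 1.00]])
Sigma = np.array([[1.00, 0.56],
                  [0.56, 1.00]])

F_uls = gls_fit(S, Sigma, np.eye(2))   # = 0.5 * sum of squared residuals
```

Here the only residuals are the two off-diagonal entries of ±0.04, so F_uls = 0.5 × 2 × 0.04² = 0.0016.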

68 |
Structural analysis of covariance and correlation matrices.
- Joreskog
- 1978
Citation Context ...iderations in specification, assessment of fit, and respecification of measurement models using confirmatory factor analysis are reviewed. As background to the two-step approach, the distinction between exploratory and confirmatory analysis, the distinction between complementary approaches for theory testing versus predictive application, and some developments in estimation methods also are discussed. Substantive use of structural equation modeling has been growing in psychology and the social sciences. One reason for this is that these confirmatory methods (e.g., Bentler, 1983; Browne, 1984; Joreskog, 1978) provide researchers with a comprehensive means for assessing and modifying theoretical models. As such, they offer great potential for furthering theory development. Because of their relative sophistication, however, a number of problems and pitfalls in their application can hinder this potential from being realized. The purpose of this article is to provide some guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that provides a basis for making meaningful inference... |

64 |
On the multivariate asymptotic distribution of sequential chi-square statistics.
- Steiger, Shapiro, et al.
- 1985
Citation Context ...r standardized residuals (and the absolute values of the largest ones). One-Step Versus Two-Step Modeling Approaches The primary contention of this article is that much is to be gained from separate estimation and respecification of the measurement model prior to the simultaneous estimation of the measurement and structural submodels. In putting forth a specific two-step approach, we use the concepts of nested models, pseudo chi-square tests, and sequential chi-square difference tests (SCDTs) and draw on some recent work from quantitative psychology (Steiger, Shapiro, & Browne, 1985). These tests enable a separate assessment of the adequacy of the substantive model of interest, apart from that of the measurement model. We first present the structural model and discuss the concept of interpretational confounding (Burt, 1973, 1976). A confirmatory structural model that specifies the posited causal relations of the estimated constructs to one another can be given directly from Joreskog and Sorbom (1984, p. 1.5). This model can be expressed as η = Bη + Γξ + ζ, (9) where η is a vector of m endogenous constructs, ξ is a vector of n exogenous constructs, B is an m × m matrix of coefficients repr... |

61 |
Recommendations for APA test standards regarding construct, trait, and discriminant validity.
- Campbell
- 1960
Citation Context ...dity from separate estimation (and respecification) of the measurement model prior to the simultaneous estimation of the measurement and structural submodels. The measurement model in conjunction with the structural model enables a comprehensive, confirmatory assessment of construct validity (Bentler, 1978). The measurement model provides a confirmatory assessment of convergent validity and discriminant validity (Campbell & Fiske, 1959). Given acceptable convergent and discriminant validities, the test of the structural model then constitutes a confirmatory assessment of nomological validity (Campbell, 1960; Cronbach & Meehl, 1955). The organization of the article is as follows: As background to the two-step approach, we begin with a section in which we discuss the distinction between exploratory and confirmatory analysis, the distinction between complementary modeling approaches for theory testing versus predictive application, and some developments in estimation methods. Following this, we present the confirmatory measurement model; discuss the need for unidimensional measurement; and then consider the areas of specification, assessment of fit, and respecification in turn. In the next section,... |

61 |
The dimensionality of tests and items.
- McDonald
- 1981
Citation Context ...defined as Σ, is Σ = ΛΦΛ′ + Θ_δ, (4) where Φ is the n × n covariance matrix of ξ and Θ_δ is the diagonal q × q covariance matrix of δ. Need for Unidimensional Measurement Achieving unidimensional measurement (cf. Anderson & Gerbing, 1982; Hunter & Gerbing, 1982) is a crucial undertaking in theory testing and development. A necessary condition for assigning meaning to estimated constructs is that the measures that are posited as alternate indicators of each construct must be acceptably unidimensional. That is, each set of alternate indicators has only one underlying trait or construct in common (Hattie, 1985; McDonald, 1981). Two criteria, each representing necessary conditions, are used in assessing unidimensionality: internal consistency and external consistency. The internal consistency criterion can be presented in the following fundamental equation (Hart & Spearman, 1913, p. 58; Spearman, 1914, p. 107): ρ_ac/ρ_ad = ρ_bc/ρ_bd, (5) where a, b, c, and d are measures of the same construct, ξ. This equality should hold to within sampling error (Spearman & Holzinger, 1924), and at least four measures of a construct are needed for an assessment. A related equation is the product rule for internal consistency: ρ_ab = ρ_aξ ρ... |

58 |
The ML and PLS techniques for modeling with latent variables: Historical and comparative aspects.
- Joreskog, Wold
- 1982
Citation Context ...mple size becomes large) and consistency at large (the number of indicators per latent variable becomes large; Joreskog & Wold, 1982). In practice, the correlations between the latent variables will tend to be underestimated, whereas the correlations of the observed measures with their respective latent variables will tend to be overestimated (Dijkstra, 1983). These two approaches to structural equation modeling, then, can be thought of as a complementary choice that depends on the purpose of the research: ML or GLS for theory testing and development and PLS for application and prediction. As Joreskog and Wold (1982) concluded, "ML is theory-oriented, and emphasizes the transition from exploratory to confirmatory analysis. PLS is primarily intended for causal-predictive analysis in situations of high complexity but low theoretical information" (p. 270). Drawing on this distinction, we consider, in the remainder of this article, a confirmatory two-step approach to theory testing and development using ML or GLS methods. Estimation Methods Since the inception of contemporary structural equation methodology in the middle 1960s (Bock & Bargmann, 1966; STRUCTURAL EQUATION MODELING IN PRACTICE 413 Joreskog, 1966... |

50 |
Some cautions concerning the application of causal modeling methods.
- Cliff
- 1983
(Show Context)
Citation Context ...cher may decide to accept Mt over Mu on the basis of a practically insignificant Δtu, even though the SCDT of Mt - Mu indicates a statistically significant difference between the two models. That is, from a practical standpoint, the more parsimonious Mt provides adequate explanation. Finally, Δt0 would indicate the overall percentage of observed-measure covariation explained by the structural and measurement submodels. Considerations in drawing causal inferences. Causal inferences made from structural equation models must be consistent with established principles of scientific inference (cf. Cliff, 1983). First, models are never confirmed by data; rather, they gain support by failing to be disconfirmed. Although a given model has acceptable goodness of fit, other models that would have equal fit may exist, particularly when relatively few paths relating the constructs to one another have been specified as absent. Second, temporal order is not an infallible guide to causal relations. An example that Cliff noted is that although a father's occupation preceded his child's performance on an intelligence test and the two are correlated, this does not mean that the father's occupation "caused" the ... |

47 |
On the Meaning of Within-Factor Correlated Measurement Errors,
- Gerbing, Anderson
- 1984
(Show Context)
Citation Context ... not, thereby obfuscating the meaning of the estimated underlying constructs. The use of correlated measurement errors can be justified only when they are specified a priori. As an example, correlated measurement errors may be expected in longitudinal research when the same indicators are measured at multiple points in time. By contrast, correlated measurement errors should not be used as respecifications because they take advantage of chance, at a cost of only a single degree of freedom, with a consequent loss of interpretability and theoretical meaningfulness (Bagozzi, 1983; Fornell, 1983). Gerbing and Anderson (1984) demonstrated how the uncritical use of correlated measurement errors for respecification, although improving goodness of fit, can mask a true underlying structure. In our experience, the patterning of the residuals has been the most useful for locating the source of misspecification in multiple-indicator measurement models. The LISREL program provides normalized residuals (Joreskog & Sorbom, 1984, p. 1.42), whereas the EQS program (Bentler, 1985, pp. 92-93) provides standardized residuals. Although Bentler and Dijkstra (1985) recently pointed out that the normalized residuals may not be stric... |

46 |
Some comments on maximum likelihood and partial least squares methods.
- Dijkstra
- 1983
(Show Context)
Citation Context ...approach also need to be mentioned. Neither an assumption of nor an assessment of unidimensional measurement (discussed in the next section) is made under a PLS approach. Therefore, the theoretical meaning imputed to the latent variables can be problematic. Furthermore, because it is a limited-information estimation method, PLS parameter estimates are not as efficient as full-information estimates (Fornell & Bookstein, 1982; Joreskog & Wold, 1982), and jackknife or bootstrap procedures (cf. Efron & Gong, 1983) are required to obtain estimates of the standard errors of the parameter estimates (Dijkstra, 1983). And no overall test of model fit is available. Finally, PLS estimates will be asymptotically correct only under the joint conditions of consistency (sample size becomes large) and consistency at large (the number of indicators per latent variable becomes large; Joreskog & Wold, 1982). In practice, the correlations between the latent variables will tend to be underestimated, whereas the correlations of the observed measures with their respective latent variables will tend to be overestimated (Dijkstra, 1983). These two approaches to structural equation modeling, then, can be thought of as a c... |

45 |
Efficient estimation via linearization in structural models.
- Bentler, Dijkstra
- 1985
(Show Context)
Citation Context ...tion 2 simplifies to a more computationally tractable expression, such as in Equation 1. By contrast, in ADF estimation, one must employ the full U matrix. For example, when there are only 20 observed variables, U has 22,155 unique elements (Browne, 1984). Thus, the computational requirements of ADF estimation can quickly surpass the capability of present computers as the number of observed variables becomes moderately large. To address this problem of computational infeasibility when the number of variables is moderately large, both EQS and LISREL 7 use approximations of the full ADF method. Bentler and Dijkstra (1985) developed what they called linearized estimators, which involve a single iteration beginning from appropriate initial estimates, such as those provided by normal theory ML. This linearized (L) estimation procedure is referred to as LAGLS in EQS. The approximation approach implemented in LISREL 7 (Joreskog & Sorbom, 1987) uses an option for ignoring the off-diagonal elements in U, providing what are called diagonally weighted least squares (DWLS) estimates. Bentler (1985) also implemented in the EQS program an estimation approach that assumes a somewhat more general underlying distribution than... |
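The size claim for the ADF weight matrix checks out arithmetically: with p* = p(p + 1)/2 nonduplicated covariance elements, the symmetric p* × p* matrix U holds p*(p* + 1)/2 unique entries. A minimal sketch (the function name is ours):

```python
def unique_weight_elements(p):
    """Unique elements of the symmetric ADF weight matrix U for p variables."""
    p_star = p * (p + 1) // 2          # nonduplicated elements of S
    return p_star * (p_star + 1) // 2  # unique entries of p* x p* symmetric U

print(unique_weight_elements(20))  # 22155, matching the figure cited above
```

The quartic growth in p is what makes full ADF estimation infeasible for moderately large models.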

45 |
Analyzing psychological data by structural analysis of covariance matrices. In
- Joreskog
- 1974
(Show Context)
Citation Context ...ent the confirmatory measurement model; discuss the need for unidimensional measurement; and then consider the areas of specification, assessment of fit, and respecification in turn. In the next section, after briefly reviewing the confirmatory structural model, we present a two-step modeling approach and, in doing so, discuss the comparative advantages of this two-step approach over a one-step approach. Background Exploratory Versus Confirmatory Analyses Although it is convenient to distinguish between exploratory and confirmatory research, in practice this distinction is not as clear-cut. As Joreskog (1974) noted, "Many investigations are to some extent both exploratory and confirmatory, since they involve some variables of known and other variables of unknown composition" (p. 2). Rather than as a strict dichotomy, then, the distinction in practice between exploratory and confirmatory analysis can be thought of as that of an ordered progression. Factor analysis can be used to illustrate this progression. An exploratory factor analysis in which there is no prior specification of the number of factors is exclusively exploratory. Using a maximum likeli... |

42 |
4 general model for multivariate analysis.
- Finn
- 1974
(Show Context)
Citation Context ...a separate factor and then fixing lambda as an identity matrix, theta delta as a null matrix, and phi as a diagonal matrix with freely estimated variances. Using the obtained chi-square value for this overall null model (χ²0), in conjunction with the chi-square value (χ²m) from the measurement model, one can calculate the normed fit index value as (χ²0 - χ²m)/χ²0. When a number of chi-square difference tests are performed for assessments of discriminant validity, the significance level for each test should be adjusted to maintain the "true" overall significance level for the family of tests (cf. Finn, 1974). This adjustment can be given as α0 = 1 - (1 - α1)^t, where α0 is the overall significance level, typically set at .05; α1 is the significance level that should be used for each individual hypothesis test of discriminant validity; and t is the number of tests performed. ...of these causes is the likely one by examining the confidence interval constructed around the negative estimate. When positive values fall within this confidence interval and the size of the interval is comparable to that for proper estimates, the likely cause of the improper estimat... |
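The two quantities described in this excerpt, the normed fit index and the family-wise significance adjustment, can be sketched directly; solving α0 = 1 - (1 - α1)^t for α1 gives the per-test level. All numeric inputs below are hypothetical.

```python
def normed_fit_index(chi2_null, chi2_model):
    """Normed fit index: (chi2_0 - chi2_m) / chi2_0."""
    return (chi2_null - chi2_model) / chi2_null

def per_test_alpha(alpha0, t):
    """Per-test level alpha1 such that alpha0 = 1 - (1 - alpha1)**t."""
    return 1 - (1 - alpha0) ** (1 / t)

print(normed_fit_index(1000.0, 150.0))    # 0.85
print(round(per_test_alpha(0.05, 5), 4))  # 0.0102
```

For five discriminant-validity tests at an overall .05 level, each individual test is run at roughly the .01 level, close to the familiar Bonferroni value of .05/5.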

41 |
The likelihood ratio, Wald, and Lagrange multiplier tests: An expository note.
- Buse
- 1982
(Show Context)
Citation Context ...represents a given covariance matrix. However, their derivations were developed for a general discrepancy function, of which the fit function used in confirmatory analyses of covariance structures (cf. Browne, 1984; Joreskog, 1978) is a special case. Their results even extend to situations in which the null hypothesis need not be true. In such situations, the SCDTs will still be asymptotically independent but asymptotically distributed as noncentral chi-square variates. A recent development in the EQS program (Bentler, 1986a) is the provision of Wald tests and Lagrange multiplier tests (cf. Buse, 1982), each of which is asymptotically equivalent to chi-square difference tests. This allows a researcher, within a single computer run, to obtain overall goodness-of-fit information that is asymptotically equivalent to what would be obtained from separate SCDT comparisons of Mc and M0 with the specified model, Mt. [Figure 1. A decision-tree framework for the set of sequenti...] |

41 |
Personality and mood by questionnaire.
- Cattell
- 1973
(Show Context)
Citation Context ...dimensional measures of this type have been referred to as congeneric measurements (Joreskog, 1971). By contrast, measurement models that contain correlated measurement errors or that have indicators that load on more than one estimated construct do not represent unidimensional construct measurement (Gerbing & Anderson, 1984). As a result, assignment of meaning to such estimated constructs can be problematic (cf. Bagozzi, 1983; Fornell, 1983; Gerbing & Anderson, 1984). Some dissent, however, exists about the application of the confirmatory factor analysis model for assessing unidimensionality. Cattell (1973, 1978) has argued that individual measures or items, like real-life behaviors, tend to be factorially complex. "In other words, to show that a given matrix is rank one is not to prove that the items are measuring a pure unitary trait factor in common: it may be a mixture of unitary traits" (Cattell, 1973, p. 382). According to Cattell (1973), although these items are unidimensional with respect to each other, they simply may represent a "bloated specific" in the context of the true (source trait) factor space. That is, the items represent a "psychological concept of something that is behavior... |

40 |
Some contributions to efficient statistics in structural models: Specification and estimation of moment structures”,
- Bentler
- 1983
(Show Context)
Citation Context ...ver a one-step approach. Considerations in specification, assessment of fit, and respecification of measurement models using confirmatory factor analysis are reviewed. As background to the two-step approach, the distinction between exploratory and confirmatory analysis, the distinction between complementary approaches for theory testing versus predictive application, and some developments in estimation methods also are discussed. Substantive use of structural equation modeling has been growing in psychology and the social sciences. One reason for this is that these confirmatory methods (e.g., Bentler, 1983; Browne, 1984; Joreskog, 1978) provide researchers with a comprehensive means for assessing and modifying theoretical models. As such, they offer great potential for furthering theory development. Because of their relative sophistication, however, a number of problems and pitfalls in their application can hinder this potential from being realized. The purpose of this article is to provide some guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that provides a basis ... |

38 |
A prospectus for theory construction in marketing.
- Bagozzi
- 1984
(Show Context)
Citation Context ...s of freedom would exist for the SCDT; the theoretical "causal" model is indistinguishable from a confirmatory measurement model, and any causal interpretation should be carefully avoided. To the extent, however, that a "considerable" proportion of possible direct causal paths are specified as zero and there is acceptable fit, one can advance qualified causal interpretations. The SCDT comparison of Mc - Mt provides further understanding of the explanatory ability afforded by the theoretical model of interest and, irrespective of the outcome of the Mt - Ms comparison, would be considered next. Bagozzi (1984) recently noted the need to consider rival hypotheses in theory construction and stressed that whenever possible, these rival explanations should be tested within the same study. Apart from this but again stressing the need to assess alternative models, MacCallum (1986) concluded from his research on specification searches that "investigators should not interpret a nonsignificant chi-square as a signal to stop a specification search" (p. 118). SCDTs are particularly well-suited for accomplishing these comparisons between alternative theoretical models. Consider first the upper branch of the de... |

38 |
The estimation of factor loadings by the method of maximum likelihood
- Lawley
(Show Context)
Citation Context ... of the null hypothesis for overall model fit more often than would be expected. Conversely, when the underlying distribution is platykurtic (flat), the opposite result would be expected to occur (Browne, 1984). To address these potential problems, recent developments in estimation procedures, particularly by Bentler (1983) and Browne (1982, 1984), have focused on relaxing the assumption of multivariate normality. In addition to providing more general estimation methods, these developments have led to a more unified approach to estimation. The traditional maximum likelihood fit function (Lawley, 1940), based on the likelihood ratio, is F(θ) = ln|Σ(θ)| - ln|S| + tr[SΣ(θ)⁻¹] - p (1) for p observed variables, with a p × p sample covariance matrix S and a p × p predicted covariance matrix Σ(θ), where θ is the vector of specified model parameters to be estimated. The specific maximum likelihood fit function in Equation 1 can be replaced by a more general fit function, which is implemented in the EQS program (Bentler, 1985) and in the LISREL program, beginning with Version 7 (Joreskog & Sorbom, 1987): F(θ) = [s - σ(θ)]′U⁻¹[s - σ(θ)] (2), where s is a p* × 1 vector (such that p* = p(p + 1)/2) of the nonduplicated element... |
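The maximum likelihood fit function named in this excerpt is easy to evaluate numerically; a sketch with a hypothetical 2 × 2 covariance matrix, showing that the function is zero when the model-implied matrix reproduces the sample matrix exactly:

```python
import numpy as np

def ml_fit(S, Sigma):
    """Lawley's ML fit function: ln|Sigma| - ln|S| + tr(S Sigma^-1) - p."""
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
            + np.trace(S @ np.linalg.inv(Sigma)) - p)

S = np.array([[1.0, 0.4],
              [0.4, 1.0]])        # hypothetical sample covariance matrix
print(abs(ml_fit(S, S)) < 1e-12)  # True: perfect fit gives F = 0
```

Any misfit, e.g. fitting an identity matrix to correlated data, yields a strictly positive value, which is what makes F usable as a discrepancy measure for chi-square testing.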

37 |
The sensitivity of confirmatory maximum likelihood factor analysis to violations of measurement scale and distributional assumptions.
- Babakus, Ferguson, et al.
- 1987
(Show Context)
Citation Context ...not rescaled by known constants but by data-dependent values (i.e., standard deviations) that will randomly vary across samples. Because of this, when the observed variable covariances are expressed as correlations, the asymptotic standard errors and overall chi-square goodness-of-fit tests are not correct without adjustments to the estimation procedure (Bentler & Lee, 1983). A companion program to LISREL 7, PRELIS (Joreskog & Sorbom, 1987), can provide such adjustments. A second problem is the use of product-moment correlations when the observed variables cannot be regarded as continuous (cf. Babakus, Ferguson, & Joreskog, 1987). PRELIS also can account for this potential shortcoming of current usage by calculating the correct polychoric and polyserial coefficients (Muthen, 1984) and then adjusting the estimation procedure accordingly. In summary, these new estimation methods represent important theoretical advances. The degree, however, to which estimation methods that do not assume multivariate normality will supplant normal theory estimation methods in practice has yet to be determined. Many data sets may be adequately characterized by the multivariate normal, much as the univariate normal often adequately descri... |

35 |
Interpretational confounding of unobserved variables in structural equation models.
- Burt
- 1976
(Show Context)
Citation Context ...ment submodels are specified for x and y (cf. Joreskog & Sorbom, 1984, pp. 1.5-6), which then are simultaneously estimated with the structural submodel. In the presence of misspecification, the usual situation in practice, a one-step approach in which the measurement and structural submodels are estimated simultaneously will suffer from interpretational confounding (cf. Burt, 1973, 1976). Interpretational confounding "occurs as the assignment of empirical meaning to an unobserved variable which is other than the meaning assigned to it by an individual a priori to estimating unknown parameters" (Burt, 1976, p. 4). Furthermore, this empirically defined meaning may change considerably, depending on the specification of free and constrained parameters for the structural submodel. Interpretational confounding is reflected by marked changes in the estimates of the pattern coefficients when alternate structural models are estimated. The potential for interpretational confounding is minimized by prior separate estimation of the measurement model because no constraints are placed on the structural parameters that relate the estimated constructs to one another. Given acceptable unidimensional measurement... |

31 |
Offending estimates in covariance structure analysis: Comments on the causes of and solutions to Heywood cases.
- Dillon, Kumar, et al.
- 1987
(Show Context)
Citation Context ...the measurement model that are more likely to occur with small sample sizes are nonconvergence and improper solutions. (We discuss potential causes of these problems within the Respecification subsection.) Solutions are nonconvergent when an estimation method's computational algorithm, within a set number of iterations, is unable to arrive at values that meet prescribed termination criteria (cf. Joreskog, 1966, 1967). Solutions are improper when the values for one or more parameter estimates are not feasible, such as negative variance estimates (cf. Dillon, Kumar, & Mulani, 1987; Gerbing & Anderson, 1987; van Driel, 1978). Anderson and Gerbing (1984) found that a sample size of 150 will usually be sufficient to obtain a converged and proper solution for models with three or more indicators per factor. Measurement models in which factors are defined by only two indicators per factor can be problematic, however, so larger samples may be needed to obtain a converged and proper solution. Unfortunately, a practical limitation of estimation methods that require information from higher order moments (e.g., ADF) is that they correspondingly require larger sample sizes. The is... |
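An improper solution of the kind described here, a negative variance estimate (a Heywood case), is straightforward to flag mechanically; a sketch with hypothetical parameter estimates:

```python
# Hypothetical error-variance estimates from a fitted measurement model;
# any negative value marks the solution as improper (a Heywood case).
estimates = {"theta_1": 0.42, "theta_2": -0.07, "theta_3": 0.31}
improper = sorted(name for name, value in estimates.items() if value < 0)
print(improper)  # ['theta_2']
```

In practice, as the excerpt later suggests, the next step would be to inspect a confidence interval around the offending estimate to judge whether sampling error is the likely cause.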

31 |
The effects of sampling error and model characteristics on parameter estimation for maximum likelihood confirmatory factor analysis.
- Gerbing, Anderson
- 1985
(Show Context)
Citation Context ...onsider is to set θδ for the single indicator at the smallest value found for the other estimated error variances (θδ). Although this value is still arbitrary, it has the advantage of being based on information specific to the given research context. That is, this indicator shares a respondent sample and survey instrument with the other indicators. Sample size needed. Because full-information estimation methods depend on large-sample properties, a natural concern is the sample size needed to obtain meaningful parameter estimates. In a recent Monte Carlo study, Anderson and Gerbing (1984) and Gerbing and Anderson (1985) have investigated ML estimation for a number of sample sizes and a variety of confirmatory factor models in which the normal theory assumption was fully met. The results of this study were that although the bias in parameter estimates is of no practical significance for sample sizes as low as 50, for a given sample, the deviations of the parameter estimates from their respective population values can be quite large. Whereas this does not present a problem in statistical inference, because the standard errors computed by the LISREL program are adjusted accordingly, a sample size of 150 or more... |

30 |
Issues in the application of covariance structure analysis: A comment.
- Fornell
- 1983
(Show Context)
Citation Context ...last two ways do not, thereby obfuscating the meaning of the estimated underlying constructs. The use of correlated measurement errors can be justified only when they are specified a priori. As an example, correlated measurement errors may be expected in longitudinal research when the same indicators are measured at multiple points in time. By contrast, correlated measurement errors should not be used as respecifications because they take advantage of chance, at a cost of only a single degree of freedom, with a consequent loss of interpretability and theoretical meaningfulness (Bagozzi, 1983; Fornell, 1983). Gerbing and Anderson (1984) demonstrated how the uncritical use of correlated measurement errors for respecification, although improving goodness of fit, can mask a true underlying structure. In our experience, the patterning of the residuals has been the most useful for locating the source of misspecification in multiple-indicator measurement models. The LISREL program provides normalized residuals (Joreskog & Sorbom, 1984, p. 1.42), whereas the EQS program (Bentler, 1985, pp. 92-93) provides standardized residuals. Although Bentler and Dijkstra (1985) recently pointed out that the normaliz... |

27 |
Various causes of improper solutions of maximum likelihood factor analysis.
- van Driel
- 1978

19 |
Analysis of covariance structures.
- Bock, Bargmann
- 1966
(Show Context)
Citation Context ...nd development and PLS for application and prediction. As Joreskog and Wold (1982) concluded, "ML is theory-oriented, and emphasizes the transition from exploratory to confirmatory analysis. PLS is primarily intended for causal-predictive analysis in situations of high complexity but low theoretical information" (p. 270). Drawing on this distinction, we consider, in the remainder of this article, a confirmatory two-step approach to theory testing and development using ML or GLS methods. Estimation Methods Since the inception of contemporary structural equation methodology in the middle 1960s (Bock & Bargmann, 1966; Joreskog, 1966, 1967), maximum likelihood has been the predominant estimation method. Under the assumption of a multivariate normal distribution of the observed variables, maximum likelihood estimators have the desirable asymptotic, or large-sample, properties of being unbiased, consistent, and efficient (Kmenta, 1971). Moreover, significance testing of the individual parameters is possible because estimates of the asymptotic standard errors of the parameter estimates can be obtained. Significance testing of overall model fit also is possible beca... |

17 |
Improper solutions in the analysis of covariance structures: Their interpretability and a comparison of alternate respecifications.
- Gerbing, Anderson
- 1987
(Show Context)
Citation Context ...more likely to occur with small sample sizes are nonconvergence and improper solutions. (We discuss potential causes of these problems within the Respecification subsection.) Solutions are nonconvergent when an estimation method's computational algorithm, within a set number of iterations, is unable to arrive at values that meet prescribed termination criteria (cf. Joreskog, 1966, 1967). Solutions are improper when the values for one or more parameter estimates are not feasible, such as negative variance estimates (cf. Dillon, Kumar, & Mulani, 1987; Gerbing & Anderson, 1987; van Driel, 1978). Anderson and Gerbing (1984) found that a sample size of 150 will usually be sufficient to obtain a converged and proper solution for models with three or more indicators per factor. Measurement models in which factors are defined by only two indicators per factor can be problematic, however, so larger samples may be needed to obtain a converged and proper solution. Unfortunately, a practical limitation of estimation methods that require information from higher order moments (e.g., ADF) is that they correspondingly require larger sample sizes. The issue is not simply that lar... |

13 |
Behavior of some elliptical theory estimators with nonnormal data in a covariance structures framework: A Monte Carlo study.
- Harlow
- 1985
(Show Context)
Citation Context ...upplant normal theory estimation methods in practice has yet to be determined. Many data sets may be adequately characterized by the multivariate normal, much as the univariate normal often adequately describes univariate distributions of data. And, as Bentler (1983) noted, referring to the weight matrix U, "an estimated optimal weight matrix should be adjusted to reflect the strongest assumptions about the variables that may be possible" (p. 504). Related to this, the limited number of existing Monte Carlo investigations of normal theory ML estimators applied to nonnormal data (Browne, 1984; Harlow, 1985; Tanaka, 1984) has provided support for the robustness of ML estimation for the recovery of parameter estimates, though their associated standard errors may be biased. Because assessments of the multivariate normality assumption now can be readily made by using the EQS and PRELIS programs, a researcher can make an informed choice on estimation methods in practice, weighing the trade-offs between the reasonableness of an underlying normal theory assumption and the limitations of arbitrary theory methods (e.g., constraints on model size and the need for larger sample sizes, which we discuss lat... |

12 |
Testing a simple structure hypothesis in factor analysis.
- Joreskog
- 1966
(Show Context)
Citation Context ...nd Wold (1982) concluded, "ML is theory-oriented, and emphasizes the transition from exploratory to confirmatory analysis. PLS is primarily intended for causal-predictive analysis in situations of high complexity but low theoretical information" (p. 270). Drawing on this distinction, we consider, in the remainder of this article, a confirmatory two-step approach to theory testing and development using ML or GLS methods. Estimation Methods Since the inception of contemporary structural equation methodology in the middle 1960s (Bock & Bargmann, 1966; Joreskog, 1966, 1967), maximum likelihood has been the predominant estimation method. Under the assumption of a multivariate normal distribution of the observed variables, maximum likelihood estimators have the desirable asymptotic, or large-sample, properties of being unbiased, consistent, and efficient (Kmenta, 1971). Moreover, significance testing of the individual parameters is possible because estimates of the asymptotic standard errors of the parameter estimates can be obtained. Significance testing of overall model fit also is possible because the fit function is asymptotically distributed as chisqua... |

11 |
The interdependence of theory, methodology, and empirical data: Causal modeling as an approach to construct validation. In
- Bentler
- 1978
(Show Context)
Citation Context ...ogg Graduate School of Management, Northwestern University, Evanston, Illinois 60208. ...approach, however, does not necessarily mean that it is the preferred way to accomplish the model-building task. In this article, we contend that there is much to gain in theory testing and the assessment of construct validity from separate estimation (and respecification) of the measurement model prior to the simultaneous estimation of the measurement and structural submodels. The measurement model in conjunction with the structural model enables a comprehensive, confirmatory assessment of construct validity (Bentler, 1978). The measurement model provides a confirmatory assessment of convergent validity and discriminant validity (Campbell & Fiske, 1959). Given acceptable convergent and discriminant validities, the test of the structural model then constitutes a confirmatory assessment of nomological validity (Campbell, 1960; Cronbach & Meehl, 1955). The organization of the article is as follows: As background to the two-step approach, we begin with a section in which we discuss the distinction between exploratory and confirmatory analysis, the distinction between complementary modeling approaches for theory test... |

11 |
Structural modeling and Psychometrika: An historical perspective on growth and achievements.
- Bentler
- 1986
(Show Context)
Citation Context ...or analysis, in which the question of interest is the number of factors that best represents a given covariance matrix. However, their derivations were developed for a general discrepancy function, of which the fit function used in confirmatory analyses of covariance structures (cf. Browne, 1984; Joreskog, 1978) is a special case. Their results even extend to situations in which the null hypothesis need not be true. In such situations, the SCDTs will still be asymptotically independent but asymptotically distributed as noncentral chi-square variates. A recent development in the EQS program (Bentler, 1986a) is the provision of Wald tests and Lagrange multiplier tests (cf. Buse, 1982), each of which is asymptotically equivalent to chi-square difference tests. This allows a researcher, within a single computer run, to obtain overall goodness-of-fit information that is asymptotically equivalent to what would be obtained from separate SCDT comparisons of Mc and M0 with the specified model, Mt. |

11 |
The use of structural equation models in evaluation research.
- Sorbom, Joreskog
- 1982
(Show Context)
Citation Context ...ly estimate the construct (i.e., has no random measurement error or measure-specificity component). The question then becomes "At what values should the theta-delta and lambda parameters be set?" To answer this, ideally, a researcher would like to have an independent estimate for the error variance of the single indicator, perhaps drawn from prior research, but often this is not available. In the absence of an independent estimate, the choice of values becomes arbitrary. In the past, a conservative value for θδ, such as .1s²x, has been chosen, and its associated λ has been set at .95sx (e.g., Sorbom & Joreskog, 1982). Another conservative alternative to consider is to set θδ for the single indicator at the smallest value found for the other estimated error variances (θδ). Although this value is still arbitrary, it has the advantage of being based on information specific to the given research context. That is, this indicator shares a respondent sample and survey instrument with the other indicators. Sample size needed. Because full-information estimation methods depend on large-sample properties, a natural concern is the sample size needed to obtain meaningful parameter estimates. In a recent Monte Carlo ... |
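The single-indicator convention cited in this excerpt (loading fixed at .95 times the indicator's standard deviation, error variance at .10 times its variance, e.g., Sorbom & Joreskog, 1982) can be checked for internal consistency with a sketch; the standard deviation below is hypothetical.

```python
s_x = 2.0                # hypothetical sample standard deviation of the indicator
lam = 0.95 * s_x         # fixed loading (lambda)
theta = 0.10 * s_x ** 2  # fixed error variance (theta-delta)

# Implied reliability lambda^2 / (lambda^2 + theta): the two fixed values
# jointly assume the single indicator is about 90% reliable.
reliability = lam ** 2 / (lam ** 2 + theta)
print(round(reliability, 2))  # 0.9
```

This shows why the two constants travel together: fixing the loading at .95sx while fixing the error variance at .1s²x amounts to a single assumption of roughly .90 reliability.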

10 |
Testing for ellipsoidal symmetry of a multivariate density.
- Beran
- 1979
(Show Context)
Citation Context ...hat are called diagonally weighted least squares (DWLS) estimates. Bentler (1985) also implemented in the EQS program an estimation approach that assumes a somewhat more general underlying distribution than the multivariate normal assumed for ML estimation: elliptical estimation. The multivariate normal distribution assumes that each variable has zero skewness (third-order moments) and zero kurtosis (fourth-order moments). The multivariate elliptical distribution is a generalization of the multivariate normal in that the variables may share a common, nonzero kurtosis parameter (Bentler, 1983; Beran, 1979; Browne, 1984). As with the multivariate normal, iso-density contours are ellipsoids, but they may reflect more platykurtic or leptokurtic distributions, depending on the magnitude and direction of the kurtosis parameter. The elliptical distribution with regard to Equation 2 is a generalization of the multivariate normal and, thus, provides more flexibility in the types of data analyzed. Another advantage of this distribution is that the fourth-order moments can be expressed as a function of the second-order moments with only the addition of a single... |

9 |
Unidimensional measurement, second-order factor analysis, and causal models.
- Hunter, Gerbing
- 1982
(Show Context)
Citation Context ... measures, ξ is a vector of n underlying factors such that n < q, Λ is a q × n matrix of pattern coefficients or factor loadings relating the observed measures to the underlying construct factors, and δ is a vector of q variables that represents random measurement error and measure specificity. It is assumed for this model that E(ξδ′) = 0. The variance-covariance matrix for x, defined as Σ, is Σ = ΛΦΛ′ + Θ_δ, (4) where Φ is the n × n covariance matrix of ξ and Θ_δ is the diagonal q × q covariance matrix of δ. Need for Unidimensional Measurement Achieving unidimensional measurement (cf. Anderson & Gerbing, 1982; Hunter & Gerbing, 1982) is a crucial undertaking in theory testing and development. A necessary condition for assigning meaning to estimated constructs is that the measures that are posited as alternate indicators of each construct must be acceptably unidimensional. That is, each set of alternate indicators has only one underlying trait or construct in common (Hattie, 1985; McDonald, 1981). Two criteria, each representing necessary conditions, are used in assessing unidimensionality: internal consistency and external consistency. The internal consistency criterion can be presented in the following fundamental equati...
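The covariance structure Σ = ΛΦΛ′ + Θ_δ in Equation 4 can be computed directly. A small numeric sketch with hypothetical values (a 4-indicator, 2-factor measurement model; none of the numbers are from the article):

```python
import numpy as np

Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.9],
                   [0.0, 0.6]])                  # q x n pattern coefficients
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])                     # n x n factor covariance matrix
Theta_delta = np.diag([0.36, 0.51, 0.19, 0.64])  # diagonal q x q error variances

# Equation 4: Sigma = Lambda Phi Lambda' + Theta_delta
Sigma = Lambda @ Phi @ Lambda.T + Theta_delta

# Errors were chosen as 1 - lambda^2, so implied variances are all 1.0
# and the off-diagonal entries are implied indicator correlations.
print(np.diag(Sigma))   # all (numerically) 1.0
print(Sigma[0, 1])      # ~ 0.56 (= 0.8 * 0.7, indicators on the same factor)
```

Within-factor correlations are products of loadings; cross-factor correlations pick up the factor correlation as well (e.g., 0.8 × 0.3 × 0.9 = 0.216).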

8 |
Issues in the application of covariance structure analysis: A further comment
- Bagozzi
- 1983
(Show Context)
Citation Context ...is that with multiple-indicator measurement models, each estimated construct is defined by at least two measures, and each measure is intended as an estimate of only one construct. Unidimensional measures of this type have been referred to as congeneric measurements (Joreskog, 1971). By contrast, measurement models that contain correlated measurement errors or that have indicators that load on more than one estimated construct do not represent unidimensional construct measurement (Gerbing & Anderson, 1984). As a result, assignment of meaning to such estimated constructs can be problematic (cf. Bagozzi, 1983; Fornell, 1983; Gerbing & Anderson, 1984). Some dissent, however, exists about the application of the confirmatory factor analysis model for assessing unidimensionality. Cattell (1973, 1978) has argued that individual measures or items, like real-life behaviors, tend to be factorially complex. "In other words, to show that a given matrix is rank one is not to prove that the items are measuring a pure unitary trait factor in common: it may be a mixture of unitary traits" (Cattell, 1973, p. 382). According to Cattell (1973), although these items are unidimensional with respect to each other, the...

8 |
General ability, its existence and nature.
- Hart, Spearman
- 1913
(Show Context)
Citation Context ...a crucial undertaking in theory testing and development. A necessary condition for assigning meaning to estimated constructs is that the measures that are posited as alternate indicators of each construct must be acceptably unidimensional. That is, each set of alternate indicators has only one underlying trait or construct in common (Hattie, 1985; McDonald, 1981). Two criteria, each representing necessary conditions, are used in assessing unidimensionality: internal consistency and external consistency. The internal consistency criterion can be presented in the following fundamental equation (Hart & Spearman, 1913, p. 58; Spearman, 1914, p. 107): ρ_ac/ρ_ad = ρ_bc/ρ_bd, (5) where a, b, c, and d are measures of the same construct, ξ. This equality should hold to within sampling error (Spearman & Holzinger, 1924), and at least four measures of a construct are needed for an assessment. A related equation is the product rule for internal consistency: ρ_ab = ρ_aξ ρ_bξ, (6) where a and b are measures of some construct, ξ. The external consistency criterion can be given by a redefinition of Equation 5, where (a) a, b, and c are alternate indicators of a given construct and d is redefined as an indicator of another cons...
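The tetrad equality of Equation 5 follows from the product rule of Equation 6, and a quick numerical check makes that concrete. The loadings below are hypothetical, chosen only for illustration:

```python
# Four congeneric measures a, b, c, d loading on a single factor xi
# (hypothetical loadings).
lam = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.6}

# Product rule (Eq. 6): rho_ij = rho_i_xi * rho_j_xi = lam_i * lam_j
rho = {(i, j): lam[i] * lam[j] for i in lam for j in lam if i != j}

# Internal consistency criterion (Eq. 5): rho_ac / rho_ad = rho_bc / rho_bd
lhs = rho[("a", "c")] / rho[("a", "d")]
rhs = rho[("b", "c")] / rho[("b", "d")]
print(abs(lhs - rhs) < 1e-12)   # True: the tetrad equality holds exactly
```

In both ratios the loading of the row measure (λ_a or λ_b) cancels, leaving λ_c/λ_d on each side, which is why the equality characterizes a single common factor.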

8 |
Some results on the estimation of covariance structure models. Dissertation Abstracts International,
- Tanaka
- 1984
(Show Context)
Citation Context ...tic, or large-sample, properties of being unbiased, consistent, and efficient (Kmenta, 1971). Moreover, significance testing of the individual parameters is possible because estimates of the asymptotic standard errors of the parameter estimates can be obtained. Significance testing of overall model fit also is possible because the fit function is asymptotically distributed as chi-square, adjusted by a constant multiplier. Although maximum likelihood parameter estimates in at least moderately sized samples appear to be robust against a moderate violation of multivariate normality (Browne, 1984; Tanaka, 1984), the problem is that the asymptotic standard errors and overall chi-square test statistic appear not to be. Related to this, using normal theory estimation methods when the data have an underlying leptokurtic (peaked) distribution appears to lead to rejection of the null hypothesis for overall model fit more often than would be expected. Conversely, when the underlying distribution is platykurtic (flat), the opposite result would be expected to occur (Browne, 1984). To address these potential problems, recent developments in estimation procedures, particularly by Bentler (1983) and Browne (...

7 | Covariance structures under polynomial constraints: Applications to correlation and alpha-type structural models.
- Bentler, Lee
- 1983
(Show Context)
Citation Context ...nd-order moment structures. In addition to the relaxation of multivariate normality, recent developments in estimation procedures have addressed at least two other issues. One problem is that when the data are standardized, the covariances are not rescaled by known constants but by data-dependent values (i.e., standard deviations) that will randomly vary across samples. Because of this, when the observed variable covariances are expressed as correlations, the asymptotic standard errors and overall chi-square goodness-of-fit tests are not correct without adjustments to the estimation procedure (Bentler & Lee, 1983). A companion program to LISREL 7, PRELIS (Joreskog & Sorbom, 1987), can provide such adjustments. A second problem is the use of product-moment correlations when the observed variables cannot be regarded as continuous (cf. Babakus, Ferguson, & Joreskog, 1987). PRELIS also can account for this potential shortcoming of current usage by calculating the correct polychoric and polyserial coefficients (Muthen, 1984) and then adjusting the estimation procedure accordingly. In summary, these new estimation methods represent important theoretical advances. The degree, however, to which estimation meth...

7 |
Determinacy of common factors: A nontechnical review.
- McDonald, Mulaik
- 1979
(Show Context)
Citation Context .... Because of this assumption, the amount of variance explained in the set of observed measures is not of primary concern. Reflecting this, full-information methods provide parameter estimates that best explain the observed covariances. Two further relative strengths of full-information approaches are that they provide the most efficient parameter estimates (Joreskog & Wold, 1982) and an overall test of model fit. Because of the underlying assumption of random error and measure specificity, however, there is inherent indeterminacy in the estimation of factor scores (cf. Lawley & Maxwell, 1971; McDonald & Mulaik, 1979; Steiger, 1979). This is not a concern in theory testing, whereas in predictive applications this will likely result in some loss of predictive accuracy. For application and prediction, a PLS approach has relative strength. Under this approach, one can assume that all observed measure variance is useful variance to be explained. That is, under a principal-component model, no random error variance or measure-specific variance (i.e., unique variance) is assumed. Parameters are estimated so as to maximize the variance explained in either the set of observed measures (reflective mode) or the set o...

7 |
Theory of two factors.
- Spearman
- 1914
(Show Context)
Citation Context ...y testing and development. A necessary condition for assigning meaning to estimated constructs is that the measures that are posited as alternate indicators of each construct must be acceptably unidimensional. That is, each set of alternate indicators has only one underlying trait or construct in common (Hattie, 1985; McDonald, 1981). Two criteria, each representing necessary conditions, are used in assessing unidimensionality: internal consistency and external consistency. The internal consistency criterion can be presented in the following fundamental equation (Hart & Spearman, 1913, p. 58; Spearman, 1914, p. 107): ρ_ac/ρ_ad = ρ_bc/ρ_bd, (5) where a, b, c, and d are measures of the same construct, ξ. This equality should hold to within sampling error (Spearman & Holzinger, 1924), and at least four measures of a construct are needed for an assessment. A related equation is the product rule for internal consistency: ρ_ab = ρ_aξ ρ_bξ, (6) where a and b are measures of some construct, ξ. The external consistency criterion can be given by a redefinition of Equation 5, where (a) a, b, and c are alternate indicators of a given construct and d is redefined as an indicator of another construct or (b) both c and...

6 |
Multistructural statistical models applied to factor analysis.
- Bentler
- 1976
(Show Context)
Citation Context ...his confidence interval and the size of the interval is comparable to that for proper estimates, the likely cause of the improper estimate is sampling error. Building on this work, Gerbing and Anderson (1987) recently found that for improper estimates due to sampling error, respecifying the model with the problematic parameter fixed at zero has no appreciable effect on the parameter estimates of other factors or on the overall goodness-of-fit indices. Alternately, this parameter can be fixed at some arbitrarily small, positive number (e.g., .005) to preserve the confirmatory factor model (cf. Bentler, 1976). Given a converged and proper solution but unacceptable overall fit, there are four basic ways to respecify indicators that have not "worked out as planned": Relate the indicator to a different factor, delete the indicator from the model, relate the indicator to multiple factors, or use correlated measurement errors. The first two ways preserve the potential to have unidimensional measurement and are preferred because of this, whereas the last two ways do not, thereby obfuscating the meaning of the estimated underlying constructs. The use of correlated measurement errors can be justified only...

6 |
Methods of reordering the correlation matrix to facilitate visual inspection and preliminary cluster analysis.
- Hunter
- 1973
(Show Context)
Citation Context ...rfitting), and when another factor on which it should belong exists, an obverse pattern of large positive residuals will be observed with the indicators of this factor (representing underfitting). As another example, indicators that are multidimensional tend to have large normalized residuals (the result of either underfitting or overfitting) with indicators of more than one factor, which often represents the only large normalized residual for each of these other indicators. Useful adjuncts to the pattern of residuals are similarity (or proportionality) coefficients (Anderson & Gerbing, 1982; Hunter, 1973) and multiple-groups analysis (cf. Anderson & Gerbing, 1982; Nunnally, 1978), each of which can readily be computed with the ITAN program (Gerbing & Hunter, 1987). A similarity coefficient, u_ij, for any two indicators, x_i and x_j, can be defined for a set of q indicators as u_ij = Σ_k ρ_ik ρ_jk / [(Σ_k ρ_ik²)(Σ_k ρ_jk²)]^(1/2), (8) The value of this index ranges from -1.0 to +1.0, with values greater in magnitude indicating greater internal and external consistency for the two indicators. Thus, similarity coefficients are useful because they efficiently summarize the internal and external consistency of the indicators with one another. Alt...
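A similarity coefficient of this kind can be read as a proportionality (cosine) coefficient between the correlation profiles of two indicators. The sketch below assumes that form, with the two indicators' own entries excluded from the profiles; the correlation matrix, the loadings, and the `similarity` helper are all hypothetical illustrations, not the ITAN implementation.

```python
import numpy as np

# Toy correlation matrix implied by a two-factor model (hypothetical
# loadings; factor correlation .3): indicators 0,1 load on factor 1,
# indicators 2,3 on factor 2.
Lambda = np.array([[0.8, 0.0], [0.7, 0.0], [0.0, 0.9], [0.0, 0.6]])
Phi = np.array([[1.0, 0.3], [0.3, 1.0]])
R = Lambda @ Phi @ Lambda.T
np.fill_diagonal(R, 1.0)

def similarity(R, i, j):
    """Proportionality coefficient between the correlation profiles of
    indicators i and j, excluding their own entries (assumed form of u_ij)."""
    mask = np.ones(R.shape[0], dtype=bool)
    mask[[i, j]] = False
    ri, rj = R[i, mask], R[j, mask]
    return ri @ rj / np.sqrt((ri @ ri) * (rj @ rj))

print(round(similarity(R, 0, 1), 3))  # same-factor pair: profiles proportional
print(round(similarity(R, 0, 2), 3))  # cross-factor pair: noticeably smaller
```

Alternate indicators of one factor have correlation profiles that are scalar multiples of each other, so their coefficient is 1.0 in the population; indicators of different factors do not.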

5 |
Confirmatory factor-analytic structures and the theory construction process.
- Burt
- 1973
(Show Context)
Citation Context ... simultaneous estimation of the measurement and structural submodels. In putting forth a specific two-step approach, we use the concepts of nested models, pseudo chi-square tests, and sequential chi-square difference tests (SCDTs) and draw on some recent work from quantitative psychology (Steiger, Shapiro, & Browne, 1985). These tests enable a separate assessment of the adequacy of the substantive model of interest, apart from that of the measurement model. We first present the structural model and discuss the concept of interpretational confounding (Burt, 1973, 1976). A confirmatory structural model that specifies the posited causal relations of the estimated constructs to one another can be given directly from Joreskog and Sorbom (1984, p. 1.5). This model can be expressed as η = Bη + Γξ + ζ, (9) where η is a vector of m endogenous constructs, ξ is a vector of n exogenous constructs, B is an m × m matrix of coefficients representing the effects of the endogenous constructs on one another, Γ is an m × n matrix of coefficients representing the effects of the exogenous constructs on the endogenous constructs, and ζ is a vector of m residuals (errors in equations and r...
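Equation 9 can be solved for η through its reduced form, η = (I − B)⁻¹(Γξ + ζ), which a few lines of linear algebra make concrete. All coefficient values below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

# Structural model (Eq. 9): eta = B*eta + Gamma*xi + zeta, with m = 2
# endogenous and n = 1 exogenous constructs (hypothetical coefficients).
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])     # m x m: eta_2 depends on eta_1
Gamma = np.array([[0.7],
                  [0.2]])      # m x n: effects of the exogenous construct
xi = np.array([1.0])           # exogenous construct score
zeta = np.array([0.1, -0.05])  # m residuals (errors in equations)

# Reduced form: eta = (I - B)^{-1} (Gamma*xi + zeta)
eta = np.linalg.solve(np.eye(2) - B, Gamma @ xi + zeta)
print(eta)                     # [0.8  0.55]
```

Substituting back confirms the structural relations: η₂ = 0.5·η₁ + 0.2·ξ + ζ₂ = 0.4 + 0.2 − 0.05 = 0.55.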

5 |
The weight matrix in asymptotic distribution-free methods.
- Mooijaart, Bentler
- 1985
(Show Context)
Citation Context ...soids, but they may reflect more platykurtic or leptokurtic distributions, depending on the magnitude and direction of the kurtosis parameter. The elliptical distribution with regard to Equation 2 is a generalization of the multivariate normal and, thus, provides more flexibility in the types of data analyzed. Another advantage of this distribution is that the fourth-order moments can be expressed as a function of the second-order moments with only the addition of a single kurtosis parameter, greatly simplifying the structure of U. Bentler (1983) and Mooijaart and Bentler (1985) have outlined an estimation procedure even more ambitious than any of those presently implemented in EQS or LISREL 7. This procedure, called asymptotically distribution-free reweighted least squares (ARLS), generalizes on Browne's (1984) ADF method. In an ADF method (or AGLS in EQS notation), U is defined as a constant before the minimization of Equation 2 begins. By contrast, in ARLS, U is updated at each iteration of the minimization algorithm. This updating is based on Bentler's (1983) expression of higher order moment structures, specified as a function of the current estimates of the mode...

5 |
A second-order longitudinal model of ability structure.
- Weeks
- 1980
(Show Context)
Citation Context ...Anderson, 1984). The measurement approach that we have advocated is not, however, necessarily inconsistent with Cattell's (1973, 1978) approach. The two approaches can become compatible when the level of analysis shifts from the individual items to a corresponding set of composites defined by these items. Further analyses of these composites could then be undertaken to isolate the constructs of interest, which would be conceptualized as higher order factors (Gerbing & Anderson, 1984). One possibility is a second-order confirmatory factor analysis as outlined by, for example, Joreskog (1971) or Weeks (1980). Another possibility is to interpret the resulting composites within an existing "reference factor system," such as the 16 personality dimensions provided by Cattell (1973) for the personality domain. Setting the metric of the factors. For identification of the measurement model, one must set the metric (variances) of the factors. A preferred way of doing this is to fix the diagonal of the phi matrix at 1.0, giving all factors unit variances, rather than to arbitrarily fix the pattern coefficient for one indicator of each factor at 1.0 (Gerbing & Hunter, 1982). Setting the metric in this way ...

4 |
Two Structural Equation Models:
- Fornell, Bookstein
- 1982
(Show Context)
Citation Context ...lidate the final model on another sample drawn from the population to which the results are to be generalized. This cross-validation would be accomplished by specifying the same model with freely estimated parameters or, in what represents the quintessential confirmatory analysis, the same model with the parameter estimates constrained to the previously estimated values. Complementary Approaches for Theory Testing Versus Predictive Application A fundamental distinction can be made between the use of structural equation modeling for theory testing and development versus predictive application (Fornell & Bookstein, 1982; Joreskog & Wold, 1982). This distinction and its implications concern a basic choice of estimation method and underlying model. For clarity, we can characterize this choice as one between a full-information (ML or GLS) estimation approach (e.g., Bentler, 1983; Joreskog, 1978) in conjunction with the common factor model (Harman, 1976) and a partial least squares (PLS) estimation approach (e.g., Wold, 1982) in conjunction with the principal-component model (Harman, 1976). For theory testing and development, the ML or GLS approach has several relative strengths. Under the common factor model, o... |

3 |
The function of theory in a dilemma of path analysis.
- Young
- 1977
(Show Context)
Citation Context ...e confidence interval (±two standard errors) around the correlation estimate between the two factors includes 1.0. Respecification Because the emphasis of this article is on structural equation modeling in practice, we recognize that most often some respecification of the measurement model will be required. It must be stressed, however, that respecification decisions should not be based on statistical considerations alone but rather in conjunction with theory and content considerations. Consideration of theory and content both greatly reduces the number of alternate models to investigate (cf. Young, 1977) and reduces the possibility of taking advantage of sampling error to attain goodness of fit. Sometimes, the first respecification necessary is in response to nonconvergence or an improper solution. Nonconvergence can occur because of a fundamentally incongruent pattern of sample covariances that is caused either by sampling error in conjunction with a properly specified model or by a misspecification. Relying on content, one can obtain convergence for the model by respecifying one or more problematic indicators to different constructs or by excluding them from further analysis. Considering im... |

2 |
Theory and implementation ofEQS:A structural equations program. Los Angeles:
- Bentler
- 1985
(Show Context)
Citation Context ...ns, as well as avoiding some specious inferences. The model-building task can be thought of as the analysis of two conceptually distinct models (Anderson & Gerbing, 1982; Joreskog & Sorbom, 1984). A confirmatory measurement, or factor analysis, model specifies the relations of the observed measures to their posited underlying constructs, with the constructs allowed to intercorrelate freely. A confirmatory structural model then specifies the causal relations of the constructs to one another, as posited by some theory. With full-information estimation methods, such as those provided in the EQS (Bentler, 1985) or LISREL (Joreskog & Sorbom, 1984) programs, the measurement and structural submodels can be estimated simultaneously. The ability to do this in a one-step analysis approach... [Author note: This work was supported in part by the McManus Research Professorship awarded to James C. Anderson. We gratefully acknowledge the comments and suggestions of Jeanne Brett, Claes Fornell, David Larcker, William Perreault, Jr., and James Steiger. Correspondence concerning this article should be addressed to James C. Anderson, Department of Marketing, J. L. Kellogg Graduate School of Management, Northwestern University, Evanston, ...]

2 |
Lagrange multiplier and Wald tests for EQS and EQS/PC. Los Angeles:
- Bentler
- 1986
(Show Context)
Citation Context ...or analysis, in which the question of interest is the number of factors that best represents a given covariance matrix. However, their derivations were developed for a general discrepancy function, of which the fit function used in confirmatory analyses of covariance structures (cf. Browne, 1984; Joreskog, 1978) is a special case. Their results even extend to situations in which the null hypothesis need not be true. In such situations, the SCDTs will still be asymptotically independent but asymptotically distributed as noncentral chi-square variates. A recent development in the EQS program (Bentler, 1986a) is the provision of Wald tests and Lagrange multiplier tests (cf. Buse, 1982), each of which is asymptotically equivalent to chi-square difference tests. This allows a researcher, within a single computer run, to obtain overall goodness-of-fit information that is asymptotically equivalent to what would be obtained from separate SCDT comparisons of Mc and M0 with the specified model, Mt. [Figure: decision-tree for respecification based on sequential chi-square difference tests among the models Ms, Mt, Mc, Mu, and Mu'] ...
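The chi-square difference test underlying the SCDT logic is simple to carry out: the difference in fit statistics between two nested models is referred to a chi-square distribution with the difference in degrees of freedom. The fit values below are made up for illustration only:

```python
from scipy.stats import chi2

# Hypothetical fit statistics for two nested models: a more constrained
# model M0 and a less constrained structural model Mt.
chisq_M0, df_M0 = 112.4, 48
chisq_Mt, df_Mt = 95.1, 44

# Under H0 (the constraints hold), the difference is asymptotically
# chi-square with df equal to the difference in degrees of freedom.
diff = chisq_M0 - chisq_Mt
df_diff = df_M0 - df_Mt
p = chi2.sf(diff, df_diff)     # upper-tail p value
print(df_diff, round(diff, 1), p < 0.01)   # here the constraints are rejected
```

A significant difference favors the less constrained model; a nonsignificant one favors the more parsimonious, constrained model.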

2 | ITAN: A statistical package for ITem ANalysis including multiple groups confirmatory factor analysis. - Gerbing, Hunter - 1987 |

2 |
The sampling error in the theory of two factors.
- Spearman, Holzinger
- 1924
(Show Context)
Citation Context ... each construct must be acceptably unidimensional. That is, each set of alternate indicators has only one underlying trait or construct in common (Hattie, 1985; McDonald, 1981). Two criteria, each representing necessary conditions, are used in assessing unidimensionality: internal consistency and external consistency. The internal consistency criterion can be presented in the following fundamental equation (Hart & Spearman, 1913, p. 58; Spearman, 1914, p. 107): ρ_ac/ρ_ad = ρ_bc/ρ_bd, (5) where a, b, c, and d are measures of the same construct, ξ. This equality should hold to within sampling error (Spearman & Holzinger, 1924), and at least four measures of a construct are needed for an assessment. A related equation is the product rule for internal consistency: ρ_ab = ρ_aξ ρ_bξ, (6) where a and b are measures of some construct, ξ. The external consistency criterion can be given by a redefinition of Equation 5, where (a) a, b, and c are alternate indicators of a given construct and d is redefined as an indicator of another construct or (b) both c and d are redefined as alternate indicators of another construct. A related equation is the product rule for external consistency: ...

1 |
The metric of the latent variables in the LISREL-IV analysis.
- Gerbing, Hunter
- 1982
(Show Context)
Citation Context ...utlined by, for example, Joreskog (1971) or Weeks (1980). Another possibility is to interpret the resulting composites within an existing "reference factor system," such as the 16 personality dimensions provided by Cattell (1973) for the personality domain. Setting the metric of the factors. For identification of the measurement model, one must set the metric (variances) of the factors. A preferred way of doing this is to fix the diagonal of the phi matrix at 1.0, giving all factors unit variances, rather than to arbitrarily fix the pattern coefficient for one indicator of each factor at 1.0 (Gerbing & Hunter, 1982). Setting the metric in this way allows a researcher to test the significance of each pattern coefficient, which is of interest, rather than to forgo this and test whether the factor variances are significantly different from zero, which typically is not of interest. Single indicators. Although having multiple indicators for each construct is strongly advocated, sometimes in practice only a single indicator of some construct is available. And, as most often is the case, this indicator seems unlikely to perfectly estimate the construct (i.e., has no random measurement error or measure-specifici... |

1 | Factor indeterminacy in the 1930's and the 1970's: Some interesting parallels. - Steiger - 1979 |