Results 1 - 10 of 84
Implicit measures: A normative analysis and review - Psychological Bulletin, 2009
"... Implicit measures can be defined as outcomes of measurement procedures that are caused in an automatic manner by psychological attributes. To establish that a measurement outcome is an implicit measure, one should examine (a) whether the outcome is causally produced by the psychological attribute it ..."
Abstract - Cited by 64 (6 self)
Implicit measures can be defined as outcomes of measurement procedures that are caused in an automatic manner by psychological attributes. To establish that a measurement outcome is an implicit measure, one should examine (a) whether the outcome is causally produced by the psychological attribute it was designed to measure, (b) the nature of the processes by which the attribute causes the outcome, and (c) whether these processes operate automatically. This normative analysis provides a heuristic framework for organizing past and future research on implicit measures. The authors illustrate the heuristic function of their framework by using it to review past research on the 2 implicit measures that are currently most popular: effects in implicit association tests and affective priming tasks.
The attack of the psychometricians, 2006
"... This paper analyzes the theoretical, pragmatic, and substantive factors that have hampered the integration between psychology and psychometrics. Theoretical factors include the operationalist mode of thinking which is common throughout psychology, the dominance of classical test theory, and the use ..."
Abstract - Cited by 37 (1 self)
This paper analyzes the theoretical, pragmatic, and substantive factors that have hampered the integration between psychology and psychometrics. Theoretical factors include the operationalist mode of thinking which is common throughout psychology, the dominance of classical test theory, and the use of “construct validity” as a catch-all category for a range of challenging psychometric problems. Pragmatic factors include the lack of interest in mathematically precise thinking in psychology, inadequate representation of psychometric modeling in major statistics programs, and insufficient mathematical training in the psychological curriculum. Substantive factors relate to the absence of psychological theories that are sufficiently strong to motivate the structure of psychometric models. Following the identification of these problems, a number of promising recent developments are discussed, and suggestions are made to further the integration of psychology and psychometrics.
A suggested change in terminology and emphasis regarding validity and education - Educational Researcher, 2007
"... This article raises a number of questions about the current unified the-ory of test validity that has construct validity at its center. The authors suggest a different way of conceptualizing the problem of establishing validity by considering whether the focus of the investigation of a test is inter ..."
Abstract - Cited by 20 (0 self)
This article raises a number of questions about the current unified theory of test validity that has construct validity at its center. The authors suggest a different way of conceptualizing the problem of establishing validity by considering whether the focus of the investigation of a test is internal to the test itself or focuses on constructs and relationships that are external to the test. They also consider whether the perspective on the test examination is theoretical or practical. The resulting taxonomy, encompassing both investigative focus and perspective, serves to organize a reconceptualization of the field of validity studies. The authors argue that this approach, together with changes in the rest of the terminology regarding validity, leads to a more understandable and usable model.
A dynamical model of general intelligence: the positive manifold of intelligence by mutualism - Psychological Review, 2006
"... Scores on cognitive tasks used in intelligence tests correlate positively with each other, that is, they display a positive manifold of correlations. The positive manifold is often explained by positing a dominant latent variable, the g factor, associated with a single quantitative cognitive or biol ..."
Abstract - Cited by 18 (0 self)
Scores on cognitive tasks used in intelligence tests correlate positively with each other, that is, they display a positive manifold of correlations. The positive manifold is often explained by positing a dominant latent variable, the g factor, associated with a single quantitative cognitive or biological process or capacity. In this article, a new explanation of the positive manifold based on a dynamical model is proposed, in which reciprocal causation or mutualism plays a central role. It is shown that the positive manifold emerges purely by positive beneficial interactions between cognitive processes during development. A single underlying g factor plays no role in the model. The model offers explanations of important findings in intelligence research, such as the hierarchical factor structure of intelligence, the low predictability of intelligence from early childhood performance, the integration/differentiation effect, the increase in heritability of g, and the Jensen effect, and is consistent with current explanations of the Flynn effect.
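The mutualism mechanism can be made concrete with a toy simulation. The sketch below is not the authors' model code; it assumes a simple logistic-growth equation with weak positive coupling between processes, and all parameter values (growth rate a, capacities K, coupling matrix M) are illustrative. Individual differences enter only through K, with no common factor anywhere in the data-generating process, yet the end-state scores correlate positively across simulated people.

# Toy mutualism simulation (illustrative only, not the article's model code).
# Each of n_proc cognitive processes grows logistically and receives a small
# positive boost from every other process; no common (g) factor is simulated.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_proc, n_steps, dt = 500, 6, 5000, 0.01
a = 0.5                                            # growth rate, equal for all
K = rng.uniform(5.0, 15.0, (n_people, n_proc))     # person-specific capacities
M = np.full((n_proc, n_proc), 0.15)                # weak positive coupling
np.fill_diagonal(M, 0.0)

x = np.full((n_people, n_proc), 0.1)               # starting ability levels
for _ in range(n_steps):
    growth = a * x * (1.0 - x / K)                 # logistic growth toward K
    coupling = a * x * (x @ M.T) / K               # mutualistic boosts
    x = x + dt * (growth + coupling)

# Correlations between the six end-state scores across people are all positive,
# i.e., a positive manifold, even though K was drawn independently per process.
print(np.round(np.corrcoef(x, rowvar=False), 2))

Setting the off-diagonal entries of M to zero removes the simulated manifold, which is the sense in which this kind of model attributes the positive correlations to the interactions rather than to a single underlying capacity.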
The fallacy of formative measurement - Organizational Research Methods, 2011
"... Inmanagement research, there is a growing trend toward formativemeasurement, in whichmeasures are treated as causes of constructs. Formative measurement can be contrasted with reflective measurement, in which constructs are specified as causes of measures. Although recent work seems to suggest that ..."
Abstract - Cited by 17 (0 self)
In management research, there is a growing trend toward formative measurement, in which measures are treated as causes of constructs. Formative measurement can be contrasted with reflective measurement, in which constructs are specified as causes of measures. Although recent work seems to suggest that formative measurement is a viable alternative to reflective measurement, the emerging enthusiasm for formative measurement is based on conceptions of constructs, measures, and causality that are difficult to defend. This article critically compares reflective and formative measurement on the basis of dimensionality, internal consistency, identification, measurement error, construct validity, and causality. This comparison leads to the conclusion that the presumed viability of formative measurement is a fallacy, and the objectives of formative measurement can be achieved using alternative models with reflective measures.
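The contrast between the two specifications can be illustrated with a short simulation; it is not taken from the article, and the loadings, weights, and error scales below are arbitrary. Under the reflective model the indicators share a common cause and are therefore internally consistent, whereas under the formative model the indicators that define the composite need not correlate with one another at all.

# Illustrative contrast between reflective and formative specifications
# (arbitrary parameter values, not the article's analysis).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Reflective: the construct causes the measures, x_i = lambda_i * eta + error.
eta = rng.normal(size=n)
lambdas = np.array([0.8, 0.7, 0.6, 0.75])
x_reflective = eta[:, None] * lambdas + rng.normal(scale=0.5, size=(n, 4))

# Formative: the measures cause the construct, eta = sum_i gamma_i * x_i + zeta,
# and nothing in the model forces the x_i to covary.
x_formative = rng.normal(size=(n, 4))              # mutually independent items
gammas = np.array([0.4, 0.3, 0.2, 0.1])
eta_formative = x_formative @ gammas + rng.normal(scale=0.3, size=n)

def mean_offdiag_r(x):
    r = np.corrcoef(x, rowvar=False)
    return r[np.triu_indices_from(r, k=1)].mean()

print("mean inter-item r, reflective:", round(mean_offdiag_r(x_reflective), 2))
print("mean inter-item r, formative: ", round(mean_offdiag_r(x_formative), 2))

The reflective items come out strongly intercorrelated while the formative items hover near zero, which illustrates why internal consistency is one of the dimensions on which the article compares the two models.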
The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model - Educational and Psychological Measurement, 2008
"... This article reports a study that investigated the effects of Q-matrix misspecifications on parameter estimates and misclassification rates for the deterministic-input, noisy ‘‘and’’ gate (DINA) model, which is a restricted latent class model for multiple classifications of respondents that can be u ..."
Abstract - Cited by 15 (0 self)
This article reports a study that investigated the effects of Q-matrix misspecifications on parameter estimates and misclassification rates for the deterministic-input, noisy “and” gate (DINA) model, which is a restricted latent class model for multiple classifications of respondents that can be useful for cognitively motivated diagnostic assessment. In this study, a Q-matrix for an assessment mapping all 15 possible attribute patterns based on four independent attributes was misspecified by changing one “0” or “1” for each item. This was done in a way that ensured that certain attribute combinations were completely deleted from the Q-matrix, and certain incorrect dependency relationships between attributes were represented. Results showed clear effects that included an item-specific overestimation of slipping parameters when attributes were deleted from the Q-matrix, an item-specific overestimation of guessing parameters when attributes were added to the Q-matrix, and high misclassification rates for attribute classes that contained attribute combinations that were deleted from the Q-matrix.
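For reference, the DINA item response function itself is compact. The sketch below is a schematic illustration of how a Q-matrix enters the model, with made-up items, attribute patterns, and slip/guess values rather than anything from the reported study.

# Schematic DINA response probabilities (illustrative values only).
# eta[i, j] = 1 exactly when respondent i possesses every attribute the
# Q-matrix requires for item j; the item is then answered correctly with
# probability 1 - slip, and otherwise with probability guess.
import numpy as np

Q = np.array([[1, 0, 0, 0],        # item 1 requires attribute 1 only
              [1, 1, 0, 0],        # item 2 requires attributes 1 and 2
              [0, 0, 1, 1]])       # item 3 requires attributes 3 and 4
slip = np.array([0.10, 0.15, 0.20])
guess = np.array([0.20, 0.10, 0.25])

alpha = np.array([[1, 1, 0, 0],    # respondent A masters attributes 1 and 2
                  [1, 0, 1, 1]])   # respondent B masters attributes 1, 3, 4

eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2).astype(int)
p_correct = (1 - slip) ** eta * guess ** (1 - eta)
print(p_correct)   # A: [0.90, 0.85, 0.25]   B: [0.90, 0.10, 0.80]

Read against this function, the reported pattern is intuitive: deleting a required “1” from an item's row lets some non-masters reach eta = 1, so their wrong answers must be absorbed by a larger slip estimate, while adding a spurious “1” pushes some true masters to eta = 0 and inflates the guess estimate.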
Psychometric Perspectives on Diagnostic Systems - Journal of Clinical Psychology, 2008
"... ..."
New perspectives on the system usage construct - Doctoral dissertation, 2005
"... In presenting this dissertation as a partial fulfillment of the requirements for an advanced degree ..."
Abstract - Cited by 11 (3 self)
In presenting this dissertation as a partial fulfillment of the requirements for an advanced degree
Validity evidence based on response processes
"... The study of the response processes to test items and questionnaires was fi rst considered explicitly as a source of validity evidence in the Standards for Educational and Psychological ..."
Abstract - Cited by 9 (1 self)
The study of the response processes to test items and questionnaires was first considered explicitly as a source of validity evidence in the Standards for Educational and Psychological
Evaluation of Convergent and Discriminant Validity with Multitrait-Multimethod
"... on validity estimation, as well as to L. K. Muthén and B. O. Muthén for instructive comments on parameter constraint evaluation. I am grateful to J. Miles and two anonymous Referees for insightful and suggestive criticism on an earlier version of the paper, which contributed substantially to its imp ..."
Abstract - Cited by 7 (0 self)
on validity estimation, as well as to L. K. Muthén and B. O. Muthén for instructive comments on parameter constraint evaluation. I am grateful to J. Miles and two anonymous Referees for insightful and suggestive criticism on an earlier version of the paper, which contributed substantially to its improvement. Correspondence on this manuscript may be addressed to Tenko Raykov, Michigan State University, Measurement