Results 1–10 of 21
Application of covariance structure modeling in psychology: cause for concern?
Psychological Bulletin, 1990
Abstract

Cited by 69 (0 self)
Methods of covariance structure modeling are frequently applied in psychological research. These methods merge the logic of confirmatory factor analysis, multiple regression, and path analysis within a single data-analytic framework. Among the many applications are estimation of disattenuated correlation and regression coefficients, evaluation of multitrait-multimethod matrices, and assessment of hypothesized causal structures. Shortcomings of these methods are commonly acknowledged in the mathematical literature and in textbooks. Nevertheless, serious flaws remain in many published applications. For example, it is rarely noted that the fit of a favored model is identical for a potentially large number of equivalent models. A review of the personality and social psychology literature illustrates the nature of this and other problems in reported applications of covariance structure models.

A principal goal of experimentation in psychology is to provide a basis for inferring causation. Among the tools used to achieve this goal are the active manipulation and control of independent variables, random assignment to experimental treatments, and appropriate methods of data analysis. Causal infer…
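The point about equivalent models can be made concrete with a small numeric sketch (all parameter values here are hypothetical, not from the paper): two path models with reversed causal direction, X → Y and Y → X, can be parameterized to imply exactly the same covariance matrix, so any fit statistic computed from that matrix cannot distinguish them.

```python
import numpy as np

# Two path models with reversed causal direction, parameterized to imply
# the same covariance matrix. All parameter values are hypothetical.
vx, b, ve = 1.0, 0.5, 0.75          # Model A: Var(X), path X -> Y, Var(e)

# Model A: Y = b*X + e  =>  implied covariance of (X, Y)
cov_A = np.array([[vx,     b * vx],
                  [b * vx, b**2 * vx + ve]])

# Model B: X = b2*Y + e2, with parameters chosen to match Model A's moments
vy  = b**2 * vx + ve                # Var(Y) implied by Model A
b2  = b * vx / vy                   # path Y -> X
ve2 = vx - b2**2 * vy               # residual variance of X
cov_B = np.array([[b2**2 * vy + ve2, b2 * vy],
                  [b2 * vy,          vy]])

# The two models are covariance-equivalent: any fit index computed from
# the implied covariance matrix is identical for both.
```

Since both matrices are identical, a researcher who reports only the fit of the favored X → Y model has said nothing that distinguishes it from the reversed model.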
Bayesian Estimation and Testing of Structural Equation Models
Psychometrika, 1999
Abstract

Cited by 43 (10 self)
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, e.g., output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model.
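A minimal sketch of the idea on a toy Bayesian linear regression rather than a full SEM (the model, flat prior, and data below are illustrative assumptions, not the paper's): the Gibbs sampler alternates draws of the coefficients given the error variance and of the error variance given the coefficients, and point and interval estimates are then read directly off the samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y = 1.0 + 2.0*x + noise (not from the paper)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y        # OLS / maximum-likelihood estimate

def gibbs(n_iter=2000, burn=500):
    """Gibbs sampler for (beta, sigma^2) under a flat (uninformative) prior."""
    beta = beta_hat.copy()
    draws = []
    for _ in range(n_iter):
        resid = y - X @ beta
        # sigma^2 | beta, y  ~  Inverse-Gamma(n/2, RSS/2)
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
        # beta | sigma^2, y  ~  Normal(beta_hat, sigma^2 (X'X)^{-1})
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        draws.append(np.concatenate([beta, [sigma2]]))
    return np.array(draws[burn:])

samples = gibbs()
post_mean = samples[:, :2].mean(axis=0)               # point estimates
slope_ci = np.percentile(samples[:, 1], [2.5, 97.5])  # interval estimate
```

With the flat prior the posterior mean tracks the maximum-likelihood solution, matching the asymptotic equivalence the abstract describes; an informative prior would simply replace the conditional distributions above.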
Can We Ever Escape From Data Overload? A Cognitive Systems Diagnosis
Cognition, Technology and Work, 2002
Abstract

Cited by 38 (5 self)
…gence in circumscribed, cooperative roles to aid human observers in organizing, selecting, managing, and interpreting data.

CHARACTERIZATIONS OF DATA OVERLOAD

Data overload is the problem of our age: generic yet surprisingly resistant to different avenues of attack. In order to make progress on innovating solutions to data overload in a particular setting, we need to identify the root issues that make data overload a challenging problem everywhere and to understand why proposed solutions have broken down or produced limited success in operational settings. There are three basic ways that the data overload problem has been characterized (Woods, Patterson, and Roth, 1998):

1. As a clutter problem where there is too much data: therefore, we can solve data overload by reducing the number of data units that are displayed. This has not proven to be a fruitful direction in solving data overload because it misrepresents the design problem and is based on erroneous assumptions a…
What Is a Theory of Mental Representation?
Mind, 1992
Cited by 14 (3 self)
Latent variables, causal models and overidentifying constraints
Journal of Econometrics, 1988
Abstract

Cited by 12 (1 self)
When is a statistical dependency between two variables best explained by the supposition that one of these variables causes the other, as opposed to the supposition that there is a (possibly unmeasured) common cause acting on both variables? In this paper, we describe an approach towards model specification developed more fully in our book Discovering Causal Structure, and illustrate its application to the aforementioned question. Briefly, the approach is to determine constraints satisfied by the variance-covariance matrix of a sample, and then to conduct a quasi-automated search for the causal specifications that will best explain those constraints.
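One concrete kind of covariance constraint used in such searches is the vanishing tetrad difference: if four measured variables share a single (possibly unmeasured) common cause, certain products of their covariances must be equal. A small simulation sketch (the model and all numbers are hypothetical, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# One latent common cause L with four noisy indicators: x_i = b_i*L + e_i.
# Model and coefficients are hypothetical, for illustration only.
n = 200_000
L = rng.normal(size=n)
b = np.array([1.0, 0.8, 1.2, 0.6])
X = b * L[:, None] + rng.normal(scale=0.5, size=(n, 4))

C = np.cov(X, rowvar=False)   # sample variance-covariance matrix

# Tetrad differences that a single common cause forces to zero:
t1 = C[0, 1] * C[2, 3] - C[0, 2] * C[1, 3]
t2 = C[0, 1] * C[2, 3] - C[0, 3] * C[1, 2]
```

A search over causal specifications can then favor structures that entail exactly the (near-)zero constraints observed in the sample, which is the kind of quasi-automated search the abstract describes.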
Inductive Process Modeling
Abstract

Cited by 11 (3 self)
In this paper, we pose a novel research problem for machine learning that involves constructing a process model from continuous data. We claim that casting learned knowledge in terms of processes with associated equations is desirable for scientific and engineering domains, where such notations are commonly used. We also argue that existing induction methods are not well suited to this task, although some techniques hold partial solutions. In response, we describe an approach to learning process models from time-series data and illustrate its behavior in three domains. In closing, we describe open issues in process model induction and encourage other researchers to tackle this important problem.
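As a loose illustration of what fitting a process model to time-series data can look like (the logistic-growth process and every number below are hypothetical, not the authors' system), one can search over a process parameter for the value whose simulated trajectory best matches the observations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical process: logistic growth dP/dt = r*P*(1 - P/K), simulated
# by forward-Euler integration.
def simulate(r, K=100.0, p0=5.0, dt=0.1, steps=200):
    p = p0
    out = []
    for _ in range(steps):
        p += dt * r * p * (1.0 - p / K)
        out.append(p)
    return np.array(out)

# Noisy "observed" time series generated with a known rate parameter.
true_r = 0.4
data = simulate(true_r) + rng.normal(scale=0.5, size=200)

# Recover the parameter by grid search over candidate process models,
# scoring each by squared error against the observed trajectory.
candidates = np.linspace(0.1, 1.0, 91)
errors = [np.sum((simulate(r) - data) ** 2) for r in candidates]
best_r = candidates[int(np.argmin(errors))]
```

Real process-model induction also searches over the structure of the processes themselves, not just a parameter, but the fit-against-trajectory loop is the same in spirit.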
Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation in American schools of education
In R. Boruch & F. Mosteller (Eds.), Education, 2001
Abstract

Cited by 9 (0 self)
Paper presented at the Harvard Faculty Seminar on Experiments in Education.
Bayesian Informal Logic and Fallacy
2002
Abstract

Cited by 8 (0 self)
Bayesian reasoning has been applied formally to statistical inference, machine learning and analyzing scientific method. Here I apply it informally to more common forms of inference, namely natural language arguments. I analyze a variety of traditional fallacies, deductive, inductive and causal, and find more merit in them than is generally acknowledged. Bayesian principles provide a framework for understanding ordinary arguments which is well worth developing.
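The Bayesian reading of an informal argument can be sketched with a toy calculation (all probabilities below are hypothetical): an appeal to authority carries evidential weight exactly when the authority is more likely to assert a claim if it is true than if it is false.

```python
# Hypothetical probabilities for an "appeal to authority": H is the claim,
# A is the event that an expert asserts H.
prior = 0.5                  # P(H)
p_assert_true = 0.9          # P(A | H): expert asserts H when it is true
p_assert_false = 0.2         # P(A | not H): expert asserts H when it is false

# Bayes' theorem: P(H | A)
posterior = (p_assert_true * prior) / (
    p_assert_true * prior + p_assert_false * (1.0 - prior))
# posterior = 0.45 / 0.55, roughly 0.82: the assertion is genuine evidence
```

On this reading the "fallacy" has merit whenever the likelihood ratio P(A | H) / P(A | not H) exceeds 1, which matches the abstract's claim that such arguments deserve more credit than they are usually given.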
A Study of Causal Discovery With Weak Links and Small Samples
1997
Abstract

Cited by 5 (1 self)
Weak causal relationships and small sample size pose two significant difficulties for the automatic discovery of causal models from observational data. This paper examines the influence of weak causal links and varying sample sizes on the discovery of causal models. The experimental results illustrate the importance of larger sample sizes for reliably discovering causal models, as well as the relevance of the strength of causal links and the complexity of the original causal model. We present indicative evidence of the superior robustness of MML (Minimum Message Length) methods over standard significance tests in the recovery of causal links. The comparative results show that the MMLCI (the MML Causal Inducer) causal discovery system finds better models than TETRAD II given small samples from linear causal models. The experimental results also reveal that MMLCI finds weak links with smaller sample sizes than can TETRAD II.
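The interaction of link strength and sample size can be illustrated with a simple simulation (this uses a standard Fisher z significance test on the correlation, not the MML-CI or TETRAD II machinery, and all numbers are hypothetical): a weak linear link that a test rarely detects at n = 20 becomes reliably detectable at n = 200.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weak link: y = 0.2*x + noise. Estimate how often a Fisher
# z-test on the sample correlation flags the link at each sample size.
def detection_rate(n, coef=0.2, trials=500, z_crit=1.96):
    hits = 0
    for _ in range(trials):
        x = rng.normal(size=n)
        y = coef * x + rng.normal(size=n)
        r = np.corrcoef(x, y)[0, 1]
        z = np.arctanh(r) * np.sqrt(n - 3)  # Fisher z for H0: rho = 0
        if abs(z) > z_crit:
            hits += 1
    return hits / trials

small = detection_rate(n=20)    # weak link, small sample: low power
large = detection_rate(n=200)   # same link, larger sample: high power
```

The same qualitative pattern is what the paper studies, with the further question of whether an MML score degrades more gracefully than the significance test as the sample shrinks.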