Results 1 – 5 of 5
Learning the structure of linear latent variable models
 Journal of Machine Learning Research
, 2006
"... We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are dseparated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the ..."
Abstract

Cited by 41 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we
The TETRAD Project: Constraint Based Aids to Causal Model Specification
 MULTIVARIATE BEHAVIORAL RESEARCH
"... ..."
Generalized measurement models
, 2004
"... Given a set of random variables, it is often the case that their associations can be explained by hidden common causes. We present a set of welldefined assumptions and a provably correct algorithm that allow us to identify some of such hidden common causes. The assumptions are fairly general and so ..."
Abstract

Cited by 7 (4 self)
Given a set of random variables, it is often the case that their associations can be explained by hidden common causes. We present a set of well-defined assumptions and a provably correct algorithm that allow us to identify some of such hidden common causes. The assumptions are fairly general and sometimes weaker than those used in practice by, for instance, econometricians, psychometricians, social scientists and in many other fields where latent variable models are important and tools such as factor analysis are applicable. The goal is automated knowledge discovery: identifying latent variables that can be used across different applications and causal models and shed new insight on a data generating process. Our approach is evaluated through simulations and three real-world cases.
Generalization of the Tetrad Representation Theorem
 Preliminary Papers of the Fifth International Workshop on Artificial Intelligence and
, 1993
"... The tetrad representation theorem, due to Spirtes, Glymour, and Scheines (1993), gives a graphical condition necessary and sufficient for the vanishing of tetrad differences in a linear correlation structure. This note simplifies their proof and generalizes the theorem. This generalization can stren ..."
Abstract
The tetrad representation theorem, due to Spirtes, Glymour, and Scheines (1993), gives a graphical condition necessary and sufficient for the vanishing of tetrad differences in a linear correlation structure. This note simplifies their proof and generalizes the theorem. This generalization can strengthen procedures used to search for structural equation models for large data sets.

1 Introduction

In a linear "structural equation" model, it is assumed that there is a set of variables V, and for each variable X_i in V, there is a unique associated error term E_i with nonzero variance. For each variable X_i in V a linear equation relates X_i to a subset of V (excluding X_i) and its error term E_i; the variables that do not appear in the equation for X_i are assumed to have coefficients fixed at zero. We assume that the error terms are jointly independent (although in what follows, this assumption can easily be relaxed). Associated with each such set of equations is a direct...
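The vanishing tetrad differences discussed in these abstracts can be illustrated with a small numeric sketch. In a one-factor linear model where each recorded variable X_i = b_i·L + E_i depends on a single common cause L (with independent errors), the implied covariance of any two distinct recorded variables is b_i·b_j·var(L), so every tetrad difference sigma_ij·sigma_kl − sigma_il·sigma_jk is zero. The loadings below are hypothetical values chosen only for illustration, not taken from any of the papers listed.

```python
# Hypothetical factor loadings for four recorded variables sharing
# one latent common cause L (with var(L) = 1); illustration only.
b = [0.8, 1.2, 0.5, 1.5]

def cov(i, j):
    """Implied covariance of X_i and X_j (i != j) in the one-factor model."""
    return b[i] * b[j]

def tetrad(i, j, k, l):
    """Tetrad difference sigma_ij * sigma_kl - sigma_il * sigma_jk."""
    return cov(i, j) * cov(k, l) - cov(i, l) * cov(j, k)

# All three distinct tetrad differences among four indicators of a
# single common cause vanish (up to floating-point error).
for idx in [(0, 1, 2, 3), (0, 2, 1, 3), (0, 3, 1, 2)]:
    assert abs(tetrad(*idx)) < 1e-12
```

The converse direction, characterizing exactly which graphs imply which vanishing tetrads, is what the representation theorem in these two entries establishes; search procedures such as those in the first entry exploit these constraints to cluster indicators by a shared latent cause.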
Generalization of the Tetrad Representation Theorem
"... Abstract. The tetrad representation theorem, due to Spirtes, Glymour, and Scheines (1993), gives a graphical condition necessary and su cient for the vanishing of tetrad di erences in a linear correlation structure. This note simpli es their proof and generalizes the theorem. This generalization can ..."
Abstract
The tetrad representation theorem, due to Spirtes, Glymour, and Scheines (1993), gives a graphical condition necessary and sufficient for the vanishing of tetrad differences in a linear correlation structure. This note simplifies their proof and generalizes the theorem. This generalization can strengthen procedures used to search for structural equation models for large data sets.