Results 1 – 6 of 6
Learning the structure of linear latent variable models
 Journal of Machine Learning Research
, 2006
Abstract

Cited by 41 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we …
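The clustering criterion behind this abstract rests on vanishing tetrad differences: if four recorded variables share a single unrecorded linear common cause, each off-diagonal covariance factors into a product of loadings, so all three tetrad differences are zero in the population. A minimal sketch (not the paper's code; the loadings are made up):

```python
import numpy as np

# One-factor linear model: X_i = lam_i * L + E_i with Var(L) = 1 and
# independent errors, so cov(X_i, X_j) = lam_i * lam_j for i != j.
def tetrad_differences(lam):
    """Population tetrad differences among four indicators of one latent cause."""
    s = np.outer(lam, lam)  # gives the off-diagonal covariance entries
    t1 = s[0, 1] * s[2, 3] - s[0, 2] * s[1, 3]
    t2 = s[0, 1] * s[2, 3] - s[0, 3] * s[1, 2]
    t3 = s[0, 2] * s[1, 3] - s[0, 3] * s[1, 2]
    return t1, t2, t3

# Arbitrary loadings: all three tetrad differences are exactly zero.
print(tetrad_differences(np.array([0.9, 0.7, -0.5, 1.3])))
```

In sample data the differences are only approximately zero, which is why the search procedures pair the constraint with a statistical test.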
Spectral Methods for Learning Multivariate Latent Tree Structure
Abstract

Cited by 8 (1 self)
This work considers the problem of learning the structure of multivariate linear tree models, which include a variety of directed tree graphical models with continuous, discrete, and mixed latent variables such as linear-Gaussian models, hidden Markov models, Gaussian mixture models, and Markov evolutionary trees. The setting is one where we only have samples from certain observed variables in the tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables). We propose the Spectral Recursive Grouping algorithm, an efficient and simple bottom-up procedure for recovering the tree structure from independent samples of the observed variables. Our finite sample size bounds for exact recovery of the tree structure reveal natural dependencies on statistical and structural properties of the underlying joint distribution. Furthermore, our sample complexity guarantees have no explicit dependence on the dimensionality of the observed variables, making the algorithm applicable to many high-dimensional settings. At the heart of our algorithm is a spectral quartet test for determining the relative topology of a quartet of variables from second-order statistics.
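The role of a quartet test can be illustrated in the scalar special case: for scalar linear-Gaussian tree models, d(i, j) = -log|corr(X_i, X_j)| is additive along tree paths, so the correct pairing of four leaves minimizes the summed within-pair distances. The paper's actual test is a spectral generalization of this idea to multivariate variables via second-order (cross-covariance) statistics; the sketch below is only the scalar four-point version, with made-up edge correlations:

```python
import math

def quartet_topology(rho):
    """Pick the quartet pairing minimizing the summed within-pair distances.

    rho: dict mapping each of the six pairs among 1..4 to |correlation|.
    """
    d = {p: -math.log(rho[p]) for p in rho}  # additive tree distances
    pairings = {
        "12|34": d[(1, 2)] + d[(3, 4)],
        "13|24": d[(1, 3)] + d[(2, 4)],
        "14|23": d[(1, 4)] + d[(2, 3)],
    }
    return min(pairings, key=pairings.get)

# Tree: X1, X2 hang off hidden H1; X3, X4 off hidden H2; corr(H1, H2) = 0.6.
# Correlations multiply along paths, e.g. corr(X1, X3) = a1 * 0.6 * a3.
a = {1: 0.8, 2: 0.7, 3: 0.9, 4: 0.6}  # made-up leaf-to-hidden correlations
b = 0.6
rho = {(1, 2): a[1] * a[2], (3, 4): a[3] * a[4],
       (1, 3): a[1] * b * a[3], (1, 4): a[1] * b * a[4],
       (2, 3): a[2] * b * a[3], (2, 4): a[2] * b * a[4]}
print(quartet_topology(rho))  # -> 12|34, grouping {1,2} against {3,4}
```

Cross-pairings pick up the factor b twice, inflating their summed distance by -2 log|b| > 0, which is why the true split always wins in the population.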
Generalized measurement models
, 2004
Abstract

Cited by 7 (4 self)
Given a set of random variables, it is often the case that their associations can be explained by hidden common causes. We present a set of well-defined assumptions and a provably correct algorithm that allow us to identify some such hidden common causes. The assumptions are fairly general and sometimes weaker than those used in practice by, for instance, econometricians, psychometricians, social scientists and in many other fields where latent variable models are important and tools such as factor analysis are applicable. The goal is automated knowledge discovery: identifying latent variables that can be used across different applications and causal models and offer new insights into a data-generating process. Our approach is evaluated through simulations and three real-world case studies.
Automatic discovery of latent variable models
 Machine Learning Dept., CMU
, 2005
Abstract

Cited by 5 (4 self)
representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government or any other entity.
Generalization of the Tetrad Representation Theorem
 Preliminary Papers of the Fifth International Workshop on Artificial Intelligence and
, 1993
Abstract
The tetrad representation theorem, due to Spirtes, Glymour, and Scheines (1993), gives a graphical condition necessary and sufficient for the vanishing of tetrad differences in a linear correlation structure. This note simplifies their proof and generalizes the theorem. This generalization can strengthen procedures used to search for structural equation models for large data sets.

1 Introduction

In a linear "structural equation" model, it is assumed that there is a set of variables V, and for each variable X_i in V there is a unique associated error term E_i with nonzero variance. For each variable X_i in V, a linear equation relates X_i to a subset of V (excluding X_i) and its error term E_i; the variables that do not appear in the equation for X_i are assumed to have coefficients fixed at zero. We assume that the error terms are jointly independent (although in what follows this assumption can easily be relaxed). Associated with each such set of equations is a direct…
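The structural-equation setup described above can be written compactly: stacking the equations as X = BX + E with jointly independent errors E gives X = (I - B)^{-1} E, so the implied covariance is Sigma = (I - B)^{-1} Omega (I - B)^{-T} with Omega = Cov(E). A minimal sketch (not from the note; the loadings are illustrative) showing that a single latent common cause makes a tetrad difference among its indicators vanish, as the graphical condition predicts:

```python
import numpy as np

def implied_covariance(B, omega_diag):
    """Covariance implied by the linear SEM X = B X + E, Cov(E) diagonal."""
    n = B.shape[0]
    A = np.linalg.inv(np.eye(n) - B)          # X = (I - B)^{-1} E
    return A @ np.diag(omega_diag) @ A.T

# Variable 0 is an unrecorded cause L; variables 1..4 are its indicators:
# X_i = lam_i * L + E_i, so only column 0 of B is nonzero.
lam = [0.9, 0.7, -0.5, 1.3]                   # made-up loadings
B = np.zeros((5, 5))
B[1:, 0] = lam
Sigma = implied_covariance(B, np.ones(5))

# All paths between {X1, X2} and {X3, X4} pass through L, so the
# tetrad difference among the four indicators vanishes (up to rounding):
print(Sigma[1, 2] * Sigma[3, 4] - Sigma[1, 3] * Sigma[2, 4])
```

Here B is strictly lower triangular, matching the acyclicity assumed by the DAG representation; cyclic models would need (I - B) to be invertible but not triangular.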