Results 1 - 4 of 4
Learning the structure of linear latent variable models
 Journal of Machine Learning Research
, 2006
Abstract

Cited by 41 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we …
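In the linear setting described in this abstract, four recorded variables d-separated by a single common unrecorded cause satisfy vanishing tetrad constraints on their covariances: products of covariances across disjoint pairs agree. A minimal NumPy sketch (the loadings and noise scale are hypothetical, chosen only for illustration) shows the constraint holding in simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# One unrecorded common cause L with four recorded indicators X[:, i] = lam[i] * L + noise.
L = rng.normal(size=n)
lam = np.array([1.0, 0.8, 1.2, 0.6])  # hypothetical loadings
X = np.outer(L, lam) + rng.normal(scale=0.5, size=(n, 4))

C = np.cov(X, rowvar=False)

# With a single latent cause, cov(x_i, x_j) = lam_i * lam_j * var(L),
# so both tetrad differences below vanish in the population.
t1 = C[0, 1] * C[2, 3] - C[0, 2] * C[1, 3]
t2 = C[0, 1] * C[2, 3] - C[0, 3] * C[1, 2]
print(t1, t2)  # both close to 0 at this sample size
```

At this sample size the tetrad differences are within sampling error of zero; search procedures of this kind test such constraints to cluster indicators under a common latent.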
Bayesian learning of measurement and structural models
 23rd International Conference on Machine Learning
, 2006
Abstract

Cited by 5 (3 self)
We present a Bayesian search algorithm for learning the structure of latent variable models of continuous variables. We stress the importance of applying search operators designed especially for the parametric family used in our models. This is performed by searching for subsets of the observed variables whose covariance matrix can be represented as the sum of a matrix of low rank and a diagonal matrix of residuals. The resulting search procedure is relatively efficient, since the main search operator has a branching factor that grows linearly with the number of variables. The resulting models are often simpler and give a better fit than models based on generalizations of factor analysis or those derived from standard hill-climbing methods.
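The "low rank plus diagonal" structure mentioned in this abstract can be checked numerically: after subtracting the diagonal residual matrix from the covariance, the remainder should have (near) low rank. A small NumPy sketch, using a hypothetical one-factor model with known residual variance, illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# One latent factor with four observed indicators (hypothetical loadings).
f = rng.normal(size=n)
load = np.array([0.9, 0.7, 1.1, 0.5])
X = np.outer(f, load) + rng.normal(scale=0.4, size=(n, 4))

C = np.cov(X, rowvar=False)

# Subtract the diagonal of residual variances (0.4 ** 2 = 0.16);
# the remainder should be close to rank one.
low_rank = C - np.diag(np.full(4, 0.16))
eigvals = np.sort(np.linalg.eigvalsh(low_rank))[::-1]
print(eigvals)  # one large eigenvalue, the rest near zero
```

In practice the residual variances are not known and must be estimated (e.g. by factor analysis), but the rank test on the remainder is the same.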
Automatic discovery of latent variable models
 Machine Learning Dpt., CMU
, 2005
Abstract

Cited by 5 (4 self)
(No abstract available; the extracted text is a standard technical-report disclaimer.)
The New Washdown Algorithm
Abstract
There are known methods for discovering pure linear measurement models given as input just the joint distribution of the observed variables and assumptions about the number of pure indicators per latent variable and normality. This paper extends this approach by describing a variation that is more computationally efficient, as well as providing a proof of its correctness.