Results 1 - 6 of 6
Learning the structure of linear latent variable models
 Journal of Machine Learning Research, 2006
Abstract

Cited by 42 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for nonlinear systems.
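The abstract's central notion can be illustrated with a small simulation (our own sketch, not the paper's search procedure): two recorded variables that are d-separated by a single unrecorded common cause are marginally correlated, but their dependence vanishes once the latent cause is conditioned on. All variable names and coefficients below are hypothetical.

```python
import numpy as np

# Two recorded indicators X1, X2 of one unrecorded linear cause L.
rng = np.random.default_rng(0)
n = 100_000
L = rng.normal(size=n)               # unrecorded common cause
X1 = 0.8 * L + rng.normal(size=n)    # recorded indicator 1
X2 = 0.7 * L + rng.normal(size=n)    # recorded indicator 2

# Marginally, X1 and X2 are correlated through L.
marginal_r = np.corrcoef(X1, X2)[0, 1]

# Partial correlation given L: correlate the residuals after
# regressing each indicator on the latent cause.
r1 = X1 - np.polyval(np.polyfit(L, X1, 1), L)
r2 = X2 - np.polyval(np.polyfit(L, X2, 1), L)
partial_r = np.corrcoef(r1, r2)[0, 1]

print(round(marginal_r, 2))   # clearly nonzero
print(round(partial_r, 2))    # near zero: d-separation given L
```

The search procedures described above exploit exactly this pattern: subsets of observed variables whose mutual dependence is fully explained by one latent cause.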
Bayesian learning of measurement and structural models
 23rd International Conference on Machine Learning, 2006
Abstract

Cited by 5 (3 self)
We present a Bayesian search algorithm for learning the structure of latent variable models of continuous variables. We stress the importance of applying search operators designed especially for the parametric family used in our models. This is performed by searching for subsets of the observed variables whose covariance matrix can be represented as a sum of a matrix of low rank and a diagonal matrix of residuals. The resulting search procedure is relatively efficient, since the main search operator has a branch factor that grows linearly with the number of variables. The resulting models are often simpler and give a better fit than models based on generalizations of factor analysis or those derived from standard hill-climbing methods.
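The covariance structure this abstract describes can be sketched directly (a minimal illustration with made-up loadings, not the paper's Bayesian search): for a one-factor measurement model, the implied covariance is a rank-one matrix plus a diagonal of residual variances.

```python
import numpy as np

# Hypothetical one-factor model: Sigma = lam @ lam.T + Psi,
# i.e. a low-rank part plus a diagonal matrix of residuals.
lam = np.array([[0.9], [0.8], [0.7], [0.6]])   # factor loadings
psi = np.diag([0.19, 0.36, 0.51, 0.64])        # residual variances
sigma = lam @ lam.T + psi                      # implied covariance

# Removing the diagonal residuals leaves the low-rank part that
# couples the observed variables; here its rank is one.
low_rank_part = sigma - psi
print(np.linalg.matrix_rank(low_rank_part))  # 1
```

A search operator can thus test whether a candidate subset of observed variables admits such a low-rank-plus-diagonal decomposition of its covariance matrix.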
Automatic discovery of latent variable models
 Machine Learning Dpt., CMU, 2005
Abstract

Cited by 5 (4 self)
representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government or any other entity.
Learning the Structure of Linear Latent Variable Models
 Journal X (2005) XXXX Submitted XX/XX; Published XX/XX
Abstract
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with factor analysis over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for nonlinear systems.
The New Washdown Algorithm
Abstract
There are known methods for discovering pure linear measurement models given as input just the joint distribution of the observed variables and assumptions about the number of pure indicators per latent and normality. This paper extends this approach by describing a variation that is more computationally efficient, as well as providing a proof of its correctness.
Use of SEM Programs to Precisely Measure Scale Reliability
Abstract
Summary. It is first pointed out that the most often used reliability coefficient α and the one-factor-model-based reliability ρ are seriously biased when unique factors are correlated. In that case, α is no longer a lower bound of the true reliability. Use of Bollen's formula (Bollen 1980) on reliability is highly recommended. A web-based program termed "STERA" is developed which makes stepwise reliability analysis very easy with the help of factor analysis and structural equation modeling.
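Coefficient α itself is computed from a simple variance ratio. The sketch below (our own illustration using the standard Cronbach formula, not the STERA program, with simulated data) shows the computation; all names and the data-generating setup are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Simulated scale: 4 items loading on one true score, with
# independent unique factors (the case where alpha behaves well).
rng = np.random.default_rng(1)
n, k = 500, 4
true_score = rng.normal(size=(n, 1))
items = true_score + 0.8 * rng.normal(size=(n, k))

alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

When the unique factors are instead correlated, as the summary notes, this ratio no longer bounds the true reliability from below, which is why the authors recommend Bollen's model-based formula instead.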