Results 1–10 of 11
Learning the structure of linear latent variable models
Journal of Machine Learning Research, 2006
Cited by 42 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for nonlinear systems.
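Procedures of this kind exploit vanishing tetrad constraints: when four observed variables load on a single latent common cause in a linear model, the products of covariances across any pairing of the four agree in the population. A minimal simulation sketch (the loadings, sample size, and tolerance below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor linear model: X_i = w_i * L + independent noise,
# so all four observed variables share the single latent cause L.
n = 50_000
L = rng.normal(size=n)
X = np.column_stack([w * L + rng.normal(size=n)
                     for w in (0.9, 0.8, 0.7, 0.6)])
C = np.cov(X, rowvar=False)

# Under a single common cause the tetrad differences vanish, e.g.
# cov(X1,X2)cov(X3,X4) - cov(X1,X3)cov(X2,X4) = 0 in the population.
t1 = C[0, 1] * C[2, 3] - C[0, 2] * C[1, 3]
t2 = C[0, 1] * C[2, 3] - C[0, 3] * C[1, 2]
print(abs(t1) < 0.05, abs(t2) < 0.05)
```

Both differences come out near zero up to sampling error; a real search procedure would replace the ad-hoc tolerance with a statistical test for vanishing tetrads.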
Generalized measurement models, 2004
Cited by 7 (4 self)
Discovery of latent structures: Experience with the CoIL challenge 2000 data set
Journal of Systems Science and Complexity, 2008
Cited by 5 (3 self)
The authors present a case study to demonstrate the possibility of discovering complex and interesting latent structures using hierarchical latent class (HLC) models. A similar effort was made earlier by Zhang (2002), but that study involved only small applications with 4 or 5 observed variables and no more than 2 latent variables due to the lack of efficient learning algorithms. Significant progress has been made since then on algorithmic research, and it is now possible to learn HLC models with dozens of observed variables. This allows us to demonstrate the benefits of HLC models more convincingly than before. The authors have successfully analyzed the CoIL Challenge 2000 data set using HLC models. The model obtained consists of 22 latent variables, and its structure is intuitively appealing. It is exciting to know that such a large and meaningful latent structure can be automatically inferred from data.
Keywords: Bayesian networks, case study, latent structure discovery, learning.
Automatic discovery of latent variable models
Machine Learning Dept., CMU, 2005
Cited by 5 (4 self)
New d-separation identification results for learning continuous latent variable models
Proceedings of the 22nd International Conference on Machine Learning, 2005
Cited by 2 (2 self)
Learning the structure of graphical models is an important task, but one of considerable difficulty when latent variables are involved. Because conditional independences involving hidden variables cannot be directly observed, one has to rely on alternative methods to identify the d-separations that define the graphical structure. This paper describes new distribution-free techniques for identifying d-separations in continuous latent variable models when nonlinear dependencies are allowed among hidden variables.
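The d-separations being identified are purely graphical facts, and for a known DAG they can be checked mechanically. A minimal sketch of the standard ancestral-moral-graph test (the parent-set encoding of the DAG is an assumption of this sketch, not a representation used in the paper):

```python
from itertools import combinations

def d_separated(dag, x, y, z):
    """Return True iff x and y are d-separated given the set z in a DAG.

    dag maps each node to the set of its parents; x and y must not be in z.
    Uses the ancestral moral graph criterion: restrict to ancestors of
    {x, y} union z, moralize, delete z, and test connectivity.
    """
    # 1. Ancestral subgraph of {x, y} union z (closed under parents).
    relevant, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(dag.get(n, ()))
    # 2. Moralize: marry co-parents, then drop edge directions.
    undirected = {n: set() for n in relevant}
    for n in relevant:
        parents = list(dag.get(n, ()))
        for p in parents:
            undirected[n].add(p)
            undirected[p].add(n)
        for p, q in combinations(parents, 2):
            undirected[p].add(q)
            undirected[q].add(p)
    # 3. Remove the conditioning set and check whether x still reaches y.
    blocked, seen, stack = set(z), set(), [x]
    while stack:
        n = stack.pop()
        if n == y:
            return False  # an active path survives
        if n in seen or n in blocked:
            continue
        seen.add(n)
        stack.extend(undirected[n] - seen)
    return True
```

For a common-cause structure L -> A, L -> B, conditioning on L separates A from B while the unconditional query does not; a collider A -> C <- B behaves the opposite way.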
Quartet-Based Learning of Hierarchical Latent Class Models: Discovery of Shallow Latent Variables
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are hidden. The currently most efficient algorithm for learning HLC models can deal with only a few dozen observed variables. While this is sufficient for some applications, more efficient algorithms are needed for domains with, e.g., hundreds of observed variables. With this demand in mind, we explore quartet-based methods. The basic idea comes from phylogenetic tree reconstruction: one first learns submodels for quartets — groups of four observed variables — and then derives an overall model from those quartet submodels. As the first step in the new direction, this paper assumes that there is a way to find the "true" submodel for any quartet and investigates how to identify shallow latent variables efficiently by using the minimum number of quartet submodels. By shallow latent variables, we mean latent variables that are connected to at least one observed variable.
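Why minimizing the number of quartet submodels matters is visible from the combinatorics alone; the variable counts below are arbitrary examples, not figures from the paper:

```python
from math import comb

# A quartet is a 4-element subset of the n observed variables, so
# exhaustively learning every quartet submodel costs C(n, 4) = O(n^4) runs.
quartet_counts = {n: comb(n, 4) for n in (12, 50, 100)}
print(quartet_counts)
```

Even at a hundred observed variables there are nearly four million quartets, which is why the paper asks how few submodels suffice.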
Learning the Structure of Linear Latent Variable Models
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with factor analysis over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for nonlinear systems.
Quartet-Based Learning of Hierarchical Latent Class Models
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are hidden. The currently most efficient algorithm for learning HLC models can deal with only a few dozen observed variables.
Quartet-Based Learning of Shallow Latent Variables
Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are hidden. We explore the following two-stage approach for learning HLC models: one first identifies the shallow latent variables – latent variables adjacent to observed variables – and then determines the structure among the shallow and possibly some other "deep" latent variables. This paper is concerned with the first stage. In earlier work, we have shown how shallow latent variables can be correctly identified from quartet submodels if one could learn them without errors. In reality, one does make errors when learning quartet submodels. In this paper, we study the probability of such errors and propose a method that can reliably identify shallow latent variables despite these errors.