Results 1 - 5 of 5
Learning the structure of linear latent variable models
Journal of Machine Learning Research, 2006
Abstract

Cited by 41 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we ...
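As an illustrative sketch (not the paper's own code): in the linear case described in this abstract, a single unrecorded common cause leaves a testable signature on the observed covariances. Indicators of one latent are marginally correlated, and any four of them satisfy vanishing tetrad constraints such as cov12*cov34 = cov13*cov24. The variable names, coefficients, and noise scale below are ours, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# One unrecorded cause L with four linear indicators (coefficients are arbitrary).
L = rng.normal(size=n)
b = np.array([1.0, 0.8, 1.2, 0.9])
X = np.outer(L, b) + rng.normal(scale=0.5, size=(n, 4))

C = np.cov(X, rowvar=False)

# Indicators of a common cause are marginally correlated ...
r12 = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

# ... and satisfy a vanishing tetrad: cov12*cov34 - cov13*cov24 = 0 in the population.
tetrad = C[0, 1] * C[2, 3] - C[0, 2] * C[1, 3]

print(round(r12, 2), round(tetrad, 3))
```

In this sketch `r12` comes out large while `tetrad` is close to zero up to sampling noise, which is the kind of constraint pattern that makes clustering indicators under a single latent identifiable from data.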
Automatic discovery of latent variable models
Machine Learning Dept., CMU, 2005
Abstract

Cited by 5 (4 self)
representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government or any other entity.
New d-separation identification results for learning continuous latent variable models
Proceedings of the 22nd International Conference on Machine Learning, 2005
Abstract

Cited by 2 (2 self)
Learning the structure of graphical models is an important task, but one of considerable difficulty when latent variables are involved. Because conditional independences involving hidden variables cannot be directly observed, one has to rely on alternative methods to identify the d-separations that define the graphical structure. This paper describes new distribution-free techniques for identifying d-separations in continuous latent variable models when nonlinear dependencies are allowed among hidden variables.
Referent tracking for treatment optimisation in schizophrenic patients: A case study in applying philosophical ontology to diagnostic algorithms
2006
Abstract
The IPAP Schizophrenia Algorithm was originally designed in the form of a flowchart to help physicians optimise the treatment of schizophrenic patients in the spirit of guideline-based medicine. We take this algorithm as our starting point in investigating how artifacts of this sort can benefit from the facilities of high-quality ontologies. The IPAP algorithm exists thus far only in a form suitable for use by human beings. We draw on the resources of Basic Formal Ontology (BFO) in order to show how such an algorithm can be enhanced so that it can be used in Semantic Web and related applications. We found that BFO provides a framework able to capture in a rigorous way all the types of entities represented in the IPAP Schizophrenia Algorithm, in a way that yields a computational tool that can be used by software agents to perform monitoring and control of schizophrenic patients. We discuss the issues involved in building an application ontology for this purpose, issues which are important ...
Learning Associations by Discrete Measurement Models
Abstract
Discovering interesting associations in discrete databases is a key task in data mining. Association rules and graphical models among observed variables are standard tools in this analysis, but in problems where associations are due to hidden common causes not recorded in the database, the resulting models are overly complex and offer no picture of the causes of such dependencies. For instance, the pattern of answers in a large marketing survey might be explained by a few latent traits of the population, while a large set of association rules might offer little insight into this process. Instead, one can model the observed variables as measurements of latent concepts, as in discrete principal component analysis (PCA). However, discrete PCA and its variations rely on the assumption that the latents are independent. While such an assumption might be reasonable in, e.g., black-box models for classification, it makes little sense if the goal is understanding the real causes of the associations. We present in this paper a method for finding hidden common causes that explain observed associations among subsets of the given variables without imposing independence constraints over the latents. Variables should be binary or ordinal.
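As a toy sketch of the phenomenon this abstract describes (a hidden trait inducing associations among recorded answers), not the paper's own method: two binary survey answers that are both noisy copies of one hidden trait come out strongly associated with each other even though neither causes the other. All names, sample sizes, and flip probabilities below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hidden binary trait; two recorded binary answers are noisy copies of it.
trait = rng.random(n) < 0.5
flip1 = rng.random(n) < 0.1   # each answer disagrees with the trait 10% of the time
flip2 = rng.random(n) < 0.1
x1 = trait ^ flip1
x2 = trait ^ flip2

# Marginally, x1 and x2 are strongly associated, though neither causes the other.
p_given_1 = x1[x2].mean()     # estimate of P(x1=1 | x2=1)
p_given_0 = x1[~x2].mean()    # estimate of P(x1=1 | x2=0)
print(round(p_given_1 - p_given_0, 2))
```

An association-rule miner applied to `x1` and `x2` alone would report the dependency but say nothing about the single trait generating it, which is the gap the measurement-model approach aims to fill.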