Results 1 – 7 of 7
Learning the structure of linear latent variable models
 Journal of Machine Learning Research, 2006
Cited by 41 (13 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we ...
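The observable signature such procedures exploit can be seen in a small simulation: in a linear model where a single unrecorded cause d-separates four recorded variables, the "tetrad" products of covariances agree. A minimal sketch, with loadings and sample size that are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
lam = np.array([1.0, 0.8, 1.2, 0.6])      # hypothetical loadings of one latent on X1..X4

latent = rng.normal(size=n)               # the single common unrecorded cause
X = np.outer(latent, lam) + rng.normal(size=(n, 4))
S = np.cov(X, rowvar=False)

# With one common cause, the tetrad difference vanishes (up to sampling error):
# Cov(X1,X2)Cov(X3,X4) - Cov(X1,X3)Cov(X2,X4) = 0
tetrad = S[0, 1] * S[2, 3] - S[0, 2] * S[1, 3]
```

Both products estimate the same population quantity (here 0.8 × 0.72 = 0.576), so the difference is near zero; if the four variables did not share a single common cause, the constraint would generally fail.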
The TETRAD Project: Constraint Based Aids to Causal Model Specification
 Multivariate Behavioral Research
Glymour: Linearity properties of Bayes nets with binary variables
 Uncertainty in Artificial Intelligence: Proceedings of the 17th Conference (UAI 2001), 2001
Cited by 9 (5 self)
It is “well known” that in linear models: (1) testable constraints on the marginal distribution of observed variables distinguish certain cases in which an unobserved cause jointly influences several observed variables; (2) the technique of “instrumental variables” sometimes permits an estimation of the influence of one variable on another even when the association between the variables may be confounded by unobserved common causes; (3) the association (or conditional probability distribution of one variable given another) of two variables connected by a path or pair of paths with a single common vertex (a trek) can be computed directly from the parameter values associated with each edge in the trek; (4) the association of two variables produced by multiple treks can be computed from the parameters associated with each trek; and (5) the independence of two variables conditional on a third implies the corresponding independence of the sums of the variables over all units conditional on the sums over all ...
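Point (3), the trek rule, is straightforward to check numerically: for the trek X ← L → Y with edge coefficients a and b and Var(L) = 1, the covariance of X and Y is the product a·b. A minimal sketch (the coefficients and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, b = 0.7, 1.3                  # hypothetical edge coefficients for L -> X and L -> Y

L = rng.normal(size=n)           # common vertex of the trek X <- L -> Y, Var(L) = 1
X = a * L + rng.normal(size=n)   # independent error terms
Y = b * L + rng.normal(size=n)

# Trek rule: Cov(X, Y) equals the product of the edge parameters, a * b = 0.91
empirical = np.cov(X, Y)[0, 1]
```

The empirical covariance matches a·b up to sampling error, even though L itself is never observed.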
Generalized measurement models
, 2004
Cited by 7 (4 self)
Given a set of random variables, it is often the case that their associations can be explained by hidden common causes. We present a set of well-defined assumptions and a provably correct algorithm that allow us to identify some such hidden common causes. The assumptions are fairly general and sometimes weaker than those used in practice by, for instance, econometricians, psychometricians, social scientists, and researchers in many other fields where latent variable models are important and tools such as factor analysis are applicable. The goal is automated knowledge discovery: identifying latent variables that can be used across different applications and causal models and that offer new insights into a data-generating process. Our approach is evaluated through simulations and three real-world cases.
Automatic discovery of latent variable models
 Machine Learning Dept., CMU
, 2005
Cited by 5 (4 self)
Identification and likelihood inference for recursive linear models with correlated errors
, 2007
Cited by 2 (0 self)
In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by recursive systems of linear structural equations. Such models appear in particular in seemingly unrelated regressions, structural equation modelling, simultaneous equation systems, and in Gaussian graphical modelling. We show that recursive linear models that are ‘bow-free’ are well-behaved statistical models, namely, they are everywhere identifiable and form curved exponential families. Here, ‘bow-free’ refers to models satisfying the condition that if a variable x occurs in the structural equation for y, then the errors for x and y are uncorrelated. For the computation of maximum likelihood estimates in ‘bow-free’ recursive linear models we introduce the Residual Iterative Conditional Fitting (RICF) algorithm. Compared to existing algorithms, RICF is easily implemented, requiring only least-squares computations; it has clear convergence properties, and finds parameter estimates in closed form whenever possible.
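The ‘bow-free’ condition itself is a purely graphical check and can be sketched in a few lines. Here directed edges record which variable appears in which structural equation and bidirected pairs record correlated errors; the edge lists are hypothetical examples, not from the paper:

```python
# Hypothetical mixed graph: ("x", "y") in directed means x appears in y's
# structural equation; a bidirected pair means the errors of the two
# variables are correlated.
directed = {("x1", "x2"), ("x2", "x3")}
bidirected = {frozenset(("x1", "x3"))}

def is_bow_free(directed, bidirected):
    """True if no pair of variables has both a directed edge and correlated errors."""
    return all(frozenset(edge) not in bidirected for edge in directed)
```

A ‘bow’ would be, e.g., x1 → x2 together with correlated errors for x1 and x2; the example graph above has none, so it falls in the class the paper shows to be everywhere identifiable.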
Trek separation for Gaussian graphical models
 Submitted to the Annals of Statistics
Gaussian graphical models are semi-algebraic subsets of the cone of positive definite covariance matrices. Submatrices with low rank correspond to generalizations of conditional independence constraints on collections of random variables. We give a precise graph-theoretic characterization of when submatrices of the covariance matrix have small rank for a general class of mixed graphs that includes directed acyclic and undirected graphs as special cases. Our new trek separation criterion generalizes the familiar d-separation criterion. Proofs are based on the trek rule, the resulting matrix factorizations, and classical theorems of algebraic combinatorics on the expansions of determinants of path polynomials.
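The low-rank constraints in question can be seen in a small simulation: when every trek between two sets of observed variables passes through a single vertex, the corresponding cross-covariance submatrix has rank one. A minimal sketch with one latent cause of four indicators (loadings and noise level are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
lam = np.array([0.9, 0.8, 1.1, 0.7])      # hypothetical loadings L -> X1..X4

L = rng.normal(size=n)                    # single latent common cause
X = np.outer(L, lam) + 0.5 * rng.normal(size=(n, 4))
S = np.cov(X, rowvar=False)

# Every trek from {X1, X2} to {X3, X4} passes through the single vertex L,
# so the cross-covariance submatrix should have rank (at most) one: its
# second singular value is near zero relative to the first.
sub = S[np.ix_([0, 1], [2, 3])]
svals = np.linalg.svd(sub, compute_uv=False)
```

In population the submatrix is exactly the rank-one outer product of the two loading vectors; the sample version deviates from rank one only by sampling error.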