Results 1–8 of 8
Computing Maximum Likelihood Estimates in Recursive . . .
, 2008
Abstract

Cited by 5 (2 self)
In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by a recursive (or acyclic) system of linear structural equations. These linear models have a long tradition and appear in seemingly unrelated regressions, structural equation modelling, and approaches to causal inference. They are also related to Gaussian graphical models via a classical representation known as a path diagram. Despite the models’ long history, a number of problems remain open. In this paper, we address the problem of computing maximum likelihood estimates in the subclass of ‘bow-free’ recursive linear models. The term ‘bow-free’ refers to the condition that the errors for variables i and j be uncorrelated if variable i occurs in the structural equation for variable j. We introduce a new algorithm, termed Residual Iterative Conditional Fitting (RICF), that can be implemented using only least squares computations. In contrast to existing algorithms, RICF has clear convergence properties and finds parameter estimates in closed form whenever possible.
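The "only least squares computations" claim can be illustrated in the simplest bow-free setting, where all error terms are uncorrelated: each structural equation can then be fit separately by ordinary least squares in closed form. The sketch below is an illustration of that special case, not the authors' RICF implementation; the three-variable chain and its coefficients are invented for the example.

```python
import random

random.seed(0)
n = 20_000

# Recursive (acyclic) system x1 -> x2 -> x3 with independent errors;
# true structural coefficients: b21 = 2.0, b32 = -0.5 (made up here).
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [2.0 * a + random.gauss(0, 1) for a in x1]
x3 = [-0.5 * b + random.gauss(0, 1) for b in x2]

def ols_slope(x, y):
    """Closed-form least-squares slope through the origin."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# With uncorrelated errors, per-equation OLS is the MLE; RICF extends
# this by iterating least-squares steps on residuals when errors in
# different equations are allowed to be correlated.
b21 = ols_slope(x1, x2)
b32 = ols_slope(x2, x3)
print(f"b21 ~ {b21:.2f}, b32 ~ {b32:.2f}")
```

The estimates land close to the generating coefficients, which is the closed-form case the abstract alludes to; the correlated-error case is where the iteration in RICF becomes necessary.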
A transformational characterization of Markov equivalence for directed acyclic graphs with latent variables
 In: Proc. of the 21st Conference on Uncertainty in Artificial Intelligence (UAI)
, 2005
Abstract

Cited by 3 (1 self)
The conditional independence relations present in a data set usually admit multiple causal explanations — typically represented by directed graphs — which are Markov equivalent in that they entail the same conditional independence relations among the observed variables. Markov equivalence between directed acyclic graphs (DAGs) has been characterized in various ways, each of which has been found useful for certain purposes. In particular, Chickering’s transformational characterization is useful in deriving properties shared by Markov equivalent DAGs and, with a certain generalization, is needed to justify a search procedure over Markov equivalence classes known as the GES algorithm. Markov equivalence between DAGs with latent variables has also been characterized, in the spirit of Verma and Pearl (1990), via maximal ancestral graphs (MAGs). The latter can represent the observable conditional independence relations as well as some causal features of DAG models with latent variables. However, no characterization of Markov equivalent MAGs is yet available that is analogous to the transformational characterization for Markov equivalent DAGs. The main contribution of the current paper is to establish such a characterization for directed MAGs, which we expect to have uses for MAGs similar to those Chickering’s characterization has for DAGs.
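The Verma and Pearl (1990) criterion mentioned above has a compact statement for DAGs without latent variables: two DAGs are Markov equivalent iff they have the same skeleton and the same v-structures (colliders a -> c <- b with a and b non-adjacent). A small sketch of that classical criterion, not of the paper's MAG characterization:

```python
# Each DAG is a set of (parent, child) edges.

def skeleton(edges):
    """Undirected adjacencies of the DAG."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Colliders a -> c <- b with a, b non-adjacent."""
    adj = skeleton(edges)
    cols = set()
    for (a, c1) in edges:
        for (b, c2) in edges:
            if c1 == c2 and a < b and frozenset((a, b)) not in adj:
                cols.add((a, c1, b))
    return cols

def markov_equivalent(g1, g2):
    """Verma-Pearl criterion: same skeleton and same v-structures."""
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

chain = {("x", "y"), ("y", "z")}        # x -> y -> z
fork = {("y", "x"), ("y", "z")}         # x <- y -> z
collider = {("x", "y"), ("z", "y")}     # x -> y <- z

print(markov_equivalent(chain, fork))       # True: same skeleton, no colliders
print(markov_equivalent(chain, collider))   # False: collider at y
```

Chickering's transformational characterization recasts this same equivalence in terms of sequences of single-edge reversals; the paper's contribution is the analogous transformational result for directed MAGs, where this simple skeleton-plus-collider test no longer suffices.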
Maximum likelihood fitting of acyclic directed mixed graphs to binary data
 Proceedings of the 26th International Conference on Uncertainty in Artificial Intelligence
, 2010
Iterative conditional fitting for discrete chain graph models
 In COMPSTAT 2008 – Proceedings in Computational Statistics 93–104
, 2008
Abstract

Cited by 1 (1 self)
‘Iterative conditional fitting’ is a recently proposed algorithm that can be used for maximization of the likelihood function in marginal independence models for categorical data. This paper describes a modification of this algorithm, which allows one to compute maximum likelihood estimates in a class of chain graph models for categorical data. The considered discrete chain graph models are defined using conditional independence relations arising in recursive multivariate regressions with correlated errors. This Markov interpretation of the chain graph is consistent with treating the graph as a path diagram and differs from other interpretations known as the LWF and AMP Markov properties.
Learning Linear Bayesian Networks with Latent Variables
Abstract

Cited by 1 (0 self)
This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it is satisfied by many natural families of DAGs, including multilevel DAGs, DAGs with effective depth one, and certain families of polytrees.
A Characterization of Markov Equivalence Classes for Directed Acyclic Graphs with Latent Variables
Abstract
Different directed acyclic graphs (DAGs) may be Markov equivalent in the sense that they entail the same conditional independence relations among the observed variables. Meek (1995) characterizes Markov equivalence classes for DAGs (with no latent variables) by presenting a set of orientation rules that can correctly identify all arrow orientations shared by all DAGs in a Markov equivalence class, given a member of that class. For DAG models with latent variables, maximal ancestral graphs (MAGs) provide a neat representation that facilitates model search. Earlier work (Ali et al. 2005) has identified a set of orientation rules sufficient to construct all arrowheads common to a Markov equivalence class of MAGs. In this paper, we provide extra rules sufficient to construct all common tails as well. We end up with a set of orientation rules that is sound and complete for identifying commonalities across a Markov equivalence class of MAGs, which is particularly useful for causal inference.
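The flavor of Meek-style orientation rules can be seen in the first and simplest of Meek's (1995) rules for the latent-free DAG case: if a -> b is oriented and b - c is undirected with a and c non-adjacent, then b - c must be oriented as b -> c, since b <- c would create a new collider at b and change the equivalence class. A sketch of that single rule (the paper's MAG rules are more involved):

```python
def apply_meek_r1(directed, undirected):
    """Repeatedly apply Meek's rule R1: given a -> b and undirected
    b - c with a, c non-adjacent, orient b -> c. Returns updated sets."""
    directed, undirected = set(directed), set(undirected)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(directed):
            for e in list(undirected):
                if b in e:
                    (c,) = e - {b}
                    adjacent = (frozenset((a, c)) in undirected
                                or (a, c) in directed or (c, a) in directed)
                    if c != a and not adjacent:
                        undirected.remove(e)
                        directed.add((b, c))
                        changed = True
    return directed, undirected

# Pattern: a -> b known, b - c undirected, a and c non-adjacent.
d, u = apply_meek_r1({("a", "b")}, {frozenset(("b", "c"))})
print(sorted(d), sorted(u))  # b -> c gets oriented; no undirected edges remain
```

Meek's full rule set for DAGs has three further rules; the contribution of this paper is the analogous complete set for MAGs, covering common tails in addition to the arrowhead rules of Ali et al. (2005).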