Quantifier elimination for statistical problems
In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI-99), 1999
Cited by 10 (0 self)
Abstract:
Recent improvements on Tarski's procedure for quantifier elimination in the first-order theory of real numbers make it feasible to solve small instances of the following problems completely automatically:
1. listing all equality and inequality constraints implied by a graphical model with hidden variables;
2. comparing graphical models with hidden variables (i.e., model equivalence, inclusion, and overlap);
3. answering questions about the identification of a model or portion of a model, and about bounds on quantities derived from a model;
4. determining whether an independence assertion is implied by a given set of independence assertions.
We discuss the foundations of quantifier elimination and demonstrate its application to these problems.
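What quantifier elimination produces can be seen in the textbook example over the reals: the quantified formula "there exists x with x² + bx + c = 0" is equivalent to the quantifier-free condition b² − 4c ≥ 0. The sketch below only cross-checks that equivalence numerically; it is an illustration of elimination output, not Tarski's algorithm or the implementation discussed in the paper.

```python
def exists_real_root(b, c):
    # Direct check of the quantified formula "there exists x: x^2 + b*x + c = 0".
    # The upward parabola attains its minimum at x = -b/2, so a real root
    # exists exactly when the value there is <= 0.
    xmin = -b / 2.0
    return xmin * xmin + b * xmin + c <= 0.0

def eliminated(b, c):
    # Quantifier-free equivalent produced by elimination: b^2 - 4c >= 0.
    return b * b - 4.0 * c >= 0.0

# The two formulas agree on every (b, c) pair tried.
for b in range(-5, 6):
    for c in range(-5, 6):
        assert exists_real_root(b, c) == eliminated(b, c)
```

The same idea, applied to the polynomial (in)equalities that parameterize a graphical model with hidden variables, is what lets the listed problems be decided mechanically.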
On causally asymmetric versions of Occam’s Razor and their relation to thermodynamics
2007
Finding Optimal Bayesian Network Given a SuperStructure
Cited by 7 (0 self)
Abstract:
Classical approaches used to learn Bayesian network structure from data have disadvantages in terms of complexity and lower accuracy of their results. However, a recent empirical study has shown that a hybrid algorithm markedly improves accuracy and speed: it learns a skeleton with an independency test (IT) approach and constrains the directed acyclic graphs (DAGs) considered during the search-and-score phase. Subsequently, we theorize the structural constraint by introducing the concept of superstructure S, an undirected graph that restricts the search to networks whose skeleton is a subgraph of S. We develop a superstructure constrained optimal search (COS): its time complexity is upper bounded by O(γ_m^n), where γ_m < 2 depends on the maximal degree m of S. Empirically, complexity depends on the average degree m̃, and sparse structures allow larger graphs to be calculated. Our algorithm is faster than an optimal search by several orders of magnitude and even finds more accurate results when given a sound superstructure. Practically, S can be approximated by IT approaches; the significance level of the tests controls its sparseness, allowing the trade-off between speed and accuracy to be tuned. For incomplete superstructures, a greedily post-processed version (COS+) still significantly outperforms other heuristic searches. Keywords: Bayesian networks, structure learning, optimal search, superstructure, connected subset
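The effect of the structural constraint can be seen by counting: restricting candidates to DAGs whose skeleton is a subgraph of S shrinks the search space. The toy enumeration below (three variables, a hypothetical path-shaped S; a sketch of the constraint only, not the COS algorithm) compares the unconstrained and constrained counts.

```python
from itertools import combinations, product

def is_acyclic(nodes, edges):
    # Kahn's algorithm: the graph is a DAG iff every node can be removed
    # in topological order.
    indeg = {v: 0 for v in nodes}
    adj = {v: [] for v in nodes}
    for u, v in edges:
        indeg[v] += 1
        adj[u].append(v)
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(nodes)

def count_dags(nodes, candidate_pairs):
    # Each candidate undirected edge is either absent, oriented u->v, or v->u.
    count = 0
    for choice in product((0, 1, 2), repeat=len(candidate_pairs)):
        edges = []
        for c, (u, v) in zip(choice, candidate_pairs):
            if c == 1:
                edges.append((u, v))
            elif c == 2:
                edges.append((v, u))
        if is_acyclic(nodes, edges):
            count += 1
    return count

nodes = ["a", "b", "c"]
unconstrained = list(combinations(nodes, 2))  # no superstructure: all pairs
S = [("a", "b"), ("b", "c")]                  # hypothetical superstructure skeleton
print(count_dags(nodes, unconstrained))       # 25 DAGs on three nodes
print(count_dags(nodes, S))                   # only 9 respect S
```

Already at three nodes the constrained space is under half the size; the exponential gap (γ_m < 2 versus 2 in the exponent) is what makes optimal search feasible on larger sparse networks.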
Separation and Completeness Properties for AMP Chain Graph Markov Models
Ann. Statist., 2000
Strong Faithfulness and Uniform Consistency in Causal Inference
In Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence, 2003
Cited by 6 (0 self)
Abstract:
A fundamental question in causal inference is whether it is possible to reliably infer manipulation effects from observational data. There are a variety of senses of asymptotic reliability in the statistical literature, among which the most commonly discussed frequentist notions are pointwise consistency and uniform consistency (see, e.g., Bickel and Doksum [2001]). Uniform consistency is in general preferred to pointwise consistency because the former allows us to control the worst-case error bounds with a finite sample size. In the sense of pointwise consistency, several reliable causal inference algorithms have been established under the Markov and Faithfulness assumptions [Pearl 2000, Spirtes et al. 2001]. In the sense of uniform consistency, however, reliable causal inference is impossible under the two assumptions when the time order is unknown and/or latent confounders are present [Robins et al. 2000]. In this paper we present two natural generalizations of the Faithfulness assumption in the context of structural equation models, under which we show that the typical algorithms in the literature are uniformly consistent, with or without modifications, even when the time order is unknown. We also discuss the situation where latent confounders may be present and the sense in which the Faithfulness assumption is a limiting case of the stronger assumptions.
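In the linear-Gaussian setting, the flavor of such a strengthened faithfulness assumption can be written as a lower bound on partial correlations (a paraphrase of the idea for this summary, not a quotation of the paper's exact condition): for a fixed $\lambda \in (0,1)$,

\[
X \not\perp_G Y \mid \mathbf{Z} \;\Longrightarrow\; \bigl|\rho(X, Y \mid \mathbf{Z})\bigr| \ge \lambda
\qquad \text{for all disjoint } X,\, Y,\, \mathbf{Z},
\]

so every dependence entailed by the true graph $G$ is bounded away from zero. Bounding these partial correlations away from zero is what makes worst-case (uniform) error control possible at finite sample sizes, whereas plain faithfulness only rules out exact cancellation.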
Causal reasoning with ancestral graphs
2008
Cited by 6 (0 self)
Abstract:
Causal reasoning is primarily concerned with what would happen to a system under external interventions. In particular, we are often interested in predicting the probability distribution of some random variables that would result if some other variables were forced to take certain values. One prominent approach to tackling this problem is based on causal Bayesian networks, using directed acyclic graphs as causal diagrams to relate post-intervention probabilities to pre-intervention probabilities that are estimable from observational data. However, such causal diagrams are seldom fully testable given observational data. In consequence, many causal discovery algorithms based on data mining can only output an equivalence class of causal diagrams (rather than a single one). This paper is concerned with causal reasoning given an equivalence class of causal diagrams, represented by a (partial) ancestral graph. We present two main results. The first result extends Pearl's (1995) celebrated do-calculus to the context of ancestral graphs. In the second result, we focus on a key component of Pearl's calculus, the property of invariance under interventions, and give stronger graphical conditions for this property than those implied by the first result. The second result also improves on the earlier, similar results due to Spirtes et al. (1993).
Inferring dynamic genetic networks with low order independencies
Statistical Applications in Genetics and Molecular Biology 8, 2009
Cited by 5 (0 self)
Abstract:
In this paper, we propose a novel inference method for dynamic genetic networks which makes it possible to deal with a number of time measurements n much smaller than the number of genes p. The approach is based on the concept of the low-order conditional dependence graph, which we extend here to the case of Dynamic Bayesian Networks. Most of our results are based on the theory of graphical models associated with Directed Acyclic Graphs (DAGs). In this way, we define a DAG G̃ which describes exactly the full-order conditional dependencies given the past of the process. Then, to cope with the large p and small n estimation case, we propose to approximate DAG G̃ by considering low-order conditional independencies. We introduce partial q-th order conditional dependence DAGs G^(q) and analyze their probabilistic properties. In general, DAGs G^(q) differ from G̃ but still reflect relevant dependence facts for sparse networks such as genetic networks. By using this approximation, we set out a non-Bayesian inference method and demonstrate the effectiveness of this approach on both simulated and real data. The inference procedure is implemented in the R package 'G1DBN', which is available from the CRAN archive.
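The building block of such low-order approximations is a first-order conditional dependence measure. A minimal numpy sketch (illustrative only, not the G1DBN estimator; the toy chain z → x → y is invented for the example): the indirect pair is screened off by a single conditioning variable, while the direct pair is not.

```python
import numpy as np

def _residuals(a, b):
    # Residuals of a after least-squares regression on b (with intercept).
    design = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(design, a, rcond=None)
    return a - design @ coef

def partial_corr(x, y, z):
    # First-order partial correlation rho(x, y | z): correlate what is left
    # of x and y once the linear effect of z is removed.
    return np.corrcoef(_residuals(x, z), _residuals(y, z))[0, 1]

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)            # upstream variable (toy data)
x = z + 0.5 * rng.normal(size=n)  # mediator: z -> x
y = x + 0.5 * rng.normal(size=n)  # target:   x -> y

# The indirect pair (z, y) is screened off by x; the direct pair (x, y) is not.
print(abs(partial_corr(z, y, x)) < abs(partial_corr(x, y, z)))  # True
```

Keeping an edge only when such a statistic survives conditioning on each single other variable (q = 1) is what lets the graph be estimated with far fewer time points than genes.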
Learning Gaussian graphical models of gene networks with false discovery rate control
 In 6th European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics
Cited by 3 (0 self)
Abstract:
In many cases what matters is not whether a false discovery is made but the expected proportion of false discoveries among all the discoveries made, i.e. the so-called false discovery rate (FDR). We present an algorithm aiming at controlling the FDR of edges when learning Gaussian graphical models (GGMs). The algorithm is particularly suitable when dealing with more nodes than samples, e.g. when learning GGMs of gene networks from gene expression data. We illustrate this on the Rosetta compendium [8].
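A standard way to control the FDR over a batch of hypotheses, one per candidate edge, is the Benjamini-Hochberg step-up rule. The sketch below is that generic procedure with made-up per-edge p-values, not the authors' GGM-specific algorithm.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean reject-mask controlling the FDR at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k with p_(k) <= q * k / m; reject the k smallest p-values.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject

# Hypothetical p-values from per-edge independence tests.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.50], q=0.05))
# -> [True, True, True, False]
```

Unlike a per-test significance cutoff, the threshold here adapts to the whole batch, which is what bounds the expected proportion of false edges among those reported.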
Independence for Full Conditional Measures, Graphoids and Bayesian Networks
2007
Cited by 3 (2 self)
Abstract:
This paper examines definitions of independence for events and variables in the context of full conditional measures; that is, when conditional probability is a primitive notion and conditioning is allowed on null events. Several independence concepts are evaluated with respect to graphoid properties; we show that properties of weak union, contraction and intersection may fail when null events are present. We propose a concept of “full” independence, characterize the form of a full conditional measure under full independence, and suggest how to build a theory of Bayesian networks that accommodates null events.