Binary models for marginal independence
 Journal of the Royal Statistical Society, Series B
, 2005
Abstract

Cited by 16 (2 self)
A number of authors have considered multivariate Gaussian models for marginal independence. In this paper we develop models for binary data with the same independence structure. The models can be parameterized based on Möbius inversion and maximum likelihood estimation can be performed using a version of the Iterated Conditional Fitting algorithm. The approach is illustrated on a simple example. Relations to multivariate logistic and dependence ratio models are discussed.
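The Möbius-inversion parameterization mentioned in the abstract can be sketched concretely. Assuming mean parameters p_A = P(X_i = 0 for all i in A), the joint distribution follows by inclusion-exclusion. The toy example below uses three independent binary variables purely so the answer is checkable by hand; the independence is an assumption of the illustration, not of the paper's models.

```python
from itertools import combinations
from math import prod

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

V = frozenset({0, 1, 2})
q = {0: 0.3, 1: 0.5, 2: 0.8}           # q[i] = P(X_i = 0), illustrative values

# Mean parameters p_A = P(X_i = 0 for all i in A); products here only because
# the toy variables are independent (empty product = 1).
p = {A: prod(q[i] for i in A) for A in subsets(V)}

# Moebius inversion: P(X_B = 0, X_{V \ B} = 1) = sum_{A >= B} (-1)^{|A|-|B|} p_A
joint = {B: sum((-1) ** (len(A) - len(B)) * p[A]
                for A in subsets(V) if B <= A)
         for B in subsets(V)}

assert abs(sum(joint.values()) - 1.0) < 1e-12    # a valid distribution
# P(X_0 = 0, X_1 = 1, X_2 = 1) = 0.3 * 0.5 * 0.2 = 0.03
assert abs(joint[frozenset({0})] - 0.03) < 1e-12
```

The same inversion formula applies when the p_A come from a constrained marginal-independence model rather than from full independence; only the construction of p changes.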
Probabilities of Causation: Bounds and Identification
 Annals of Mathematics and Artificial Intelligence
, 2000
Abstract

Cited by 14 (10 self)
This paper deals with the problem of estimating the probability of causation, that is, the probability that one event was the real cause of another, in a given scenario. Starting from structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to bound these quantities from data obtained in experimental and observational studies, under general assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by presenting sharp bounds based on combined experimental and non-experimental data under no process assumptions, as well as under the mild assumptions of exogeneity (no confounding) and monotonicity (no prevention). These results delineate more precisely the basic assumptions that must be made before statistical measures such as the excess-risk ratio could be used for assessing attributional quantities such as the probability of causation.
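For intuition, bounds of the kind this line of work derives can be computed directly from the two data sources. The sketch below follows the Tian-Pearl bounds on the probability of necessity and sufficiency (PNS) as I recall them, combining experimental quantities P(y | do(x)) with an observational joint distribution; the numbers are made up for illustration and are not from the paper.

```python
def pns_bounds(py_do_x, py_do_xp, pxy, pxyp, pxpy, pxpyp):
    """Sharp-style bounds on PNS = P(y_x, y'_{x'}) from combined data.

    py_do_x  = P(y | do(x)),  py_do_xp = P(y | do(x'))   (experimental)
    pxy, pxyp, pxpy, pxpyp = observational joint cells
        P(x, y), P(x, y'), P(x', y), P(x', y').
    """
    py = pxy + pxpy                          # observational P(y)
    lower = max(0.0,
                py_do_x - py_do_xp,
                py - py_do_xp,
                py_do_x - py)
    upper = min(py_do_x,
                1.0 - py_do_xp,              # P(y' | do(x'))
                pxy + pxpyp,
                py_do_x - py_do_xp + pxyp + pxpy)
    return lower, upper

# Illustrative numbers only: a strong experimental effect (0.8 vs 0.3)
# plus observational data narrows PNS to an interval, not a point.
lo, hi = pns_bounds(0.8, 0.3, pxy=0.4, pxyp=0.1, pxpy=0.2, pxpyp=0.3)
```

With these inputs the interval is roughly [0.5, 0.7]: even combined data generally identifies PNS only up to bounds, which is exactly the paper's point.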
Multiple testing and error control in Gaussian graphical model selection
 Statistical Science
Abstract

Cited by 12 (2 self)
Graphical models provide a framework for exploration of multivariate dependence patterns. The connection between graph and statistical model is made by identifying the vertices of the graph with the observed variables and translating the pattern of edges in the graph into a pattern of conditional independences that is imposed on the variables’ joint distribution. Focusing on Gaussian models, we review classical graphical models. For these models the defining conditional independences are equivalent to vanishing of certain (partial) correlation coefficients associated with individual edges that are absent from the graph. Hence, Gaussian graphical model selection can be performed by multiple testing of hypotheses about vanishing (partial) correlation coefficients. We show and exemplify how this approach allows one to perform model selection while controlling error rates for incorrect edge inclusion. Key words and phrases: Acyclic directed graph, Bayesian network, bidirected graph, chain graph, concentration graph, covariance graph, DAG, graphical model, multiple testing, undirected graph.
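A minimal sketch of the selection strategy the abstract describes: estimate partial correlations from the inverse sample covariance, test each with Fisher's z-transform, and control the family-wise error rate with a Bonferroni correction. The simulated chain structure and all numerical choices below are illustrative, not taken from the paper.

```python
import math
import numpy as np

def select_edges(X, alpha=0.05):
    """Concentration-graph selection by multiple testing of partial correlations."""
    n, p = X.shape
    K = np.linalg.inv(np.cov(X, rowvar=False))            # precision matrix
    m = p * (p - 1) // 2                                  # number of tests
    edges = []
    for i in range(p):
        for j in range(i + 1, p):
            r = -K[i, j] / math.sqrt(K[i, i] * K[j, j])   # partial correlation
            z = 0.5 * math.log((1 + r) / (1 - r))         # Fisher z-transform
            stat = math.sqrt(n - p - 1) * abs(z)
            pval = 2 * (1 - 0.5 * (1 + math.erf(stat / math.sqrt(2))))
            if pval < alpha / m:                          # Bonferroni control
                edges.append((i, j))
    return edges

# Chain X0 -> X1 -> X2: the partial correlation of (X0, X2) given X1 vanishes,
# so with high probability only the two true edges are selected.
rng = np.random.default_rng(0)
n = 2000
x0 = rng.normal(size=n)
x1 = x0 + rng.normal(size=n)
x2 = x1 + rng.normal(size=n)
edges = select_edges(np.column_stack([x0, x1, x2]))
```

The paper discusses finer error-control schemes than plain Bonferroni; the point of the sketch is only the reduction of model selection to simultaneous tests of vanishing partial correlations.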
Generalized measurement models
, 2004
Abstract

Cited by 7 (4 self)
Given a set of random variables, it is often the case that their associations can be explained by hidden common causes. We present a set of well-defined assumptions and a provably correct algorithm that allow us to identify some such hidden common causes. The assumptions are fairly general and sometimes weaker than those used in practice by, for instance, econometricians, psychometricians, social scientists, and researchers in many other fields where latent variable models are important and tools such as factor analysis are applicable. The goal is automated knowledge discovery: identifying latent variables that can be used across different applications and causal models and shed new light on a data-generating process. Our approach is evaluated through simulations and three real-world cases.
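One concrete statistical signature of a hidden common cause, of the kind this literature exploits, is the vanishing-tetrad constraint from factor analysis: if four observed variables share a single latent cause, products of their pairwise covariances balance (cov(1,2)cov(3,4) = cov(1,3)cov(2,4) = cov(1,4)cov(2,3)). The check below is a simplified illustration of that idea, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
L = rng.normal(size=n)                                   # hidden common cause
# Four indicators, each = latent + independent noise (unit loadings for simplicity).
X = np.column_stack([L + rng.normal(size=n) for _ in range(4)])

C = np.cov(X, rowvar=False)
# Under a single latent cause, all tetrad differences vanish in the population;
# the sample versions should be close to zero.
t1 = C[0, 1] * C[2, 3] - C[0, 2] * C[1, 3]
t2 = C[0, 1] * C[2, 3] - C[0, 3] * C[1, 2]
```

Testing which tetrad differences vanish (and which do not) is one way such algorithms decide how observed variables cluster under latent causes.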
Object-Oriented Graphical Representations of Complex Patterns of Evidence
, 2007
Abstract

Cited by 7 (2 self)
We reconsider two graphical aids to handling complex mixed masses of evidence in a legal case: Wigmore charts and Bayesian networks. Our aim is to forge a synthesis of their best features, and to develop this further to overcome remaining limitations. One important consideration is the multi-layered nature of a complex case, which can involve direct evidence, ancillary evidence, evidence about ancillary evidence, and so on, all of a number of different kinds. If all these features are represented in one diagram, the result can be messy and hard to interpret. In addition there are often recurrent features and patterns of evidence and evidential relations, e.g. credibility processes or match identification (DNA, eyewitness evidence, etc.) that may appear, in identical or similar form, at many different places within the same network, or within several different networks, and it is wasteful to model these all individually. The recently introduced technology of “object-oriented Bayesian networks” suggests a way of dealing with these problems. Any network can itself contain instances of other networks, the details of which can be hidden from view until information on their detailed structure is desired. Moreover, generic networks to represent recurrent patterns of evidence can be constructed once and for all, and copied or edited for reuse as needed. We describe the potential of this mode of description to simplify the construction and display of complex legal cases. To facilitate our narrative the celebrated Sacco and Vanzetti murder case is used to illustrate the various methods discussed.
Of starships and Klingons: Bayesian logic for the 23rd century
 Proc. UAI05
, 2005
Abstract

Cited by 7 (1 self)
Intelligent systems in an open world must reason about many interacting entities related to each other in diverse ways and having uncertain features and relationships. Traditional probabilistic languages lack the expressive power to handle relational domains. Classical first-order logic is sufficiently expressive, but lacks a coherent plausible reasoning capability. Recent years have seen the emergence of a variety of approaches to integrating first-order logic, probability, and machine learning. This paper presents Multi-Entity Bayesian Networks (MEBN), a formal system that integrates first-order logic (FOL) with Bayesian probability theory. MEBN extends ordinary …
Probabilities of causation: Bounds and identification
 In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence
, 2000
Abstract

Cited by 6 (5 self)
This paper deals with the problem of estimating the probability that one event was a cause of another in a given scenario. Using structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to optimally bound these quantities from data obtained in experimental and observational studies, making minimal assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by weakening the data-generation assumptions and deriving theoretically sharp bounds on the probabilities of causation. These results delineate precisely how empirical data can be used both in settling questions of attribution and in solving attribution-related problems of decision making.
Mendelian Randomisation: Why Epidemiology needs a Formal Language for Causality
Abstract

Cited by 6 (4 self)
For ethical or practical reasons, randomised controlled trials are not always an option to test epidemiological hypotheses. Epidemiologists are consequently faced with the problem of how to make causal inferences from observational data, particularly when confounding is present and not fully understood. The method of instrumental variables can be exploited for this purpose in a process known as Mendelian randomisation. However, the approach has not been developed to deal satisfactorily with a binary outcome variable in the presence of confounding. This has not been properly understood in the medical literature. We show that by defining the problem using a formal causal language, the difficulties can be identified and misinterpretations avoided.
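The instrumental-variables idea behind Mendelian randomisation can be illustrated in the simplest continuous-outcome case with the Wald ratio estimator. The simulation below, with made-up effect sizes, shows an unmeasured confounder biasing ordinary regression while the genotype-based ratio recovers the causal effect; the binary-outcome setting the paper analyses is precisely where this simple ratio no longer suffices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
G = rng.binomial(2, 0.3, size=n).astype(float)   # genotype: the instrument
U = rng.normal(size=n)                           # unmeasured confounder
X = 0.5 * G + U + rng.normal(size=n)             # exposure, affected by G and U
Y = 2.0 * X + U + rng.normal(size=n)             # outcome; true causal effect = 2

def cov(a, b):
    return float(np.cov(a, b)[0, 1])

beta_ols = cov(X, Y) / cov(X, X)   # naive regression: biased upward by U
beta_iv = cov(G, Y) / cov(G, X)    # Wald ratio: consistent under IV assumptions
```

Here `beta_ols` overshoots the true effect of 2 because U drives both X and Y, while `beta_iv` is close to 2 because the genotype is (by assumption) independent of U and affects Y only through X.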
Interventions and causal inference
 Philosophy of Science
, 2007
Abstract

Cited by 5 (1 self)
The literature on causal discovery has focused on interventions that involve randomly assigning values to a single variable. But such a randomized intervention is not the only possibility, nor is it always optimal. In some cases it is impossible or it would be unethical to perform such an intervention. We provide an account of “hard” and “soft” interventions, and discuss what they can contribute to causal discovery. We also describe how the choice of the optimal intervention(s) depends heavily on the particular experimental setup and the assumptions that can be made.
Combining experiments to discover linear cyclic models with latent variables
 In AISTATS 2010
, 2010
Abstract

Cited by 5 (2 self)
We present an algorithm to infer causal relations between a set of measured variables on the basis of experiments on these variables. The algorithm assumes that the causal relations are linear, but is otherwise completely general: It provides consistent estimates when the true causal structure contains feedback loops and latent variables, while the experiments can involve surgical or ‘soft’ interventions on one or multiple variables at a time. The algorithm is ‘online’ in the sense that it combines the results from any set of available experiments, can incorporate background knowledge, and resolves conflicts that arise from combining results from different experiments. In addition we provide a necessary and sufficient condition that (i) determines when the algorithm can uniquely return the true graph, and (ii) can be used to select the next best experiment until this condition is satisfied. We demonstrate the method by applying it to simulated data and the flow cytometry data of Sachs et al. (2005).