Results 1–10 of 14
Bayes-Ball: The rational pastime (for determining irrelevance and requisite information in belief networks and influence diagrams)
In Uncertainty in Artificial Intelligence, 1998
"... One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or d ..."
Abstract

Cited by 43 (3 self)
 Add to MetaCart
One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or decision problem, only some of the possible states and probability distributions must be specified, the “requisite information.” This paper presents a new, simple, and efficient “Bayes-Ball” algorithm which is well-suited to both new students of belief networks and state-of-the-art implementations. The Bayes-Ball algorithm determines irrelevant sets and requisite information more efficiently than existing methods, and is linear in the size of the graph for belief networks and influence diagrams.
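The linear-time check the abstract describes can be sketched compactly. Below is a minimal, illustrative reimplementation of the Bayes-Ball reachability idea (function names and the graph encoding are my own, and the paper's requisite-information marking is omitted): X and Y are d-separated given Z exactly when no "ball" bounced from X along active trails reaches Y.

```python
from collections import deque

def d_separated(parents, x, y, z):
    """Return True iff x and y are d-separated given the set z.

    `parents` maps each node to a list of its parents (a DAG).
    A sketch of the Bayes-Ball / reachability procedure: each
    (node, direction) state is visited at most once, so the check
    runs in time linear in the size of the graph.
    """
    children = {n: [] for n in parents}
    for n in parents:
        for p in parents[n]:
            children[p].append(n)
    z = set(z)
    # Ancestors of z (including z): a collider unblocks a trail exactly
    # when it, or one of its descendants, is conditioned on.
    ancestors, stack = set(), list(z)
    while stack:
        n = stack.pop()
        if n not in ancestors:
            ancestors.add(n)
            stack.extend(parents[n])
    visited, queue = set(), deque([(x, "up")])
    while queue:
        node, direction = queue.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == y and node not in z:
            return False  # an active trail from x reached y
        if direction == "up" and node not in z:
            # Arrived from a child: the ball passes to parents and children.
            queue.extend((p, "up") for p in parents[node])
            queue.extend((c, "down") for c in children[node])
        elif direction == "down":
            if node not in z:
                queue.extend((c, "down") for c in children[node])
            if node in ancestors:
                # Collider with a conditioned descendant: bounce back up.
                queue.extend((p, "up") for p in parents[node])
    return True

chain = {"A": [], "B": ["A"], "C": ["B"]}       # A -> B -> C
collider = {"A": [], "B": [], "C": ["A", "B"]}  # A -> C <- B
print(d_separated(chain, "A", "C", ["B"]))      # True: conditioning on B blocks the chain
print(d_separated(collider, "A", "B", ["C"]))   # False: conditioning on C opens the collider
```

The two example graphs show the two blocking rules at work: conditioning blocks a chain but activates a collider.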
A simple constraint-based algorithm for efficiently mining observational databases for causal relationships
Data Mining and Knowledge Discovery, 1997
"... Abstract. This paper presents a simple, efficient computerbased method for discovering causal relationships from databases that contain observational data. Observational data is passively observed, as contrasted with experimental data. Most of the databases available for data mining are observation ..."
Abstract

Cited by 29 (2 self)
 Add to MetaCart
This paper presents a simple, efficient computer-based method for discovering causal relationships from databases that contain observational data. Observational data is passively observed, as contrasted with experimental data. Most of the databases available for data mining are observational. There is great potential for mining such databases to discover causal relationships. We illustrate how observational data can constrain the causal relationships among measured variables, sometimes to the point that we can conclude that one variable is causing another variable. The presentation here is based on a constraint-based approach to causal discovery. A primary purpose of this paper is to present the constraint-based causal discovery method in the simplest possible fashion in order to (1) readily convey the basic ideas that underlie more complex constraint-based causal discovery techniques, and (2) permit interested readers to rapidly program and apply the method to their own databases, as a start toward using more elaborate causal discovery algorithms.
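As a toy illustration of the constraint-based idea (a sketch of the general pattern, not the paper's algorithm; the data, thresholds, and helper names are assumptions of mine): a pattern of marginal and conditional (in)dependence can orient a collider. If X and Y are independent but become dependent once Z is conditioned on, the orientation X → Z ← Y is inferred.

```python
import math
import random

def corr(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(xs, ys, zs):
    """Correlation of xs and ys after partialling out zs."""
    rxy, rxz, ryz = corr(xs, ys), corr(xs, zs), corr(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

random.seed(0)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]
z = [a + b + random.gauss(0, 0.5) for a, b in zip(x, y)]  # collider: x -> z <- y

marginal = corr(x, y)                 # near zero: x and y look independent
conditional = partial_corr(x, y, z)   # far from zero: dependence appears given z
```

Independence that disappears under conditioning is the constraint that licenses the collider orientation; real constraint-based algorithms replace these informal correlation checks with formal statistical independence tests.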
Partial correlation for functional brain interactivity investigation in functional MRI
2006
A new inferential test for path models based on directed acyclic graphs
Structural Equation Modeling
"... This article introduces a new inferential test for acyclic structural equation models (SEM) without latent variables or correlated errors. The test is based on the independence relations predicted by the directed acyclic graph of the SEMs, as given by the concept of dseparation. A wide range of dis ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
This article introduces a new inferential test for acyclic structural equation models (SEM) without latent variables or correlated errors. The test is based on the independence relations predicted by the directed acyclic graph of the SEM, as given by the concept of d-separation. A wide range of distributional assumptions and structural functions can be accommodated. No iterative fitting procedures are used, precluding problems involving convergence. Exact probability estimates can be obtained, thus permitting the testing of models with small data sets. Structural equations represent the translation of a hypothesized series of cause–effect relationships between variables into a composite statistical hypothesis concerning patterns of statistical dependencies. The development of an inferential test for such a composite statistical hypothesis (see Bollen, 1989, for a historical summary) has had a large impact on fields of study in which multivariate causal hypotheses cannot be tested through randomized experiments. The various statistical innovations that were spawned by this method have mostly followed the same basic logic. A series of hypothesized causal relationships between the variables are combined to form a directed graph (the path model). This directed graph implies a series of path coefficients, some of which are fixed to some a priori value (usually zero) and the rest of which are free to vary. These free parameters are estimated by minimizing some discrepancy measure such as the maximum likelihood loss function. The predicted variance–covariance matrix, implied by the set of fully parameterized structural equations, is then compared to the sample variance–covariance matrix using a fit statistic that has a known, usually asymptotic, probability distribution.
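One standard way to turn the d-separation claims of a DAG into a single test statistic, and the route Shipley's d-sep test takes, is Fisher's C: each of the k independence claims implied by the graph is tested, and the resulting p-values are combined; under the model, C follows a chi-square distribution with 2k degrees of freedom. A minimal sketch (the example p-values below are invented for illustration):

```python
import math

def fisher_c(p_values):
    """Fisher's C: combine p-values from k independent tests.
    Under the null (the path model is correct), C ~ chi-square, 2k df."""
    return -2.0 * sum(math.log(p) for p in p_values)

def chi2_sf_even_df(c, df):
    """Chi-square survival function P(X > c) for even df (closed form);
    always applicable here, since df = 2k is even."""
    k = df // 2
    half = c / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Hypothetical p-values from k = 3 independence tests implied by the DAG.
pvals = [0.41, 0.55, 0.12]
c = fisher_c(pvals)                      # C ≈ 7.22
p = chi2_sf_even_df(c, 2 * len(pvals))   # p ≈ 0.30: the model is not rejected
```

Because each independence test can use whatever method suits the data, this construction accommodates the wide range of distributional assumptions the abstract mentions.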
Modeling Discrete Interventional Data using Directed Cyclic Graphical Models
"... We outline a representation for discrete multivariate distributions in terms of interventional potential functions that are globally normalized. This representation can be used to model the effects of interventions, and the independence properties encoded in this model can be represented as a direct ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
We outline a representation for discrete multivariate distributions in terms of interventional potential functions that are globally normalized. This representation can be used to model the effects of interventions, and the independence properties encoded in this model can be represented as a directed graph that allows cycles. In addition to discussing inference and sampling with this representation, we give an exponential family parametrization that allows parameter estimation to be stated as a convex optimization problem; we also give a convex relaxation of the task of simultaneous parameter and structure learning using group ℓ1-regularization. The model is evaluated on simulated data and intracellular flow cytometry data.
On Deducing Conditional Independence from d-Separation in Causal Graphs with Feedback
Journal of Artificial Intelligence Research, 2000
"... Pearl and Dechter (1996) claimed that the dseparation criterion for conditional independence in acyclic causal networks also applies to networks of discrete variables that have feedback cycles, provided that the variables of the system are uniquely determined by the random disturbances. I show b ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Pearl and Dechter (1996) claimed that the d-separation criterion for conditional independence in acyclic causal networks also applies to networks of discrete variables that have feedback cycles, provided that the variables of the system are uniquely determined by the random disturbances. I show by example that this is not true in general. Some condition stronger than uniqueness is needed, such as the existence of a causal dynamics guaranteed to lead to the unique solution. Causal networks (also known as Bayesian networks or belief networks) are a formalism for representing the joint distribution of a collection of random variables in terms of the conditional distributions for each variable given values for its "parent" variables. The structure of the distribution is represented graphically by a network in which nodes represent variables and arrows are drawn from parent nodes to child nodes. These arrows typically correspond to causal relationships. In the standard formulation, th...
Structure learning in causal cyclic networks
In JMLR Workshop and Conference Proceedings, 2010
"... Cyclic graphical models are unnecessary for accurate representation of joint probability distributions, but are often indispensable when a causal representation of variable relationships is desired. For variables with a cyclic causal dependence structure, DAGs are guaranteed not to recover the corre ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Cyclic graphical models are unnecessary for accurate representation of joint probability distributions, but are often indispensable when a causal representation of variable relationships is desired. For variables with a cyclic causal dependence structure, DAGs are guaranteed not to recover the correct causal structure, and therefore may yield false predictions about the outcomes of perturbations (and even inference). In this paper, we introduce an approach to generalize Bayesian Network structure learning to structures with cyclic dependence. We introduce a structure learning algorithm, prove its performance given reasonable assumptions, and use simulated data to compare its results to the results of standard Bayesian network structure learning. We then propose a modified, heuristic algorithm with more modest data requirements, and test its performance on a real-life dataset from molecular biology, containing causal, cyclic dependencies.
Automated search for causal relations: Theory and practice
In Heuristics, Probability and Causality: A Tribute to Judea Pearl, 2010
"... The rapid spread of interest in the last two decades in principled methods of search or estimation of causal relations has been driven in part by technological developments, especially the changing nature of modern data collection and storage techniques, and the increases in the speed and storage ca ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
The rapid spread of interest in the last two decades in principled methods of search or estimation of causal relations has been driven in part by technological developments, especially the changing nature of modern data collection and storage techniques, and the increases in the speed and storage capacities of computers. Statistics books from 30 years ...
Causal Explorer: A Matlab Library of Algorithms for Causal Discovery and Variable Selection for Classification
"... Abstract: Causal Explorer is a Matlab library of computational causal discovery and variable selection algorithms. Causal Explorer offers a wide variety of major prototypical and stateoftheart algorithms in the field and a unified and easytolearn programming interface. Causal Explorer is designe ..."
Abstract
 Add to MetaCart
Causal Explorer is a Matlab library of computational causal discovery and variable selection algorithms. Causal Explorer offers a wide variety of major prototypical and state-of-the-art algorithms in the field and a unified and easy-to-learn programming interface. Causal Explorer is designed for all researchers performing data analysis who wish to gain an understanding of the underlying causal mechanisms that generated their data. In addition to the causal discovery methods, Causal Explorer contains related variable selection techniques. The variable selection algorithms in Causal Explorer are based on theories of causal discovery, and the selected variables have a specific causal interpretation. The Causal Explorer code emphasizes efficiency, scalability, and quality of discovery. The implementations of previously published algorithms included in Causal Explorer are more efficient than their original implementations. A unique advantage of Causal Explorer is the inclusion of very large scale, high quality algorithms developed by the authors of this chapter. The first version of Causal Explorer was introduced several years ago to the biomedical community. The purposes of this chapter are to reintroduce the library to a broader audience, to describe new functionality of the library, and to provide information on the use of Causal Explorer in the community as a whole and in the Causation and Prediction Challenge.