Results 1–7 of 7
A Minimum Relative Entropy Principle for Learning and Acting
 J. Artif. Intell. Res. 2010
Abstract
Cited by 6 (4 self)
This paper proposes a method to construct an adaptive agent that is universal with respect to a given class of experts, where each expert is designed specifically for a particular environment. This adaptive control problem is formalized as the problem of minimizing the relative entropy of the adaptive agent from the expert that is most suitable for the unknown environment. If the agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than normal probability conditions. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.
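The Bayesian control rule described in this abstract lends itself to a toy sketch. The two-armed bandit setup, the expert policies, and the 0.9/0.1 reward probabilities below are illustrative assumptions, not the paper's own experiments. The key point mirrored from the abstract: the agent samples an expert from its posterior and acts with that expert's policy, and the posterior is updated only on the observed reward given the action — the action itself is treated as an intervention and contributes no likelihood term.

```python
import random

random.seed(0)

# Two hypothetical environments: in environment m, pulling arm m pays off
# with probability 0.9, the other arm with probability 0.1.
def reward_prob(arm, env):
    return 0.9 if arm == env else 0.1

# Expert m is tailored to environment m: it always pulls arm m.
def expert_policy(m):
    return m

def bayesian_control_rule(true_env, steps=200):
    """Sample an expert from the posterior, act with its policy, then update
    the posterior on the observed reward GIVEN the action (no likelihood
    term for the action itself, since it is a causal intervention)."""
    posterior = [0.5, 0.5]
    for _ in range(steps):
        m = random.choices([0, 1], weights=posterior)[0]  # sample an expert
        arm = expert_policy(m)                            # act with its policy
        reward = 1 if random.random() < reward_prob(arm, true_env) else 0
        # Likelihood of the observation under each expert's environment model.
        like = [reward_prob(arm, e) if reward else 1 - reward_prob(arm, e)
                for e in (0, 1)]
        posterior = [p * l for p, l in zip(posterior, like)]
        z = sum(posterior)
        posterior = [p / z for p in posterior]
    return posterior

posterior = bayesian_control_rule(true_env=0)
```

With enough steps the posterior concentrates on the expert matched to the true environment, which is the convergence behavior the abstract claims under mild assumptions.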
Graphical models for inference under outcome-dependent sampling
 Stat. Sci. 2010;25:368–87
Abstract
Cited by 3 (0 self)
We consider situations where data have been collected such that the sampling depends on the outcome of interest and possibly on further covariates, as for instance in case-control studies. Graphical models represent assumptions about the conditional independencies among the variables. By including a node for the sampling indicator, assumptions about the sampling process can be made explicit. We demonstrate how to read off such graphs whether consistent estimation of the association between exposure and outcome is possible. Moreover, we give sufficient graphical conditions for testing and estimating the causal effect of exposure on outcome. The practical use is illustrated with a number of examples.
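A small simulation makes the sampling-indicator idea concrete. The data-generating probabilities below are invented for illustration. Because the sampling indicator S depends only on the outcome Y (a case-control design), the marginal prevalence of Y is badly distorted in the sampled data, but the exposure–outcome odds ratio is preserved — the classic example of what can and cannot be consistently estimated under outcome-dependent sampling.

```python
import random

random.seed(1)
N = 200_000

# Hypothetical data-generating process: exposure X raises outcome risk.
data = []
for _ in range(N):
    x = 1 if random.random() < 0.3 else 0
    p_y = 0.15 if x else 0.05
    y = 1 if random.random() < p_y else 0
    # Outcome-dependent sampling indicator S: keep every case,
    # but only 10% of controls.
    s = 1 if y == 1 or random.random() < 0.1 else 0
    data.append((x, y, s))

def odds_ratio(rows):
    # 2x2 cross-classification of exposure and outcome.
    n = {(x, y): 0 for x in (0, 1) for y in (0, 1)}
    for x, y in rows:
        n[(x, y)] += 1
    return (n[(1, 1)] * n[(0, 0)]) / (n[(1, 0)] * n[(0, 1)])

full = [(x, y) for x, y, s in data]
sampled = [(x, y) for x, y, s in data if s == 1]

or_full = odds_ratio(full)          # association in the full population
or_sampled = odds_ratio(sampled)    # association after case-control sampling
prev_full = sum(y for _, y in full) / len(full)
prev_sampled = sum(y for _, y in sampled) / len(sampled)
```

The odds ratio survives because S is conditionally independent of X given Y — exactly the kind of statement one reads off a graph that includes the sampling node.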
Linking Granger Causality and the Pearl Causal Model with Settable Systems
Abstract
Cited by 1 (0 self)
The causal notions embodied in the concept of Granger causality have been argued to belong to a different category than those of Judea Pearl’s Causal Model, and so far their relation has remained obscure. Here, we demonstrate that these concepts are in fact closely linked by showing how each relates to straightforward notions of direct causality embodied in settable systems, an extension and refinement of the Pearl Causal Model designed to accommodate optimization, equilibrium, and learning. We then provide straightforward practical methods to test for direct causality using tests for Granger causality.
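The practical test the abstract refers to is, at its simplest, a Granger causality F-test: regress y on its own lag with and without a lag of x, and test whether the extra lag improves the fit. The one-lag setup, coefficients, and series lengths below are illustrative assumptions, not the paper's procedure.

```python
import random

random.seed(2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols_rss(X, y):
    """Residual sum of squares of OLS via the normal equations."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

def granger_f(x, y):
    """F-statistic for whether lagged x helps predict y beyond lagged y."""
    restricted = [[1.0, y[t - 1]] for t in range(1, len(y))]
    unrestricted = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    target = y[1:]
    rss_r = ols_rss(restricted, target)
    rss_u = ols_rss(unrestricted, target)
    n = len(target)
    return (rss_r - rss_u) / (rss_u / (n - 3))

# Simulated pair of series in which x drives y with one lag (assumed setup).
x = [random.gauss(0, 1)]
y = [random.gauss(0, 1)]
for _ in range(499):
    nx = 0.5 * x[-1] + random.gauss(0, 1)
    ny = 0.5 * y[-1] + 0.8 * x[-1] + random.gauss(0, 1)
    x.append(nx)
    y.append(ny)

f_x_to_y = granger_f(x, y)  # large: x Granger-causes y by construction
f_y_to_x = granger_f(y, x)  # modest: y does not drive x
```

In practice one would use a packaged implementation with multiple lags; this sketch only shows the restricted-versus-unrestricted comparison that the test rests on.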
Causal learning without DAGs
 JMLR Workshop and Conference Proceedings 6:177–190, NIPS 2008 Workshop on Causality
Abstract
Causal learning methods are often evaluated in terms of their ability to discover a true underlying directed acyclic graph (DAG) structure. However, in general the true structure is unknown and may not be a DAG structure. We therefore consider evaluating causal learning methods in terms of predicting the effects of interventions on unseen test data. Given this task, we show that there exist a variety of approaches to modeling causality, generalizing DAG-based methods. Our experiments on synthetic and biological data indicate that some non-DAG models perform as well as or better than DAG-based methods at causal prediction tasks.
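The evaluation criterion proposed in this abstract — score a model by how well it predicts outcomes on interventional test data — can be sketched on a toy confounded system. The structural equations and coefficients below are invented for illustration: a hidden confounder U makes the observational regression slope (about 2.0) differ from the causal coefficient (1.0), and only the causal coefficient predicts well once X is set by intervention.

```python
import random

random.seed(3)

def draw(n, interventional=False):
    """Hypothetical ground truth: hidden confounder U drives both X and Y;
    the causal coefficient of X on Y is 1.0. Under do(X), X is drawn
    independently of U."""
    rows = []
    for _ in range(n):
        u = random.gauss(0, 1)
        x = random.gauss(0, 1) if interventional else u + random.gauss(0, 1)
        y = 1.0 * x + 2.0 * u + random.gauss(0, 0.1)
        rows.append((x, y))
    return rows

train = draw(20_000)                      # observational data
test = draw(20_000, interventional=True)  # unseen data from do(X = random)

# Naive predictor: OLS slope of Y on X fitted observationally.
mx = sum(x for x, _ in train) / len(train)
my = sum(y for _, y in train) / len(train)
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))

def mse(coef, rows):
    return sum((coef * x - y) ** 2 for x, y in rows) / len(rows)

mse_naive = mse(slope, test)   # uses the confounded slope (near 2.0)
mse_causal = mse(1.0, test)    # uses the true causal coefficient
```

The ranking of models by interventional test error is the task-based evaluation the abstract advocates, and nothing in it requires the fitted model to be a DAG.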
Logic, Reasoning under Uncertainty and Causality
, 2010
Abstract
A simple framework for reasoning under uncertainty and intervention is introduced. This is achieved in three steps. First, logic is restated in set-theoretic terms to obtain a framework for reasoning under certainty. Second, this framework is extended to model reasoning under uncertainty. Finally, causal spaces are introduced and it is shown how they provide enough information to model knowledge containing causal information about the world.
1 Bayesian Probability Theory
It is advantageous to endow plausibilities with an explanatory framework that has a logically intuitive appeal. Such a framework is Bayesian probability theory. Simply put, Bayesian probability theory is a framework that extends logic for reasoning under uncertainty.
1.1 Reasoning under Certainty
Logic is the most important framework of reasoning (under certainty). Here, it is rephrased in set-theoretic terms. As will be seen, this facilitates its extension to a framework for reasoning under uncertainty. Let Ω be a set of outcomes, which is assumed to be finite for simplicity. A subset A ⊂ Ω is an event. Let c, ∪ and ∩ be the set operations of complement, union and intersection, respectively. Let F be an algebra, i.e. a set of events obeying the axioms …
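The set-theoretic restatement of logic in this excerpt is easy to make concrete. The small outcome space below is an invented example: events are subsets of Ω, logical connectives become the set operations, implication becomes containment, and the power set is the largest algebra F — containing Ω and closed under complement and finite union.

```python
from itertools import combinations

# Hypothetical finite outcome space; events are subsets of Omega.
Omega = frozenset({1, 2, 3, 4})

def complement(A):
    return Omega - A

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# The power set is the largest algebra on Omega.
F = set(powerset(Omega))

def is_algebra(F):
    """Check the algebra axioms: contains Omega, closed under
    complement and under (finite, here pairwise) union."""
    if Omega not in F:
        return False
    return (all(complement(A) in F for A in F)
            and all(A | B in F for A in F for B in F))

# Logical implication "A implies B" becomes set containment A <= B.
A = frozenset({1, 2})
B = frozenset({1, 2, 3})
```

Extending this to reasoning under uncertainty then amounts to assigning probabilities to the members of F, which is the step the excerpt takes next.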
Principal Stratum Analysis
, 1391
Abstract
We extend Pearl's criticisms of principal stratification analysis as a method for interpreting and adjusting for intermediate variables in a causal analysis. We argue that this can be meaningful only in those rare cases that involve strong functional dependence, and even then it may not be appropriate.