Results 1–10 of 25
Causal Inference from Graphical Models, 2001
Cited by 59 (4 self)
Abstract:
The introduction of Bayesian networks (Pearl 1986b) and associated local computation algorithms (Lauritzen and Spiegelhalter 1988, Shenoy and Shafer 1990, Jensen, Lauritzen and Olesen 1990) has initiated renewed interest in understanding causal concepts in connection with the modelling of complex stochastic systems. It has become clear that graphical models, in particular those based upon directed acyclic graphs, have natural causal interpretations and thus form the basis of a language in which causal concepts can be discussed and analysed in precise terms. As a consequence there has been an explosion of writings, not primarily within the mainstream statistical literature, concerned with exploiting this language to clarify and extend causal concepts. Among these we mention in particular the books by Spirtes, Glymour and Scheines (1993), Shafer (1996), and Pearl (2000), as well as the collection of papers in Glymour and Cooper (1999). Very briefly, but fundamentally, ...
Bounds on Treatment Effects from Studies with Imperfect Compliance
Journal of the American Statistical Association, 1997
Cited by 56 (13 self)
Abstract:
This paper establishes nonparametric formulas that can be used to bound the average treatment effect in experimental studies in which treatment assignment is random but subject compliance is imperfect. The bounds provided are the tightest possible, given the distribution of assignments, treatments, and responses. The formulas show that even with high rates of noncompliance, experimental data can yield useful and sometimes accurate information on the average effect of a treatment on the population.
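To illustrate the bounding idea the abstract describes, here is a minimal sketch using the simpler no-assumption (Manski-style) bounds, which are wider than the tightest bounds derived in the paper. All numbers are hypothetical.

```python
# No-assumption bounds on the average treatment effect (ATE) for a binary
# outcome Y and binary treatment X. The unobserved counterfactual outcomes
# are simply bounded by 0 and 1, so the interval always has width 1.

def no_assumption_ate_bounds(p_y1_x1, p_x1, p_y1_x0):
    """Bounds on ATE = E[Y(1)] - E[Y(0)] from observables alone.

    p_y1_x1 = P(Y=1 | X=1), p_x1 = P(X=1), p_y1_x0 = P(Y=1 | X=0).
    """
    p_x0 = 1.0 - p_x1
    # E[Y(1)]: assume untreated units would all fail (lower) / all succeed (upper).
    ey1_lo = p_y1_x1 * p_x1            # + 0 * p_x0
    ey1_hi = p_y1_x1 * p_x1 + p_x0     # + 1 * p_x0
    ey0_lo = p_y1_x0 * p_x0
    ey0_hi = p_y1_x0 * p_x0 + p_x1
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

lo, hi = no_assumption_ate_bounds(p_y1_x1=0.7, p_x1=0.5, p_y1_x0=0.4)
print(lo, hi)  # -0.35, 0.65: an interval of width exactly 1
```

The paper's contribution is precisely that the assignment variable and compliance data let one shrink this width-1 interval to the tightest bounds consistent with the observed joint distribution.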
Counterfactual Probabilities: Computational Methods, Bounds and Applications
Uncertainty in Artificial Intelligence 10, 1994
Cited by 51 (19 self)
Abstract:
Evaluation of counterfactual queries (e.g., "If A were true, would C have been true?") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can be computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation.
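The standard way to evaluate such a query in a fully specified structural model is the three-step abduction/action/prediction recipe. The toy model and numbers below are hypothetical, not from the paper; they merely show the mechanics on a case where the answer can be checked by hand.

```python
from itertools import product

# Toy structural model (hypothetical): X = U1, Y = X or U2, with independent
# exogenous priors P(U1=1)=0.5 and P(U2=1)=0.3. Query: "had X been 0, would
# Y have been 1?" given the observation X=1, Y=1.

P_U = {(u1, u2): 0.5 * (0.3 if u2 else 0.7)
       for u1, u2 in product((0, 1), repeat=2)}

def model(u1, u2, do_x=None):
    x = u1 if do_x is None else do_x
    y = x or u2
    return x, y

# Step 1 (abduction): posterior over (U1, U2) given the evidence X=1, Y=1.
posterior = {u: p for u, p in P_U.items() if model(*u) == (1, 1)}
z = sum(posterior.values())
posterior = {u: p / z for u, p in posterior.items()}

# Steps 2-3 (action + prediction): force X=0, propagate, read off Y.
p_cf = sum(p for u, p in posterior.items() if model(*u, do_x=0)[1] == 1)
print(p_cf)  # P(Y_{X=0}=1 | X=1, Y=1) = 0.3 here
```

Observing X=1, Y=1 pins down U1=1 but carries no information about U2, so the counterfactual probability equals the prior P(U2=1)=0.3.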
Causal inference with general treatment regimes: Generalizing the propensity score
Journal of the American Statistical Association, 2004
Cited by 31 (7 self)
Abstract:
In this article we develop the theoretical properties of the propensity function, which is a generalization of the propensity score of Rosenbaum and Rubin. Methods based on the propensity score have long been used for causal inference in observational studies; they are easy to use and can effectively reduce the bias caused by nonrandom treatment assignment. Although treatment regimes need not be binary in practice, the propensity score methods are generally confined to binary treatment scenarios. Two possible exceptions have been suggested for ordinal and categorical treatments. In this article we develop theory and methods that encompass all of these techniques and widen their applicability by allowing for arbitrary treatment regimes. We illustrate our propensity function methods by applying them to two datasets; we estimate the effect of smoking on medical expenditure and the effect of schooling on wages. We also conduct simulation studies to investigate the performance of our methods.
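The paper's point is the generalization beyond binary treatments; as background, here is a minimal sketch of propensity-based weighting in the binary case, on simulated data with invented numbers, showing how weighting by the (here known) propensity score removes confounding bias that the naive treated-vs-untreated comparison suffers.

```python
import random

random.seed(0)

# Simulated observational data (hypothetical numbers): confounder U raises
# both the treatment probability and the outcome, so the naive difference
# overstates the true effect of 0.3.
n = 200_000
rows = []
for _ in range(n):
    u = random.random() < 0.5
    e = 0.8 if u else 0.2                      # true propensity P(T=1 | U)
    t = random.random() < e
    y = random.random() < 0.2 + 0.3 * t + 0.4 * u
    rows.append((u, t, y, e))

naive = (sum(y for _, t, y, _ in rows if t) / sum(t for _, t, _, _ in rows)
         - sum(y for _, t, y, _ in rows if not t) / sum(not t for _, t, _, _ in rows))

# Inverse-propensity weighting with the known propensity score.
ey1 = sum(y / e for _, t, y, e in rows if t) / n
ey0 = sum(y / (1 - e) for _, t, y, e in rows if not t) / n
print(naive, ey1 - ey0)  # naive is badly biased; weighted is close to 0.3
```

In practice the propensity score must itself be estimated (e.g., by logistic regression); the paper's propensity function plays the analogous role when the treatment is not binary.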
The Problem of Regions, 1998
Cited by 23 (2 self)
Abstract:
In the problem of regions we wish to know which one of a discrete set of possibilities applies to a continuous parameter vector. This problem arises in the following way: we compute a descriptive statistic from a set of data and notice an interesting feature. We wish to assign a confidence level to that feature. For example, we compute a density estimate and notice that the estimate is bimodal. What confidence do we assign to bimodality? A natural way to measure this confidence is via the bootstrap: we compute our descriptive statistic on a large number of bootstrap samples and record the proportion of times that the feature appears. This proportion seems like a plausible measure of confidence for the feature. We study the construction of such confidence values and examine to what extent they approximate frequentist p-values. We derive more accurate confidence values using both frequentist and objective Bayesian approaches. The methods are illustrated with a number of examples, including ...
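The bootstrap confidence value the abstract describes can be sketched in a few lines. The data and the feature here are deliberately trivial (hypothetical data; "feature" = the sample mean is positive), far simpler than the bimodality example, but the recipe is the same: resample, recompute, record the proportion.

```python
import random

random.seed(1)

# Hypothetical data centered near 1.0.
data = [random.gauss(1.0, 1.0) for _ in range(100)]

def feature(sample):
    """The 'interesting feature' here: the sample mean is positive."""
    return sum(sample) / len(sample) > 0

B = 2000
hits = 0
for _ in range(B):
    boot = [random.choice(data) for _ in range(len(data))]
    hits += feature(boot)
confidence = hits / B
print(confidence)  # proportion of bootstrap samples exhibiting the feature
```

The paper's question is precisely how well this proportion behaves as a frequentist confidence statement, and how to correct it when it does not.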
A Clinician's Tool for Analyzing Noncompliance, 1996
Cited by 20 (11 self)
Abstract:
We describe a computer program to assist a clinician with assessing the efficacy of treatments in experimental studies for which treatment assignment is random but subject compliance is imperfect. The major difficulty in such studies is that treatment efficacy is not "identifiable", that is, it cannot be estimated from the data, even when the number of subjects is infinite, unless additional knowledge is provided. Our system combines Bayesian learning with Gibbs sampling using two inputs: (1) the investigator's prior probabilities of the relative sizes of subpopulations and (2) the observed data from the experiment. The system outputs a histogram depicting the posterior distribution of the average treatment effect, that is, the probability that the average outcome (e.g., survival) would attain a given level, had the treatment been taken uniformly by the entire population. This paper describes the theoretical basis for the proposed approach and presents experimental results on ...
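As background on the Gibbs-sampling machinery the system relies on (this is a generic illustration, not the paper's model): a Gibbs sampler draws from a joint distribution by cycling through the full conditionals. The bivariate-normal toy below is the standard textbook case, where the conditionals are known in closed form.

```python
import random

random.seed(2)

# Draw from a bivariate normal with correlation rho by alternating the two
# conditionals x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
rho = 0.8
sd = (1 - rho ** 2) ** 0.5
x = y = 0.0
xs, ys = [], []
for i in range(30_000):
    x = random.gauss(rho * y, sd)
    y = random.gauss(rho * x, sd)
    if i >= 1_000:                 # discard burn-in
        xs.append(x)
        ys.append(y)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
vx = sum((a - mx) ** 2 for a in xs) / n
vy = sum((b - my) ** 2 for b in ys) / n
corr = cov / (vx * vy) ** 0.5
print(corr)  # empirical correlation, close to 0.8
```

In the paper's setting the same idea is applied to the posterior over subpopulation sizes, and the retained draws are binned into the histogram of the average treatment effect.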
Causal Inference from Indirect Experiments, 1995
Cited by 15 (4 self)
Abstract:
Indirect experiments are studies in which randomized control is replaced by randomized encouragement; that is, subjects are encouraged, rather than forced, to receive treatment programs. The purpose of this paper is to bring to the attention of experimental researchers simple mathematical results that enable us to assess, from indirect experiments, the strength with which causal influences operate among variables of interest. The results reveal that despite the laxity of the encouraging instrument, indirect experimentation can yield significant and sometimes accurate information on the impact of a program on the population as a whole, as well as on the particular individuals who participated in the program.
Keywords: causal reasoning, treatment evaluation, noncompliance, graphical models
Nonparametric Bounds on Causal Effects from Partial Compliance Data
Journal of the American Statistical Association, 1993
Cited by 14 (10 self)
Abstract:
Experimental studies in which treatment assignment is random but subject compliance is imperfect may be susceptible to bias; the actual effect of the treatment may deviate appreciably from the mean difference between treated and untreated subjects. This paper establishes universal formulas that can be used to bound the actual treatment effect in any experiment for which compliance data is available and in which the assignment influences the response only through the treatment given. Using a linear programming analysis, we present formulas that provide the tightest bounds that can be inferred on the average treatment effect, given an empirical distribution of assignments, treatments, and responses. The application of these results is demonstrated on data that relates cholesterol levels to cholestyramine treatment (Lipid Research Clinic Program, 1984).
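The linear-programming construction mentioned in the abstract can be sketched directly. Each subject has a latent type: a compliance type (which treatment they take under each assignment) and a response type (which outcome they exhibit under each treatment). Observed cell probabilities and the average treatment effect are both linear in the type probabilities, so the tightest bounds are the optima of two small LPs. The code below is a sketch using `scipy.optimize.linprog` on a synthetic observed distribution; all numbers are invented.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Latent types: compliance c = (X(z=0), X(z=1)), response r = (Y(x=0), Y(x=1)).
TYPES = [(c, r) for c in itertools.product((0, 1), repeat=2)
                for r in itertools.product((0, 1), repeat=2)]

def ate_bounds(p_obs):
    """p_obs[z][x, y] = P(X=x, Y=y | Z=z); returns (lower, upper) on the ATE."""
    a_eq, b_eq = [], []
    for z, x, y in itertools.product((0, 1), repeat=3):
        a_eq.append([float(c[z] == x and r[x] == y) for c, r in TYPES])
        b_eq.append(p_obs[z][x, y])
    ate = np.array([r[1] - r[0] for _, r in TYPES], dtype=float)
    lo = linprog(ate, A_eq=a_eq, b_eq=b_eq, bounds=(0, 1)).fun
    hi = -linprog(-ate, A_eq=a_eq, b_eq=b_eq, bounds=(0, 1)).fun
    return lo, hi

# Synthetic observed distribution from a known latent mixture (60% compliers,
# 20% never-takers, 20% always-takers; half the population has Y(0)=0, Y(1)=1,
# the other half Y identically 0; true ATE = 0.5).
compliance = {(0, 1): 0.6, (0, 0): 0.2, (1, 1): 0.2}
response = {(0, 1): 0.5, (0, 0): 0.5}
p_obs = {z: {(x, y): 0.0 for x in (0, 1) for y in (0, 1)} for z in (0, 1)}
for (c, pc), (r, pr) in itertools.product(compliance.items(), response.items()):
    for z in (0, 1):
        p_obs[z][c[z], r[c[z]]] += pc * pr

lo, hi = ate_bounds(p_obs)
print(lo, hi)  # an interval containing the true ATE of 0.5
```

The exclusion restriction ("assignment influences the response only through the treatment given") is what makes the eight observed cells informative about the sixteen latent types; without it, the bounds collapse back to the width-1 no-assumption interval.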
Aspects of Graphical Models Connected with Causality, 1993
Cited by 13 (10 self)
Abstract:
This paper demonstrates the use of graphs as a mathematical tool for expressing independencies, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from nonexperimental data...
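The "simple transformation" the abstract refers to is the truncated factorization: intervening on a node deletes that node's conditional factor while leaving the others intact. A worked example on a hypothetical three-node DAG (U → X, U → Y, X → Y, with made-up probability tables) shows how conditioning and intervening give different answers.

```python
from itertools import product

# Hypothetical tables for the DAG U -> X, U -> Y, X -> Y.
p_u = {0: 0.5, 1: 0.5}
p_x_given_u = {0: 0.1, 1: 0.9}                       # P(X=1 | U=u)
p_y_given_xu = {(x, u): 0.1 + 0.5 * x + 0.3 * u      # P(Y=1 | X=x, U=u)
                for x, u in product((0, 1), repeat=2)}

# Observational: conditioning on X=1 reweights U by P(U | X=1).
joint_xu = {u: p_u[u] * p_x_given_u[u] for u in (0, 1)}
z = sum(joint_xu.values())
p_y_obs = sum(p_y_given_xu[1, u] * joint_xu[u] / z for u in (0, 1))

# Interventional: do(X=1) deletes the factor P(x | u), so U keeps its
# marginal distribution: P(y | do(x)) = sum_u P(y | x, u) P(u).
p_y_do = sum(p_y_given_xu[1, u] * p_u[u] for u in (0, 1))
print(p_y_obs, p_y_do)  # 0.87 vs 0.75: seeing X=1 and setting X=1 differ
```

The gap between the two numbers is exactly the confounding contributed by U, which intervention removes and conditioning does not.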
private information, and the economic evaluation of randomized experiments,” Journal of Political Economy
Cited by 8 (0 self)
Abstract:
Randomized experiments (REs) are viewed as the “gold standard” for treatment evaluation, but many REs are plagued by attrition or noncompliance, even among subjects receiving the more effective treatment. This paper constructs an economic model of decision-making in which individuals make utility-maximizing choices, providing a rich framework for evaluating REs. We estimate the subject’s utility associated with the receipt of alternative treatments, as revealed by dropout or compliance behavior, to evaluate treatment effectiveness. Utility is a function of both the “publicly observed” outcomes that are typically the focus of evaluation studies and treatment side effects that are the private information of the subject. Participants enter the RE uncertain of treatment effectiveness, and often of the treatment received, and update their prior beliefs over the course of the experiment when deciding whether to drop out. We use the framework to analyze an influential AIDS clinical trial, ACTG 175, which has been used to tout the benefits of combination therapies for AIDS over the use of AZT alone. However, our analysis indicates that for many subjects, AZT yields the highest level of utility, despite having the smallest impact on the publicly observed outcome of the study, the patient’s CD4 count. Significant and rapid learning is observed over the course of the ...
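The belief updating the abstract describes can be sketched, in a heavily simplified form that is not the paper's structural model, as a conjugate beta-binomial update: a participant holds a Beta prior over the probability that the treatment works and revises it after each observed outcome. All numbers below are hypothetical.

```python
# Beta-binomial belief updating sketch. Starting from a uniform Beta(1, 1)
# prior on effectiveness, each observation moves the posterior mean toward
# the empirical success rate.
a, b = 1.0, 1.0                        # Beta(1, 1) prior
outcomes = [1, 1, 0, 1, 1, 1, 0, 1]   # 1 = treatment appeared to work

beliefs = []
for o in outcomes:
    a += o
    b += 1 - o
    beliefs.append(a / (a + b))        # posterior mean after each observation

print(beliefs[-1])  # 6 successes in 8 trials -> (1+6)/(2+8) = 0.7
```

In the paper's setting this kind of learning interacts with dropout: a participant whose beliefs about their assigned treatment deteriorate is more likely to leave the trial, which is exactly the selection the model exploits.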