Results 1–10 of 65
Bounds on Treatment Effects from Studies with Imperfect Compliance
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 109 (16 self)
This paper establishes nonparametric formulas that can be used to bound the average treatment effect in experimental studies in which treatment assignment is random but subject compliance is imperfect. The bounds provided are the tightest possible, given the distribution of assignments, treatments, and responses. The formulas show that even with high rates of noncompliance, experimental data can yield useful and sometimes accurate information on the average effect of a treatment on the population.
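The kind of bound this abstract describes can be illustrated with the simpler "natural" instrumental-variable bounds, a looser precursor of the tightest bounds derived in the paper. The sketch below is not the paper's formulas; it uses made-up probabilities for a binary assignment Z, received treatment X, and outcome Y:

```python
# Hedged sketch: the "natural" instrumental-variable bounds on the average
# treatment effect (ATE) under imperfect compliance. These are simpler and
# generally looser than the tightest bounds derived in the paper; the
# probabilities passed in below are invented for illustration.

def natural_ate_bounds(p_y1x1_z1, p_y0x0_z0, p_y0x1_z1, p_y1x0_z0):
    """Bounds on ATE = P(Y=1 | do(X=1)) - P(Y=1 | do(X=0)).

    Arguments are conditional joint probabilities, e.g.
    p_y1x1_z1 = P(Y=1, X=1 | Z=1).
    """
    lower = p_y1x1_z1 + p_y0x0_z0 - 1
    upper = 1 - p_y0x1_z1 - p_y1x0_z0
    return lower, upper

# Example with roughly 80% compliance in the encouraged arm
# (all numbers are illustrative):
lo, hi = natural_ate_bounds(p_y1x1_z1=0.56, p_y0x0_z0=0.60,
                            p_y0x1_z1=0.24, p_y1x0_z0=0.08)
print("ATE bounds:", lo, hi)
```

Even with substantial noncompliance, the interval can exclude zero, which is the paper's point that such data remain informative.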
Causal inference with general treatment regimes: Generalizing the propensity score
 Journal of the American Statistical Association
, 2004
Abstract

Cited by 87 (9 self)
In this article we develop the theoretical properties of the propensity function, which is a generalization of the propensity score of Rosenbaum and Rubin. Methods based on the propensity score have long been used for causal inference in observational studies; they are easy to use and can effectively reduce the bias caused by nonrandom treatment assignment. Although treatment regimes need not be binary in practice, the propensity score methods are generally confined to binary treatment scenarios. Two possible exceptions have been suggested for ordinal and categorical treatments. In this article we develop theory and methods that encompass all of these techniques and widen their applicability by allowing for arbitrary treatment regimes. We illustrate our propensity function methods by applying them to two datasets; we estimate the effect of smoking on medical expenditure and the effect of schooling on wages. We also conduct simulation studies to investigate the performance of our methods.
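One standard use of the propensity score that this line of work generalizes is inverse-propensity weighting. The sketch below uses a simulated dataset in which the true propensity is known, so no model fitting is needed; the variable names and data-generating process are invented for illustration:

```python
# Hedged sketch of inverse-propensity weighting (IPW) for a binary
# treatment. The true propensity e(x) is known by construction here;
# in practice it would be estimated, e.g. by logistic regression.
import random

random.seed(0)

def simulate(n=50_000):
    data = []
    for _ in range(n):
        x = random.random()                # confounder in [0, 1)
        e = 0.25 + 0.5 * x                 # true propensity P(T=1 | x)
        t = 1 if random.random() < e else 0
        y = 2.0 * t + 3.0 * x + random.gauss(0, 1)   # true ATE = 2
        data.append((x, t, y, e))
    return data

def ipw_ate(data):
    # Horvitz-Thompson style estimate: E[TY/e] - E[(1-T)Y/(1-e)]
    n = len(data)
    treated = sum(t * y / e for _, t, y, e in data) / n
    control = sum((1 - t) * y / (1 - e) for _, t, y, e in data) / n
    return treated - control

data = simulate()
naive = (sum(y for _, t, y, _ in data if t) / sum(t for _, t, _, _ in data)
         - sum(y for _, t, y, _ in data if not t) / sum(1 - t for _, t, _, _ in data))
print("naive difference in means:", naive)   # biased up by confounding
print("IPW estimate of ATE:     ", ipw_ate(data))
```

The naive difference in means is inflated because the confounder raises both the treatment probability and the outcome, while the weighted estimate recovers the effect; the propensity-function machinery in the article extends this idea beyond binary treatments.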
Causal Inference from Graphical Models
, 2001
Abstract

Cited by 78 (6 self)
Introduction The introduction of Bayesian networks (Pearl 1986b) and associated local computation algorithms (Lauritzen and Spiegelhalter 1988, Shenoy and Shafer 1990, Jensen, Lauritzen and Olesen 1990) has initiated a renewed interest in understanding causal concepts in connection with modelling complex stochastic systems. It has become clear that graphical models, in particular those based upon directed acyclic graphs, have natural causal interpretations and thus form a base for a language in which causal concepts can be discussed and analysed in precise terms. As a consequence there has been an explosion of writings, not primarily within mainstream statistical literature, concerned with the exploitation of this language to clarify and extend causal concepts. Among these we mention in particular books by Spirtes, Glymour and Scheines (1993), Shafer (1996), and Pearl (2000) as well as the collection of papers in Glymour and Cooper (1999). Very briefly, but fundamentally, ...
Counterfactual Probabilities: Computational Methods, Bounds and Applications
 UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
, 1994
Abstract

Cited by 63 (23 self)
Evaluation of counterfactual queries (e.g., "If A were true, would C have been true?") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can be computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation.
Semiparametric Bayes analysis of longitudinal data treatment models
 Journal of Econometrics
, 2002
The Problem of Regions
, 1998
Abstract

Cited by 35 (2 self)
In the problem of regions we wish to know which one of a discrete set of possibilities applies to a continuous parameter vector. This problem arises in the following way: we compute a descriptive statistic from a set of data and notice an interesting feature. We wish to assign a confidence level to that feature. For example, we compute a density estimate and notice that the estimate is bimodal. What confidence do we assign to bimodality? A natural way to measure this confidence is via the bootstrap: we compute our descriptive statistic on a large number of bootstrap samples and record the proportion of times that the feature appears. This proportion seems like a plausible measure of confidence for the feature. We study the construction of such confidence values and examine to what extent they approximate frequentist p-values. We derive more accurate confidence values using both frequentist and objective Bayesian approaches. The methods are illustrated with a number of examples includ...
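The bootstrap confidence value described in the abstract can be sketched in a few lines. Here the "feature" is deliberately simple, whether the sample mean is positive, and the data are synthetic; the same construction applies to features such as bimodality of a density estimate:

```python
# Hedged sketch of a bootstrap confidence value for a feature of the data.
# The feature and the synthetic data below are invented for illustration.
import random

random.seed(1)
data = [random.gauss(0.3, 1.0) for _ in range(100)]   # true mean 0.3

def feature(sample):
    return sum(sample) / len(sample) > 0              # is the mean positive?

def bootstrap_confidence(data, feature, n_boot=2000):
    hits = 0
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in range(len(data))]
        hits += feature(resample)
    return hits / n_boot

conf = bootstrap_confidence(data, feature)
print("bootstrap confidence in the feature:", conf)
```

The paper's contribution is to analyze how well such proportions behave as confidence statements and to derive corrected versions.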
Statistical power in randomized intervention studies with noncompliance
 Psychological Methods
, 2002
Abstract

Cited by 27 (5 self)
This study examined various factors that affect statistical power in randomized intervention studies with noncompliance. On the basis of Monte Carlo simulations, this study demonstrates how statistical power changes depending on compliance rate, study design, outcome distributions, and covariate information. It also examines how these factors influence power in different methods of estimating intervention effects. Intent-to-treat analysis and complier average causal effect estimation are compared as 2 alternative ways of estimating intervention effects under noncompliance. The results of this investigation provide practical implications in designing and evaluating intervention studies taking into account noncompliance. In randomized field experiments, noncompliance (nonadherence) can be a major threat to obtaining statistical power to detect intervention effects. Noncompliance occurs when study participants do not follow the randomized assignment, and it has several forms (Angrist, Imbens, & Rubin, 1996). The most ...
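The two estimators this study compares can be contrasted on simulated data. The sketch below assumes one-sided noncompliance (controls cannot obtain the treatment) and estimates the complier average causal effect by the standard instrumental-variable (Wald) ratio, ITT divided by the difference in treatment take-up; all numbers are illustrative:

```python
# Hedged sketch: intent-to-treat (ITT) vs. complier average causal
# effect (CACE) under one-sided noncompliance. The data-generating
# process is invented: 60% of subjects are compliers, and the treatment
# raises the outcome by 1.0 for compliers only.
import random

random.seed(2)

n = 40_000
records = []   # (z, d, y): assignment, treatment received, outcome
for _ in range(n):
    z = random.randint(0, 1)
    complier = random.random() < 0.6        # 60% compliance rate
    d = z if complier else 0                # never-takers ignore assignment
    effect = 1.0 if complier else 0.0       # effect of 1.0 among compliers
    y = effect * d + random.gauss(0, 1)
    records.append((z, d, y))

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

itt = mean(y for z, _, y in records if z) - mean(y for z, _, y in records if not z)
take_up = mean(d for z, d, _ in records if z) - mean(d for z, d, _ in records if not z)
cace = itt / take_up            # Wald / IV estimator

print("ITT :", itt)             # diluted by noncompliance, near 0.6
print("CACE:", cace)            # effect among compliers, near 1.0
```

The dilution of the ITT estimate by the compliance rate is exactly the power loss the study quantifies by simulation.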
Causal Inference from Indirect Experiments
, 1995
Abstract

Cited by 24 (5 self)
Indirect experiments are studies in which randomized control is replaced by randomized encouragement, that is, subjects are encouraged, rather than forced to receive treatment programs. The purpose of this paper is to bring to the attention of experimental researchers simple mathematical results that enable us to assess, from indirect experiments, the strength with which causal influences operate among variables of interest. The results reveal that despite the laxity of the encouraging instrument, indirect experimentation can yield significant and sometimes accurate information on the impact of a program on the population as a whole, as well as on the particular individuals who participated in the program.
Keywords: Causal reasoning, treatment evaluation, noncompliance, graphical models
1 Introduction
Standard experimental studies in the biological, medical, and behavioral sciences invariably invoke the instrument of randomized control, that is, subjects are assigned at random to va...
A Clinician's Tool for Analyzing Noncompliance
, 1996
Abstract

Cited by 24 (13 self)
We describe a computer program to assist a clinician with assessing the efficacy of treatments in experimental studies for which treatment assignment is random but subject compliance is imperfect. The major difficulty in such studies is that treatment efficacy is not "identifiable", that is, it cannot be estimated from the data, even when the number of subjects is infinite, unless additional knowledge is provided. Our system combines Bayesian learning with Gibbs sampling using two inputs: (1) the investigator's prior probabilities of the relative sizes of subpopulations and (2) the observed data from the experiment. The system outputs a histogram depicting the posterior distribution of the average treatment effect, that is, the probability that the average outcome (e.g., survival) would attain a given level, had the treatment been taken uniformly by the entire population. This paper describes the theoretical basis for the proposed approach and presents experimental results on ...
Statistical considerations in the intent-to-treat principle
 Controlled Clinical Trials
, 2000
Abstract

Cited by 23 (0 self)
ABSTRACT: This paper describes some of the statistical considerations in the intent-to-treat design and analysis of clinical trials. The pivotal property of a clinical trial is the assignment of treatments to patients at random. Randomization alone, however, is not sufficient to provide an unbiased comparison of therapies. An additional requirement is that the set of patients contributing to an analysis provides an unbiased assessment of treatment effects, or that any missing data are ignorable. A sufficient condition to provide an unbiased comparison is to obtain complete data on all randomized subjects. This can be achieved by an intent-to-treat design wherein all patients are followed until death or the end of the trial, or until the outcome event is reached in a time-to-event trial, irrespective of whether the patient is still receiving or complying with the assigned treatment. The properties of this strategy are contrasted with those of an efficacy subset analysis in which patients and observable patient data are excluded from the analysis on the basis of information obtained post-randomization. I describe the potential bias that can be introduced by such post-randomization exclusions and the pursuant effects on type ...