Results 1–10 of 10
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
Statistical Science, 13, 163–185, 1998
Cited by 146 (4 self)
Replication and meta-analysis in parapsychology (with discussion)
Statistical Science, 1991
Cited by 16 (1 self)
Abstract
Abstract. Parapsychology, the laboratory study of psychic phenomena, has had its history interwoven with that of statistics. Many of the controversies in parapsychology have focused on statistical issues, and statistical models have played an integral role in the experimental work. Recently, parapsychologists have been using meta-analysis as a tool for synthesizing large bodies of work. This paper presents an overview of the use of statistics in parapsychology and offers a summary of the meta-analyses that have been conducted. It begins with some anecdotal information about the involvement of statistics and statisticians with the early history of parapsychology. Next, it is argued that most nonstatisticians do not appreciate the connection between power and "successful" replication of experimental effects. Returning to parapsychology, a particular experimental regime is examined by summarizing an extended debate over the interpretation of the results. A new set of experiments designed to resolve the debate is then reviewed. Finally, ...
Statistical issues in the design, analysis and interpretation of animal carcinogenicity studies
Environmental Health Perspectives, 58, 385–392, 1984
Cited by 16 (2 self)
Abstract
Statistical issues in the design, analysis and interpretation of animal carcinogenicity studies are discussed. In the area of experimental design, issues that must be considered include randomization of animals, sample size considerations, dose selection and allocation of animals to experimental groups, and control of potentially confounding factors. In the analysis of tumor incidence data, survival differences among groups should be taken into account. It is important to try to distinguish between tumors that contribute to the death of the animal and "incidental" tumors discovered at autopsy in an animal dying of an unrelated cause. Life table analyses (appropriate for lethal tumors) and incidental tumor tests (appropriate for nonfatal tumors) are described, and the utilization of these procedures by the National Toxicology Program is discussed. Although past interpretations of carcinogenicity data have tended to focus on pairwise comparisons in general and high-dose effects in particular, the importance of trend tests should not be overlooked, since these procedures are more sensitive than pairwise comparisons in detecting carcinogenic effects. No rigid statistical "decision rule" should be employed in the interpretation of carcinogenicity data. Although the statistical significance of an observed tumor increase is perhaps the single most important piece of evidence used in the evaluation process, a number of biological factors must also be taken into account. The use of historical control data, the false-positive issue and the interpretation of negative trends are also discussed.
Dual controls, p-value plots, and the multiple testing issue in carcinogenicity studies
Environmental Health Perspectives, 1989
Cited by 1 (0 self)
Abstract
The interpretation of statistically significant findings in a carcinogenicity study is difficult, in part because of the large number of statistical tests conducted. Some scientists who believe that the false positive rates in these experiments are unreasonably large often suggest that the use of multiple control groups will provide important insight into the operational false positive rates. The purpose of this paper is twofold: to present results from two carcinogenicity studies with dual control groups, and to present and illustrate a new graphical technique potentially useful in the analysis and interpretation of tumor data from carcinogenicity studies. The experimental data analyzed show that statistically significant differences between identically treated groups will occur with regular frequency. Such data, however, do not provide strong evidence of extra-binomial variation in tumor rates. The p-value plot is advocated as a graphical method that can be used to assess visually the ensemble of p-values for neoplasm data from an entire study. This technique is then illustrated using several examples. Through computer simulation, we present p-value plots generated with and without treatment effects present. On average, the plots look substantially different depending on the presence or absence of an effect. We also evaluate decision rules motivated by the p-value plots. Such rules appear to have good power to detect treatment effects (i.e., have low false negative rates) while still controlling false positive rates.
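The idea behind a p-value plot is simple: plot the ordered p-values from the many site-by-site comparisons against their ranks; in the absence of a treatment effect the points should track the diagonal of uniform quantiles. A minimal simulation of the null case, assuming an illustrative two-proportion normal-approximation test and group sizes of our choosing (not the paper's exact procedure):

```python
import math
import random

def two_proportion_pvalue(x1, n1, x2, n2):
    """Two-sided normal-approximation test for equal proportions."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(0)
n, p0 = 50, 0.2  # 50 animals per group, 20% background tumor rate
# Simulate p-values for 200 tumor-site comparisons with NO treatment effect.
pvals = sorted(
    two_proportion_pvalue(
        sum(random.random() < p0 for _ in range(n)), n,
        sum(random.random() < p0 for _ in range(n)), n)
    for _ in range(200))

# The p-value plot is (rank, ordered p-value); under the null the points
# should lie near the 45-degree line. Here we just print the smallest few.
for rank, p in enumerate(pvals[:5], 1):
    print(rank, round(p, 3))
```

With a treatment effect present, the smallest ordered p-values fall well below the diagonal, which is the visual signal the plot is designed to expose.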
Applications of Binary Segmentation to the Estimation of Quantal Response Curves and Spatial Intensity
, 2004
Abstract
This paper explores the use of binary segmentation procedures in two applications. The first application is concerned with the estimation of nonparametric quantal response curves. With Bernoulli data and an assumed monotone increasing curve, this gives rise to a change-point model where the change points are determined using a sequence of nested hypothesis tests of whether a change point exists. The second application concerns cluster identification and inference for spatial data where the shape of the clusters and the number of clusters are unknown. The procedure involves a sequence of nested hypothesis tests of a single cluster versus a pair of distinct clusters. Examples of both applications are provided. Key words: Akaike information criterion; Bioassay; Circular growth clusters.
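The nested-testing structure of binary segmentation can be illustrated with Bernoulli data. This is a minimal sketch under our own assumptions, not the paper's procedure: each segment is split at the point maximizing a log-likelihood-ratio statistic for one change point versus none, and the recursion continues only while that statistic clears an illustrative threshold.

```python
import math

def bernoulli_loglik(xs):
    """Log-likelihood of 0/1 data evaluated at its MLE success rate."""
    n, s = len(xs), sum(xs)
    if s == 0 or s == n:
        return 0.0
    p = s / n
    return s * math.log(p) + (n - s) * math.log(1 - p)

def binary_segmentation(xs, lo=0, hi=None, threshold=6.0):
    """Recursively split [lo, hi) at the point maximizing the
    log-likelihood-ratio gain for a single change point; stop when
    2 * gain falls below the (illustrative) threshold."""
    if hi is None:
        hi = len(xs)
    if hi - lo < 4:
        return []
    base = bernoulli_loglik(xs[lo:hi])
    best_gain, best_k = -1.0, None
    for k in range(lo + 2, hi - 1):
        gain = bernoulli_loglik(xs[lo:k]) + bernoulli_loglik(xs[k:hi]) - base
        if gain > best_gain:
            best_gain, best_k = gain, k
    if best_k is None or 2 * best_gain < threshold:
        return []  # the nested test fails: no change point in this segment
    return (binary_segmentation(xs, lo, best_k, threshold)
            + [best_k]
            + binary_segmentation(xs, best_k, hi, threshold))

# Idealized Bernoulli sequence with an abrupt rate change at index 100.
data = [0] * 100 + [1] * 100
print(binary_segmentation(data))  # → [100]
```

In a quantal-response setting the recovered change points define a step function estimate of the monotone response curve.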
NORMALIZING CONSTANTS
Abstract
Abstract. Computing (ratios of) normalizing constants of probability models is a fundamental computational problem for many statistical and scientific studies. Monte Carlo simulation is an effective technique, especially with complex and high-dimensional models. This paper aims to bring to the attention of general statistical audiences some effective methods originating from theoretical physics, and at the same time to explore these methods from a more statistical perspective, through establishing theoretical connections and illustrating their uses with statistical problems. We show that the acceptance ratio method and thermodynamic integration are natural generalizations of importance sampling, which is most familiar to statistical audiences. The former generalizes importance sampling through the use of a single “bridge” density and is thus a case of bridge sampling in the sense of Meng and Wong. Thermodynamic integration, which is also known in the numerical analysis literature as Ogata’s method for high-dimensional integration, corresponds to the use of infinitely many and continuously connected bridges (and thus a “path”). Our path sampling formulation offers more flexibility and thus potential efficiency to thermodynamic integration, and the search for optimal paths turns out to have close connections with the Jeffreys prior density and the Rao and Hellinger distances between two densities. We provide an informative theoretical example as well as two empirical examples (involving 17- to 70-dimensional integrations) to illustrate the potential and implementation of path sampling. We also discuss some open problems.
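The importance-sampling identity that bridge and path sampling generalize estimates a normalizing constant c1 = ∫ q1(x) dx as an expectation under a fully normalized proposal p0: c1 = E_{x~p0}[q1(x)/p0(x)]. A minimal one-dimensional sketch with an illustrative unnormalized target (the target, proposal, and sample size are our choices, not the paper's):

```python
import math
import random

random.seed(0)

def q1(x):
    """Unnormalized target density; its normalizing constant is
    the integral of exp(-x**4), which equals Gamma(1/4)/2 ≈ 1.813."""
    return math.exp(-x ** 4)

def p0(x):
    """Fully normalized proposal: the standard normal density."""
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

# Importance sampling: average the weights q1(x)/p0(x) over draws from p0.
draws = [random.gauss(0, 1) for _ in range(200_000)]
c1_hat = sum(q1(x) / p0(x) for x in draws) / len(draws)
print(round(c1_hat, 2))  # close to Gamma(1/4)/2 ≈ 1.81
```

When q1 and p0 overlap poorly the weights become highly variable, which is exactly the failure mode that a single bridge density, or a continuous path of bridges, is designed to mitigate.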
Incorporating Historical Control Information in Bioassay Testing Accounting for Survival Differences
Institute of Statistics Mimeo Series No. 1471, November 1984
Use of Historical Controls for Animal Experiments
Abstract
Statistical methods for the use of historical control data in testing for a trend in proportions in carcinogenicity rodent bioassays are reviewed. Asymptotic properties of the Hoel-Yanagawa exact conditional tests are developed and compared with the Tarone test. It is indicated that the Hoel-Yanagawa test is more powerful than the Tarone test. These tests depend on the beta-binomial parameters, which are estimated from historical data. The goodness of fit of beta-binomial distributions to historical data is illustrated by application to the historical control database of the National Toxicology Program. Finally, the sensitivity of the exact conditional test to the historical information is discussed and a conservative use of the test is considered.
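The beta-binomial parameters referred to above are estimated from the historical control database. As a rough illustration only, here is a crude method-of-moments fit of a Beta(a, b) distribution to observed control tumor proportions, ignoring within-study binomial noise; the data and function name are hypothetical and this is not the estimator used by these tests:

```python
def fit_beta_moments(props):
    """Crude method-of-moments Beta(a, b) fit to control tumor
    proportions (ignores within-study binomial sampling noise)."""
    m = sum(props) / len(props)                               # sample mean
    v = sum((p - m) ** 2 for p in props) / (len(props) - 1)   # sample variance
    # Match Beta mean m = a/(a+b) and variance v = m(1-m)/(a+b+1).
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Hypothetical historical control tumor proportions from past bioassays.
hist = [0.04, 0.10, 0.06, 0.08, 0.12, 0.05, 0.07, 0.09]
a, b = fit_beta_moments(hist)
print(round(a, 1), round(b, 1))  # fitted Beta shape parameters
```

A fit of this kind summarizes the between-study variation in control rates; the exact conditional tests then use the fitted parameters to account for that overdispersion.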
Chemical Carcinogens: A Review of the Science and Its Associated Principles By the U.S. Interagency Staff Group on Carcinogens*
Abstract
In order to articulate a view of chemical carcinogenesis that scientists generally hold in common today, and to draw upon this understanding to compose guiding principles that can serve as a basis for the regulatory agencies' efforts to establish guidelines for assessing carcinogenic risk under the legislative acts they are charged to implement, the Office of Science and Technology Policy (Executive Office, the White House) drew on the expertise of a number of regulatory agencies to elucidate present scientific views in critical areas of the major disciplines important to the process of risk assessment. The document is composed of two major sections, Principles and the State-of-the-Science. The latter consists of subsections on the mechanisms of carcinogenesis, short-term and long-term testing, and epidemiology, which are important components in the risk assessment step of hazard identification. These subsections are followed by one on exposure assessment, and a final section which includes analyses of dose-response (hazard) assessment and risk characterization. The principles are derived from considerations in each of the subsections. Because of present gaps in understanding, the principles contain judgmental (science policy) decisions on major unresolved issues as well as statements of what is generally accepted as fact. These judgments are basically assumptions which are responsible for much of the uncertainty in the process of risk assessment. There was an attempt to clearly distinguish policy and fact. The subsections of the State-of-the-Science portion provide the underlying support to the principles articulated, and to read the "Principles" section without a full appreciation of the State-of-the-Science section is to invite oversimplification and misinterpretation. Finally, suggestions are made for future research efforts which will improve the process of risk assessment.
Joint Statistical Meetings, Biometrics Section (to include ENAR & WNAR)
CRITICAL VALUES AND POWER FOR A TEST OF GROUP DIFFERENCES IN RODENT CARCINOGENICITY BIOASSAYS BASED ON THE BETA-BINOMIAL DISTRIBUTION WITH
Keywords: historical controls; likelihood ratio test; overdispersion; rodent carcinogenicity bioassay; significance level; two control groups