Results 11–20 of 278
Statistical issues in the design, analysis and interpretation of animal carcinogenicity studies. Environ Health Perspect 58: 385–392, 1984
Abstract

Cited by 16 (2 self)
Statistical issues in the design, analysis and interpretation of animal carcinogenicity studies are discussed. In the area of experimental design, issues that must be considered include randomization of animals, sample size considerations, dose selection and allocation of animals to experimental groups, and control of potentially confounding factors. In the analysis of tumor incidence data, survival differences among groups should be taken into account. It is important to try to distinguish between tumors that contribute to the death of the animal and "incidental" tumors discovered at autopsy in an animal dying of an unrelated cause. Life table analyses (appropriate for lethal tumors) and incidental tumor tests (appropriate for nonfatal tumors) are described, and the utilization of these procedures by the National Toxicology Program is discussed. Although past interpretations of carcinogenicity data have tended to focus on pairwise comparisons in general and high-dose effects in particular, the importance of trend tests should not be overlooked, since these procedures are more sensitive than pairwise comparisons for detecting carcinogenic effects. No rigid statistical "decision rule" should be employed in the interpretation of carcinogenicity data. Although the statistical significance of an observed tumor increase is perhaps the single most important piece of evidence used in the evaluation process, a number of biological factors must also be taken into account. The use of historical control data, the false-positive issue and the interpretation of negative trends are also discussed.
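The trend tests this abstract favors over pairwise comparisons can be illustrated with a minimal sketch of the Cochran-Armitage test for a dose-related trend in tumor incidence. The group counts and dose scores below are hypothetical, not taken from the paper:

```python
import math

def cochran_armitage_z(tumors, animals, doses):
    """z-statistic for a linear trend in proportions across dose groups."""
    n_total = sum(animals)
    p0 = sum(tumors) / n_total                     # pooled tumor rate under H0
    t = sum(d * x for d, x in zip(doses, tumors))  # dose-weighted tumor count
    mean = p0 * sum(d * n for d, n in zip(doses, animals))
    var = p0 * (1 - p0) * (
        sum(d * d * n for d, n in zip(doses, animals))
        - sum(d * n for d, n in zip(doses, animals)) ** 2 / n_total
    )
    return (t - mean) / math.sqrt(var)

# Hypothetical bioassay: 50 animals per group, dose groups scored 0..3
z = cochran_armitage_z(tumors=[2, 5, 10, 18], animals=[50, 50, 50, 50], doses=[0, 1, 2, 3])
```

A large positive z (here about 4.4) signals an increasing dose-response trend even when some individual pairwise comparisons against control would not reach significance.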
Bayesian Variable Selection for Proportional Hazards Models, 1996
Abstract

Cited by 15 (1 self)
The authors consider the problem of Bayesian variable selection for proportional hazards regression models with right-censored data. They propose a semiparametric approach in which a nonparametric prior is specified for the baseline hazard rate and a fully parametric prior is specified for the regression coefficients. For the baseline hazard, they use a discrete gamma process prior, and for the regression coefficients and the model space, they propose a semiautomatic parametric informative prior specification that focuses on the observables rather than the parameters. To implement the methodology, they propose a Markov chain Monte Carlo method to compute the posterior model probabilities. Examples using simulated and real data are given to demonstrate the methodology.
Bayesian information criterion for censored survival models. Biometrics
Abstract

Cited by 15 (2 self)
We investigate the Bayesian Information Criterion (BIC) for variable selection in models for censored survival data. Kass and Wasserman (1995) showed that BIC provides a close approximation to the Bayes factor when a unit-information prior on the parameter space is used. We propose a revision of the penalty term in BIC so that it is defined in terms of the number of uncensored events instead of the number of observations. For the simplest censored data model, that of exponential distributions of survival times (i.e. a constant hazard rate), this revision results in a better approximation to the exact Bayes factor based on a conjugate unit-information prior. In the Cox proportional hazards regression model, we propose defining BIC in terms of the maximized partial likelihood. Using the number of deaths rather than the number of individuals in the BIC penalty term corresponds to a more realistic prior on the parameter space, and is shown to improve predictive performance for assessing stroke risk in the Cardiovascular Health Study.
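The proposed revision is simple enough to sketch directly. The function names and the censoring scenario below are mine, for illustration; the only change from standard BIC is that the penalty uses the number of uncensored events:

```python
import math

def bic_standard(loglik, n_params, n_obs):
    """Standard BIC: penalty grows with the total sample size."""
    return -2.0 * loglik + n_params * math.log(n_obs)

def bic_events(loglik, n_params, n_events):
    """Revised BIC: penalty uses the number of deaths (uncensored events)."""
    return -2.0 * loglik + n_params * math.log(n_events)

# Hypothetical fit: 500 subjects but only 120 observed deaths.
# With heavy censoring the event-based penalty is smaller, so the
# revised criterion penalizes additional covariates less harshly.
b_std = bic_standard(loglik=-180.0, n_params=4, n_obs=500)
b_evt = bic_events(loglik=-180.0, n_params=4, n_events=120)
```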
Joint modeling of longitudinal and time-to-event data: an overview. Statistica Sinica, 2004
Abstract

Cited by 13 (0 self)
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and a time-to-event. Considerable recent interest has focused on so-called joint models, where models for the event time distribution and longitudinal data are taken to depend on a common set of latent random effects. In the literature, precise statement of the underlying assumptions typically made for these models has been rare. We review the rationale for and development of joint models, offer insight into the structure of the likelihood for model parameters that clarifies the nature of common assumptions, and describe and contrast some of our recent proposals for implementation and inference.
Time-varying functional regression for predicting remaining lifetime distributions from longitudinal trajectories. Biometrics, 2005
Abstract

Cited by 12 (7 self)
A recurring objective in longitudinal studies on aging and longevity has been the investigation of the relationship between age-at-death and current values of a longitudinal covariate trajectory that quantifies reproductive or other behavioral activity. We propose a novel technique for predicting age-at-death distributions for situations where an entire covariate history is included in the predictor. The predictor trajectories up to current time are represented by time-varying functional principal component scores, which are continuously updated as time progresses and are considered to be time-varying predictor variables that are entered into a class of time-varying functional regression models that we propose. We demonstrate for biodemographic data how these methods can be applied to obtain predictions for age-at-death and estimates of remaining lifetime distributions, including estimates of quantiles and of prediction intervals for remaining lifetime. Estimates and predictions are obtained for individual subjects, based on their observed behavioral trajectories, and include a dimension-reduction step that is implemented by projecting on a single index. The proposed techniques are illustrated with data on longitudinal daily egg-laying for female medflies, predicting remaining lifetime and age-at-death distributions from individual event histories observed up to current time.
The Role of Frailty Models and Accelerated Failure Time Models in Describing Heterogeneity Due to Omitted Covariates, 1997
Abstract

Cited by 11 (0 self)
INTRODUCTION Statistical modelling of heterogeneity may be based on stratification according to factors, regression on covariates, or by assuming a probability distribution of the interindividual variation. In survival analysis Vaupel et al. coined the phrase "frailty" in connection with a particular version of such a stochastic model, in which individual i was assumed to have death intensity Z_i λ(a) at age a, where the random variable Z_i (the "frailty") is assumed to have a gamma distribution. The assumptions that the randomness is age-independent and that it acts multiplicatively on an underlying intensity λ(a) are in principle arbitrary but have been taken as the basis for much subsequent work on random heterogeneity in survival analysis. Useful surveys are by Andersen et al. (Chapter IX), Nielsen et al., Klein et al., Aalen, Schumacher et al. and Hougaard. The frailty models are likely to be particularly useful for modelling multivariate survival times, whether ...
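The gamma frailty model described here is straightforward to simulate: each subject draws a mean-one gamma frailty Z_i and then, taking a constant baseline intensity λ for simplicity, an exponential lifetime with rate Z_i·λ. All parameter values below are illustrative, not from the paper:

```python
import random

def simulate_frailty_lifetimes(n, base_rate=0.1, frailty_var=0.5, seed=1):
    """Lifetimes under a gamma frailty model with constant baseline hazard."""
    rng = random.Random(seed)
    shape = 1.0 / frailty_var    # shape * scale = 1, so E[Z] = 1, Var[Z] = frailty_var
    times = []
    for _ in range(n):
        z = rng.gammavariate(shape, frailty_var)    # individual frailty Z_i
        times.append(rng.expovariate(z * base_rate))
    return times

times = simulate_frailty_lifetimes(20000)
```

Marginally the population survival function is (1 + frailty_var·λ·t)^(−1/frailty_var), heavier-tailed than the exponential: frail individuals die early and robust ones persist, which is exactly the selection effect that frailty models are meant to capture.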
Rate of Convergence for Hazard Regression. Scand. J. Statist, 1994
Abstract

Cited by 10 (1 self)
The logarithm of the conditional hazard function of a survival time given one or more covariates is approximated by a function having the form of a specified sum of functions of at most d of the variables. Subject to this form, the approximation is chosen to maximize the expected conditional log-likelihood. Maximum likelihood and sums of tensor products of polynomial splines are used to construct an estimate of this approximation based on a random sample. The components of this estimate possess a rate of convergence that depends only on d and a suitably defined smoothness parameter. KEY WORDS: Conditional hazard function; Maximum likelihood; Tensor product splines.
Health effects of gasoline exposure. II. Mortality patterns of distribution workers in the United States. Environ Health Perspect 101 (Suppl), 1993
Abstract

Cited by 10 (1 self)
In this study, the cohort consisted of 18,135 distribution employees with potential exposure to gasoline for at least one year at land-based terminals (n = 9,026) or on marine vessels (n = 9,109) between 1946 and 1985. The primary objective of the study was to determine the relationship, if any, between exposure to gasoline and mortality from kidney cancer or leukemia. In addition, other causes of death of secondary interest included multiple myeloma and heart diseases. The mortality of the cohort was observed through June 30, 1989. The results of this study indicated that there was no increased mortality from either kidney cancer or leukemia among marketing and marine distribution employees who were exposed to gasoline in the petroleum industry when compared to the general population. Among the land-based terminal employees, the kidney cancer standardized mortality ratio (SMR) was 65.4 (12 deaths) and the leukemia SMR was 89.1 (27 deaths). For the marine cohort, the SMRs were 83.7 for kidney cancer (12 deaths) and 70.0 for leukemia (16 deaths), respectively. More importantly, based on internal comparisons, there was no association between mortality from kidney cancer or leukemia and various indices of gasoline exposure. In particular, neither duration of employment, ...
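The standardized mortality ratios quoted above follow the usual definition, 100 × observed/expected deaths. A small sketch; the expected count below is back-calculated for illustration, not a figure from the study:

```python
def smr(observed, expected):
    """Standardized mortality ratio on the conventional per-100 scale."""
    return 100.0 * observed / expected

# Hypothetically, 12 observed kidney-cancer deaths against roughly 18.3
# expected from general-population rates would reproduce an SMR near
# the reported 65.4
ratio = smr(observed=12, expected=18.35)
```

An SMR below 100 means fewer deaths were observed in the cohort than would be expected at the reference population's rates.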
What’s in a picture? Evidence of discrimination from Prosper.com, 2008
Abstract

Cited by 10 (0 self)
We analyze discrimination in a new type of credit market known as peer-to-peer lending. Specifically, we examine how lenders in this online market respond to signals of characteristics such as race, age, and gender that are conveyed via pictures and text. We find evidence of significant racial disparities; loan listings with blacks in the attached picture are 25 to 35 percent less likely to receive funding than those of whites with similar credit profiles. Conditional on receiving a loan, the interest rate paid by blacks is 60 to 80 basis points higher than that paid by comparable whites. Though less significant than the effects for race, we find that the market also discriminates somewhat against the elderly and the overweight, but in favor of women and those that signal military involvement. Despite the higher average interest rates charged to blacks, lenders making such loans earn a lower net return compared to loans made to whites with similar credit profiles because blacks have higher relative default rates. This pattern of net returns is inconsistent with theories of accurate statistical discrimination (equal net returns) or costly taste-based preferences against loaning money to black borrowers (higher net returns for blacks). It is instead consistent with partial taste-based preferences by lenders in favor of blacks over whites or with systematic underestimation by lenders of relative default rates between blacks and whites.
Group Sequential Analysis Incorporating Covariate Information, 1997
Abstract

Cited by 8 (8 self)
In this paper we survey existing results concerning the joint distribution of the sequence of estimates of the parameter vector when a model is fitted to accumulating data and we provide a unified theory which explains the "independent increments" structure commonly seen in group sequential test statistics. Our theory covers normal linear models, including the case of correlated observations, and asymptotic results extend to generalized linear models and the proportional hazards regression model for survival data. The asymptotic results are derived using standard methods for the nonsequential case and they hold as long as these nonsequential techniques are applicable at each individual analysis. In all cases, the joint distribution of the sequence of parameter estimates has the same form, exactly or asymptotically, as that of the sequence of means of an increasing number of independent, identically distributed normal variables. Thus, our results provide the formal basis for extending...
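The "independent increments" structure can be checked empirically in the simplest setting the abstract mentions, means of an increasing number of i.i.d. normals: the interim mean X̄1 and final mean X̄2 are strongly correlated, but the information-weighted increment 2·X̄2 − X̄1 is uncorrelated with X̄1. This simulation is my own illustration, not code from the paper:

```python
import random

def sequential_correlations(n_reps=50000, seed=7):
    """Correlations of (X̄1, X̄2) and of (X̄1, 2*X̄2 - X̄1) across replicates."""
    rng = random.Random(seed)
    m1, m2, inc = [], [], []
    for _ in range(n_reps):
        x1 = rng.gauss(0.0, 1.0)
        x2 = rng.gauss(0.0, 1.0)
        xbar2 = (x1 + x2) / 2.0
        m1.append(x1)                   # mean at first interim analysis
        m2.append(xbar2)                # mean at second analysis
        inc.append(2.0 * xbar2 - x1)    # information-weighted increment (= x2)

    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
        sa = (sum((u - ma) ** 2 for u in a) / n) ** 0.5
        sb = (sum((v - mb) ** 2 for v in b) / n) ** 0.5
        return cov / (sa * sb)

    return corr(m1, m2), corr(m1, inc)

c_means, c_increment = sequential_correlations()
```

The first correlation comes out near √(1/2) ≈ 0.71 while the second hovers near zero, which is what allows group sequential boundaries to be computed as if the successive increments were independent.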