Discrete-time survival mixture analysis for single and recurrent events using latent variables. Unpublished doctoral dissertation
, 2003
"... This article proposes a general latent variable approach to discretetime survival analysis of nonrepeatable events such as onset of drug use. It is showvn how the survival analysis can beformulated as a generalized latent class analysis of event history indicators. The latent class analysis can use ..."
Abstract

Cited by 21 (13 self)
This article proposes a general latent variable approach to discrete-time survival analysis of non-repeatable events such as onset of drug use. It is shown how the survival analysis can be formulated as a generalized latent class analysis of event history indicators. The latent class analysis can use covariates and can be combined with the joint modeling of other outcomes such as repeated measures for a related process. It is shown that conventional discrete-time survival analysis corresponds to a single-class latent class analysis. Multiple-class extensions are proposed, including the special cases of a class of long-term survivors and classes defined by outcomes related to survival. The estimation uses a general latent variable framework, including both categorical and continuous latent variables, and is incorporated in the Mplus program. Estimation is carried out using maximum likelihood via the EM algorithm. Two examples serve as illustrations. The first example concerns recidivism after incarceration in a randomized field experiment. The second example concerns school removal related to the development of aggressive behavior in the classroom.
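The single-class case mentioned in this abstract reduces to conventional discrete-time survival analysis, whose hazard h(j) = P(T = j | T >= j) can be estimated directly from a life table. A minimal numpy sketch of that building block (function names are illustrative, not from the dissertation or from Mplus):

```python
import numpy as np

def discrete_hazard(times, events, max_period):
    """Life-table estimate of the discrete-time hazard h(j) = P(T = j | T >= j).

    times:  observed period for each subject (event or censoring period, 1-based)
    events: 1 if the event occurred in that period, 0 if censored
    """
    times = np.asarray(times)
    events = np.asarray(events)
    hazard = np.zeros(max_period)
    for j in range(1, max_period + 1):
        at_risk = np.sum(times >= j)                    # still in the risk set at period j
        failed = np.sum((times == j) & (events == 1))   # events occurring in period j
        hazard[j - 1] = failed / at_risk if at_risk else 0.0
    return hazard

def survival_from_hazard(hazard):
    """S(j) = prod_{k <= j} (1 - h(k)): the discrete-time survival function."""
    return np.cumprod(1.0 - hazard)
```

The mixture extensions in the dissertation replace this single risk set with class-specific hazards estimated via EM.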
A generalized F mixture model for cure rate estimation
 Statistics in Medicine
, 1998
"... Cure rate estimation is an important issue in clinical trials for diseases such as lymphoma and breast cancer and mixture models are the main statistical methods. In the last decade, mixture models under different distributions, such as exponential, Weibull, lognormal and Gompertz, have been discus ..."
Abstract

Cited by 5 (0 self)
Cure rate estimation is an important issue in clinical trials for diseases such as lymphoma and breast cancer, and mixture models are the main statistical methods. In the last decade, mixture models under different distributions, such as exponential, Weibull, lognormal and Gompertz, have been discussed and used. However, these models involve stronger distributional assumptions than is desirable, and inferences may not be robust to departures from these assumptions. In this paper, a mixture model is proposed using the generalized F distribution family. Although this family is seldom used because of computational difficulties, it has the advantage of being very flexible and including many commonly used distributions as special cases. The generalized F mixture model can relax the usual stronger distributional assumptions and allow the analyst to uncover structure in the data that might otherwise have been missed. This is illustrated by fitting the model to data from large-scale clinical trials with long follow-up of lymphoma patients. Computational problems with the model and model selection methods are discussed. Comparisons of maximum likelihood estimates with those obtained from mixture models under other distributions are included.
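The mixture cure model this paper generalizes writes the population survival function as S(t) = pi + (1 - pi) * S_u(t), where pi is the cured fraction. A minimal sketch using Weibull latency, one of the simpler special cases contained in the generalized F family (names and parameterisation are illustrative, not the paper's code):

```python
import numpy as np

def mixture_cure_survival(t, cure_frac, shape, scale):
    """Population survival S(t) = pi + (1 - pi) * S_u(t) for a mixture cure
    model with Weibull latency (a special case of the generalized F family).

    cure_frac: pi, the cured proportion
    shape, scale: Weibull parameters of the uncured subjects' survival times
    """
    t = np.asarray(t, dtype=float)
    s_uncured = np.exp(-(t / scale) ** shape)   # Weibull survival of the uncured
    return cure_frac + (1.0 - cure_frac) * s_uncured
```

As t grows, S(t) levels off at the cure fraction pi rather than tending to zero, which is the defining feature of cure rate models.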
A TEST OF SEVERAL PARAMETRIC STATISTICAL MODELS FOR ESTIMATING SUCCESS RATE IN THE TREATMENT OF CARCINOMA CERVIX UTERI
, 1975
"... Summary.The parametric statistical models discussed include all those which have previously been described in the literature (Boag, 1948lognormal; Berkson and Gage, 1952negative exponential; Haybittle, 1959extrapolated actuarial) and the basic data used to test the models comprised some 3000 cas ..."
Abstract

Cited by 4 (0 self)
Summary. The parametric statistical models discussed include all those which have previously been described in the literature (Boag, 1948: lognormal; Berkson and Gage, 1952: negative exponential; Haybittle, 1959: extrapolated actuarial), and the basic data used to test the models comprised some 3000 case histories of patients treated between 1945 and 1962. The histories were followed up during the period 1969-71 and thus provided adequate information to validate long-term survival fractions predicted using short-term follow-up data. The results with the lognormal model showed that for series of staged carcinoma cervix patients treated during a 5-year period, satisfactory estimates of long-term survival fractions could be predicted after a minimum waiting period of 3 years for stages I and II, and 2 years for stage III. The model should be used with a value assumed for the lognormal parameter S in the range S = 0.35 to S = 0.40. Although alternative models often gave adequate predictions, the lognormal proved to be the most consistent model. This model may therefore now be used with more confidence for prospective studies on carcinoma cervix series and can provide good estimates of long-term ...
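Boag's lognormal model adds a cured fraction c to a lognormal distribution of survival times for the rest, S(t) = c + (1 - c) * (1 - Phi((log10 t - M) / S)). A small sketch, assuming M and S are the mean and standard deviation of log10 survival time as in the paper's parameterisation (the function name is hypothetical):

```python
import math

def lognormal_cure_survival(t, cured, M, S):
    """Boag-type model: a cured fraction plus lognormally distributed
    survival times for the uncured. M and S are the mean and s.d. of
    log10(t); the paper recommends S in the range 0.35-0.40 for staged
    carcinoma cervix series."""
    z = (math.log10(t) - M) / S
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    return cured + (1.0 - cured) * (1.0 - phi)
```

With S held fixed in the recommended range, only c and M need to be estimated from short-term follow-up, which is what makes early prediction of the long-term survival fraction feasible.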
Mixture Models in Econometric Duration Analysis
, 2002
"... Econometric duration analysis has become an important part of methodology in econometrics, bringing forth a plenty of applications. The probability distribution of the duration of a time span is modeled through its conditional hazard rate given the covariates. When some of the covariates are unobser ..."
Abstract

Cited by 3 (0 self)
Econometric duration analysis has become an important part of methodology in econometrics, bringing forth a wealth of applications. The probability distribution of the duration of a time span is modeled through its conditional hazard rate given the covariates. When some of the covariates are unobservable, the duration, given the observable covariates, has a mixture distribution. The paper surveys and discusses...
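A standard example of the mixture the abstract describes is the gamma-frailty model, where an unobserved multiplicative term on the hazard is integrated out to give a closed-form marginal survival function. A sketch of that textbook case (not from the paper itself):

```python
def frailty_marginal_survival(cum_hazard, theta):
    """Marginal survival when an unobserved Gamma(1/theta, 1/theta) frailty
    multiplies the hazard: S(t) = (1 + theta * Lambda(t)) ** (-1/theta),
    where Lambda(t) is the cumulative hazard given the observed covariates
    and theta is the frailty variance. As theta -> 0 this tends to the
    no-heterogeneity case exp(-Lambda(t))."""
    return (1.0 + theta * cum_hazard) ** (-1.0 / theta)
```

Ignoring such unobserved heterogeneity biases estimated hazards toward negative duration dependence, which is the central identification issue the mixture literature addresses.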
Statistical analysis of the bioassay of continuous carcinogens
 Brit. J. Cancer
, 1972
"... Summary.In an experiment consisting of the continuous constant application of various carcinogenic regimens to a pure strain of experimental animals for a long period, the cancer incidence rates so caused may be studied and compared by the fit of an appropriate class of statistical distributions. I ..."
Abstract

Cited by 3 (1 self)
Summary. In an experiment consisting of the continuous constant application of various carcinogenic regimens to a pure strain of experimental animals for a long period, the cancer incidence rates so caused may be studied and compared by the fit of an appropriate class of statistical distributions. In this paper we show that a Weibull distribution, in which the age-specific cancer incidence rate rises as a power of time since first risk, is more appropriate than a lognormal distribution. If the Weibull family of distributions is used, more information can be extracted from the data, and differences of toxicity between various regimens will not bias the comparison of their carcinogenic forces. Carcinogenesis produced by continued application of a carcinogen to mouse skin is becoming an increasingly common technique of assay of the carcinogenic forces of various substances. It has been pointed out (Pike and Roe, 1963) that a simple count of the number of cancers induced in a particular group is not a satisfactory measure of carcinogenic force, since cancers commonly occur late in life and a toxic treatment, although highly carcinogenic, may produce only a small number of cancers if it kills off a substantial fraction of the animals before the main cancer-susceptible age range is reached. Allowance for the effects of intercurrent deaths on the numbers of cancers produced is therefore necessary before treatments can be compared. The method of Pike and Roe allows the unbiased estimation of the proportion of animals which would still be alive at a particular time if all causes of death other than cancer were eliminated, and the authors suggest that a plot of this estimated proportion against time gives the best description possible of the carcinogenic effects of a treatment. Although ...
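The paper's key claim is that the age-specific incidence rate rises as a power of time, which is exactly the Weibull hazard. A minimal sketch of the two quantities involved (function names are illustrative):

```python
import math

def weibull_hazard(t, k, lam):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1): the age-specific
    incidence rate rises as the (k-1)-th power of time since first risk."""
    return (k / lam) * (t / lam) ** (k - 1)

def weibull_incidence(t, k, lam):
    """F(t) = 1 - exp(-(t/lam)**k): cumulative proportion with cancer by
    time t, assuming intercurrent deaths have been allowed for in the
    manner of Pike and Roe."""
    return 1.0 - math.exp(-((t / lam) ** k))
```

Because h(t) is a pure power of t, log h(t) is linear in log t, which is what makes the Weibull fit easy to check graphically against bioassay data.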
Assessing Placebo Response Using Bayesian Hierarchical Survival Models
, 1995
"... The National Institute of Mental Health (NIMH) Collaborative Study of LongTerm Maintenance Drug Therapy in Recurrent Affective Illness was a multicenter randomized controlled clinical trial designed to determine the efficacy of a pharmacotherapy for the prevention of the recurrence of unipolar affe ..."
Abstract

Cited by 2 (0 self)
The National Institute of Mental Health (NIMH) Collaborative Study of LongTerm Maintenance Drug Therapy in Recurrent Affective Illness was a multicenter randomized controlled clinical trial designed to determine the efficacy of a pharmacotherapy for the prevention of the recurrence of unipolar affective disorders. The outcome of interest in this study was the time until the recurrence of a depressive episode. The data show much heterogeneity between centers for the placebo group. The aim of this paper is to use Bayesian hierarchical survival models to investigate the heterogeneity of placebo effects among centers in the NIMH study. This heterogeneity is explored in terms of the marginal posterior distributions of parameters of interest and predictive distributions of future observations. The Gibbs sampling algorithm is used to approximate posterior and predictive distributions. Sensitivity of results to the assumption of a constant hazard survival distribution at the first stage of th...
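The paper's first-stage assumption of a constant hazard makes each center's likelihood exponential, for which a gamma prior is conjugate. A minimal non-hierarchical sketch of that conjugate update, the per-center building block a Gibbs sampler would cycle through (the full hierarchical model in the paper ties the centers together through shared hyperparameters; names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_rate_draws(n_events, total_time, a=1.0, b=1.0, n_draws=1000):
    """Posterior draws of one center's constant hazard rate under an
    exponential likelihood and a Gamma(a, b) prior (rate parameterisation):

        lambda | data ~ Gamma(a + n_events, b + total_time)

    n_events:   recurrences observed at the center
    total_time: total follow-up time contributed by the center's patients
    """
    return rng.gamma(shape=a + n_events, scale=1.0 / (b + total_time),
                     size=n_draws)
```

Comparing these per-center posteriors is a crude version of the heterogeneity assessment the paper carries out with full hierarchical shrinkage.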
Analysis of progressively censored competing risks data
 In Handbook of Statistics 23: Advances in Survival Analysis, (Editors) N. Balakrishnan and C.R. Rao
, 2004
"... In several studies in Survival Analysis, the cause of failure / death of items or individuals may be attributable to more than one cause. In this chapter, we consider the competing risks model when the data is progressively TypeII censored. We provide di®erent techniques for the analysis of the mod ..."
Abstract

Cited by 1 (0 self)
In several studies in Survival Analysis, the cause of failure/death of items or individuals may be attributable to more than one cause. In this chapter, we consider the competing risks model when the data is progressively Type-II censored. We provide different techniques for the analysis of the model under the assumption of independent causes of failure and exponential lifetimes. The maximum likelihood estimators of the different parameters and the UMVUEs are obtained. In addition, the exact distributions of the different estimators are derived. We also derive the UMP and UMPU test for the equality of the failure rates of the competing risks. We consider the Bayesian estimation using the Inverse Gamma distribution as a prior. To assess the performance of all these estimators, confidence intervals are developed using the exact, asymptotic, and bootstrap distributions. In the Bayesian context, we develop credible intervals for the parameters. The different methods are compared through a simulation study, and the analysis of a real dataset. Finally, we also provide some insight into inference under the Weibull model and dependent causes of failure.
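Under independent exponential competing risks, the cause-specific MLE takes the familiar closed form: failures from cause j divided by the total time on test. Under progressive Type-II censoring the withdrawn units' observation times also enter the denominator, which the sketch below captures by summing all observed times (notation is illustrative, not the chapter's):

```python
import numpy as np

def competing_risk_mles(times, causes, n_causes):
    """MLEs of cause-specific exponential failure rates under independent
    competing risks: lambda_j = (# failures from cause j) / (total time on test).

    times:  observed lifetime or withdrawal time for each unit
    causes: 0 for a censored/withdrawn unit, j in 1..n_causes for a failure
            attributed to cause j
    """
    times = np.asarray(times, dtype=float)
    causes = np.asarray(causes)
    total_time = times.sum()   # every unit contributes its time on test
    return np.array([np.sum(causes == j) / total_time
                     for j in range(1, n_causes + 1)])
```

The chapter goes well beyond this point estimate, deriving exact distributions, UMVUEs, and exact/bootstrap intervals for these rates.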
Modeling Migration Dynamics of Immigrants
 Tinbergen Institute Discussion Paper TI
, 2008
"... I ..."
Parametric Model Discrimination for Heavily Censored Survival Data
, 2008
"... Simultaneous discrimination among various parametric lifetime models is an important step in the parametric analysis of survival data. We consider a plot of the skewness versus the coefficient of variation for the purpose of discriminating among parametric survival models. We extend the method of C ..."
Abstract

Cited by 1 (0 self)
Simultaneous discrimination among various parametric lifetime models is an important step in the parametric analysis of survival data. We consider a plot of the skewness versus the coefficient of variation for the purpose of discriminating among parametric survival models. We extend the method of Cox & Oakes from complete to censored data by developing an algorithm based on a competing risks model and kernel function estimation. A byproduct of this algorithm is a nonparametric survival function estimate.
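The Cox and Oakes idea the paper extends is to place the sample point (coefficient of variation, skewness) on a plot alongside each candidate family's theoretical curve. A sketch of the complete-data version of that statistic (the paper's contribution is handling heavy censoring, which this sketch does not attempt):

```python
import numpy as np

def cv_skewness(x):
    """Sample coefficient of variation and skewness of lifetimes x.
    Plotting this point against the theoretical (CV, skewness) curves of
    candidate families (exponential: (1, 2); Weibull, lognormal, ... trace
    curves) is the basis of the discrimination plot."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std(ddof=0)
    skew = np.mean(((x - m) / s) ** 3)
    return s / m, skew
```

For censored data the paper replaces these raw moments with moments computed from a kernel-smoothed survival estimate obtained via a competing risks formulation.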
CURE RATE MODELS – A PARTIAL REVIEW WITH AN APPLICATION TO RECURRENT EVENT OR COUNT DATA
"... Cure rate models are survival models consisting of a cured fraction and an uncured fraction. These models are being widely used in analyzing data from cancer clinical trials. A model to estimate the cure fraction was first developed by Boag in 1949 and later developed by Berkson and Gage in 1952. It ..."
Abstract

Cited by 1 (1 self)
Cure rate models are survival models consisting of a cured fraction and an uncured fraction. These models are widely used in analyzing data from cancer clinical trials. A model to estimate the cure fraction was first developed by Boag in 1949 and later developed by Berkson and Gage in 1952. It was called the mixture model; it is also known as the standard cure rate model. Yakovlev et al. in 1993 developed an alternative to the mixture model, known as the bounded cumulative hazard (BCH) model. It was developed by considering the number of metastasis-competent tumor cells which were left active even after the initial treatment for a cancer patient. The model could overcome some of the drawbacks of the standard cure rate model. Parametric and semiparametric versions of the two models have been extensively studied. The bivariate extensions of the univariate cure rate models include the joint modeling of times to relapse of the disease at two different organs, times to relapse of disease and death, times to occurrence of primary and secondary complications of a disease, and joint modeling of time-to-event data and longitudinal data. These extensions ...
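The BCH (promotion-time) model described here has the compact form S(t) = exp(-theta * F(t)), which, unlike the standard mixture model, stays within the proportional hazards family. A minimal sketch (function names are illustrative, not from the review):

```python
import math

def bch_survival(t, theta, latency_cdf):
    """Bounded cumulative hazard (promotion-time) cure model:
    S(t) = exp(-theta * F(t)), where theta is the mean number of
    metastasis-competent tumor cells left active after treatment and
    F is the cdf of a single cell's progression time.
    The cure fraction is the limit S(inf) = exp(-theta)."""
    return math.exp(-theta * latency_cdf(t))

def bch_cure_fraction(theta):
    """Proportion cured: probability that no competent cell remains."""
    return math.exp(-theta)
```

Covariates are typically entered through theta, so the cured proportion responds to treatment, a feature often cited as an advantage over the standard mixture formulation.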