Results 11-20 of 153
Functional adaptive model estimation
 J. Amer. Statist. Assoc.
, 2005
Abstract

Cited by 24 (7 self)
In this article we are interested in modeling the relationship between a scalar, Y, and a functional predictor, X(t). We introduce a highly flexible approach called "Functional Adaptive Model Estimation" (FAME) which extends generalized linear models (GLM), generalized additive models (GAM) and projection pursuit regression (PPR) to handle functional predictors. The FAME approach can model any of the standard exponential family of response distributions that are assumed for GLM or GAM while maintaining the flexibility of PPR. For example, standard linear or logistic regression with functional predictors, as well as far more complicated models, can easily be applied using this approach. A functional principal components decomposition of the predictor functions is used to aid visualization of the relationship between X(t) and Y. We also show how the FAME procedure can be extended to deal with multiple functional and standard finite-dimensional predictors, possibly with missing data. The FAME approach is illustrated on simulated data as well as on the prediction of arthritis based on bone shape. We end with a discussion of the relationships between standard regression approaches, their extensions to functional data and FAME.
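The core pipeline the abstract describes (a functional principal components decomposition of the curves feeding the scores into a GLM) can be sketched as below. This is a hedged illustration, not the authors' implementation: the simulated data, the two-component truncation, and the logistic link are all assumptions made for the sketch.

```python
# Sketch: FPCA of discretized curves X(t), then logistic regression (a GLM)
# on the leading principal component scores. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 50                       # n curves observed on a grid of T points
t = np.linspace(0, 1, T)

# simulate curves: random combinations of two smooth basis functions + noise
scores_true = rng.normal(size=(n, 2))
X = scores_true @ np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]) \
    + 0.1 * rng.normal(size=(n, T))

# binary response driven by the first component score
p = 1 / (1 + np.exp(-1.5 * scores_true[:, 0]))
y = rng.binomial(1, p)

# Step 1: functional PCA via eigendecomposition of the sample covariance
Xc = X - X.mean(axis=0)
vals, vecs = np.linalg.eigh(Xc.T @ Xc / n)
order = np.argsort(vals)[::-1]
phi = vecs[:, order[:2]]             # leading two eigenfunctions (discretized)
Z = Xc @ phi                         # FPC scores used as GLM predictors

# Step 2: logistic regression on the scores, fitted by Newton-Raphson
Zd = np.column_stack([np.ones(n), Z])
beta = np.zeros(Zd.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-Zd @ beta))
    W = mu * (1 - mu)                # IRLS weights
    beta += np.linalg.solve(Zd.T * W @ Zd, Zd.T @ (y - mu))

acc = np.mean((1 / (1 + np.exp(-Zd @ beta)) > 0.5) == y)
print(f"in-sample accuracy: {acc:.2f}")
```

Replacing the linear index in step 2 with a ridge function, as in projection pursuit regression, would move the sketch closer to the full FAME model.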
Functional linear regression analysis for longitudinal data
 Ann. Statist.
, 2005
Abstract

Cited by 21 (6 self)
We propose nonparametric methods for functional linear regression which are designed for sparse longitudinal data, where both the predictor and response are functions of a covariate such as time. Predictor and response processes have smooth random trajectories, and the data consist of a small number of noisy repeated measurements made at irregular times for a sample of subjects. In longitudinal studies, the number of repeated measurements per subject is often small and may be modeled as a discrete random number and, accordingly, only a finite and asymptotically nonincreasing number of measurements are available for each subject or experimental unit. We propose a functional regression approach for this situation, using functional principal component analysis, where we estimate the functional principal component scores through conditional expectations. This allows the prediction of an unobserved response trajectory from sparse measurements of a predictor trajectory. The resulting technique is flexible
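The key computational step the abstract mentions, estimating a subject's functional principal component scores by a conditional expectation when only a few noisy measurements are available, can be sketched as follows. The eigenfunction values, eigenvalues, and noise variance below are assumed known for illustration; in practice they are estimated from the pooled sample.

```python
# Sketch: conditional-expectation (best linear predictor) estimate of FPC
# scores from sparse, noisy observations of one subject's trajectory.
# All numerical inputs are made up for the illustration.
import numpy as np

phi = np.array([[1.0,  0.5],
                [1.0, -0.3],
                [1.0,  0.8]])        # eigenfunctions at the 3 observation times
lam = np.array([2.0, 0.5])           # eigenvalues (score variances)
sigma2 = 0.25                        # measurement-error variance
mu = np.zeros(3)                     # mean function at the observation times
y = np.array([1.2, -0.4, 0.9])       # the subject's sparse noisy measurements

# covariance of the observed vector: Phi diag(lam) Phi' + sigma^2 I
Sigma = phi @ np.diag(lam) @ phi.T + sigma2 * np.eye(3)

# E[xi | y] = diag(lam) Phi' Sigma^{-1} (y - mu)
xi_hat = np.diag(lam) @ phi.T @ np.linalg.solve(Sigma, y - mu)
print(xi_hat)
```

Plugging the estimated scores into the eigenfunction expansion then predicts the full (response) trajectory from the sparse measurements.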
Bayesian information criterion for censored survival models
 Biometrics
Abstract

Cited by 18 (2 self)
We investigate the Bayesian Information Criterion (BIC) for variable selection in models for censored survival data. Kass and Wasserman (1995) showed that BIC provides a close approximation to the Bayes factor when a unit-information prior on the parameter space is used. We propose a revision of the penalty term in BIC so that it is defined in terms of the number of uncensored events instead of the number of observations. For the simplest censored data model, that of exponential distributions of survival times (i.e., a constant hazard rate), this revision results in a better approximation to the exact Bayes factor based on a conjugate unit-information prior. In the Cox proportional hazards regression model, we propose defining BIC in terms of the maximized partial likelihood. Using the number of deaths rather than the number of individuals in the BIC penalty term corresponds to a more realistic prior on the parameter space, and is shown to improve predictive performance for assessing stroke risk in the Cardiovascular Health Study.
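The proposed revision amounts to replacing log(n) in the BIC penalty with log(d), where d is the number of uncensored events. A minimal sketch for the exponential (constant-hazard) case described in the abstract, with made-up data:

```python
# Sketch: conventional BIC penalty log(n) vs. the revised penalty log(d)
# (d = number of uncensored events) for an exponential survival model.
import math

times  = [2.0, 5.0, 1.0, 8.0, 3.0, 6.0, 4.0, 7.0]   # follow-up times
events = [1,   0,   1,   1,   0,   1,   1,   0]      # 1 = death observed
n, d = len(times), sum(events)

# exponential model: hazard-rate MLE = events / total exposure time,
# censored log-likelihood = d*log(rate) - rate*sum(times)
rate = d / sum(times)
loglik = d * math.log(rate) - rate * sum(times)

p = 1                                    # one free parameter (the rate)
bic_n = -2 * loglik + p * math.log(n)    # conventional BIC
bic_d = -2 * loglik + p * math.log(d)    # revised: penalize per event
print(f"BIC(log n) = {bic_n:.3f},  BIC(log d) = {bic_d:.3f}")
```

With censoring, d < n, so the revised criterion penalizes each parameter less, reflecting that censored observations carry less information than events.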
Local Likelihood and Local Partial Likelihood in Hazard Regression
 Ann. Statist.
, 1996
Abstract

Cited by 18 (4 self)
In survival analysis, the relationship between a survival time and a covariate is conveniently modeled with the proportional hazards regression model. This model usually assumes that the covariate has a log-linear effect on the hazard function. In this paper we consider the proportional hazards regression model with a nonparametric risk effect. We discuss estimation of the risk function and its derivatives in two cases: when the baseline hazard function is parametrized and when it is not parametrized. In the case of a parametric baseline hazard function, inference is based on a local version of the likelihood function, while in the case of a nonparametric baseline hazard, we use a local version of the partial likelihood. This results in maximum local likelihood estimators and maximum local partial likelihood estimators, respectively. We establish the asymptotic normality of the estimators. It turns out that both methods have the same asymptotic bias and variance in a common situation, eve...
Efficient estimation of the partly linear additive Cox model
, 1999
Abstract

Cited by 17 (4 self)
The partly linear additive Cox model is an extension of the (linear) Cox model and allows flexible modeling of covariate effects semiparametrically. We study asymptotic properties of the maximum partial likelihood estimator of this model with right-censored data using polynomial splines. We show that, with a range of choices of the smoothing parameter (the number of spline basis functions) required for estimation of the nonparametric components, the estimator of the finite-dimensional regression parameter is root-n consistent, asymptotically normal and achieves the semiparametric information bound. Rates of convergence for the estimators of the nonparametric components are obtained; they are comparable to the rates in nonparametric regression. Implementation of the estimation approach can be done easily and is illustrated by using a simulated example.
Random survival forests
, 2008
Abstract

Cited by 13 (6 self)
We introduce random survival forests, a random forests method for the analysis of right-censored survival data. New survival splitting rules for growing survival trees are introduced, as is a new missing data algorithm for imputing missing data. A conservation-of-events principle for survival forests is introduced and used to define ensemble mortality, a simple interpretable measure of mortality that can be used as a predicted outcome. Several illustrative examples are given, including a case study of the prognostic implications of body mass for individuals with coronary artery disease. Computations for all examples were implemented using the freely available R software package randomSurvivalForest.
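The "ensemble mortality" measure mentioned above builds on the Nelson-Aalen cumulative hazard estimate in each terminal node: a case's predicted mortality is the sum of its cumulative hazard over the observed event times. The sketch below shows only that final step on a toy node sample; the paper's tree-growing and splitting rules are not reproduced here.

```python
# Sketch: Nelson-Aalen cumulative hazard for a toy terminal-node sample,
# summed over event times to give a mortality-style prediction.
def nelson_aalen(times, events):
    """Cumulative hazard H(t) at each distinct event time (no ties in
    censoring assumed for this toy version)."""
    event_times = sorted({t for t, e in zip(times, events) if e})
    H, chf = 0.0, {}
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, e in zip(times, events) if e and ti == t)
        H += deaths / at_risk            # Nelson-Aalen increment
        chf[t] = H
    return chf

times  = [1.0, 2.0, 2.0, 3.0, 4.0, 5.0]  # made-up follow-up times
events = [1,   1,   0,   1,   0,   1]    # 1 = event observed
chf = nelson_aalen(times, events)
mortality = sum(chf.values())            # mortality-style summary for a case
print(chf)
print(f"mortality = {mortality:.3f}")
```

In the full method this cumulative hazard is averaged over all trees in the forest before being summed into the ensemble mortality.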
Graphical models for marked point processes based on local independence
 J. R. Statist. Soc. B
Abstract

Cited by 11 (3 self)
A new class of graphical models capturing the dependence structure of events that occur in time is proposed. The graphs represent so-called local independencies, meaning that the intensities of certain types of events are independent of some (but not necessarily all) events in the past. This dynamic concept of independence is asymmetric, similar to Granger non-causality, so that the corresponding local independence graphs differ considerably from classical graphical models. Hence a new notion of graph separation, called δ-separation, is introduced and implications for the underlying model as well as for likelihood inference are explored. Benefits regarding facilitation of reasoning about and understanding of dynamic dependencies as well as computational simplifications are discussed.
Penalized likelihood regression: General formulation and efficient approximation
, 2002
Abstract

Cited by 10 (6 self)
The authors consider a general formulation of penalized likelihood regression, which covers canonical and noncanonical links for exponential families as well as accelerated life models with censored survival data. They present an asymptotic analysis of convergence rates to justify a simple approach to the lower-dimensional approximation of the estimates. The lower-dimensional approximation allows for much faster numerical calculation, paving the way to the development of algorithms that scale well with large data sets.
Rate of Convergence for Hazard Regression
 Scand. J. Statist
, 1994
Abstract

Cited by 10 (1 self)
The logarithm of the conditional hazard function of a survival time given one or more covariates is approximated by a function having the form of a specified sum of functions of at most d of the variables. Subject to this form, the approximation is chosen to maximize the expected conditional log-likelihood. Maximum likelihood and sums of tensor products of polynomial splines are used to construct an estimate of this approximation based on a random sample. The components of this estimate possess a rate of convergence that depends only on d and a suitably defined smoothness parameter. KEY WORDS: Conditional hazard function; Maximum likelihood; Tensor product splines.
Regularized estimation in the accelerated failure time model with high-dimensional covariates
 Biometrics
, 2005
Abstract

Cited by 10 (4 self)
The need for analyzing failure time data with high-dimensional covariates arises in investigating the relationship between a censored survival outcome and microarray gene expression profiles. We consider two regularization approaches, the LASSO and the threshold gradient directed regularization, for variable selection and estimation in the accelerated failure time model with high-dimensional covariates based on Stute's weighted least squares method. The Stute estimator uses the Kaplan-Meier weights to account for censoring in the least squares criterion. The weighted least squares objective function makes the adaptation of this approach to high-dimensional covariate settings computationally feasible. We use V-fold cross-validation and a modified Akaike's Information Criterion for tuning parameter selection, and a bootstrap approach for variance estimation. The proposed method is evaluated using simulations and demonstrated with a real data example.
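The Kaplan-Meier (Stute) weights that turn censored estimation into a weighted least squares problem can be sketched as below: each ordered observation receives the Kaplan-Meier jump at its time if it is an event, and weight zero if censored. The data are made up, ties are ignored, and the paper's LASSO and threshold-gradient steps are omitted from this illustration.

```python
# Sketch: Kaplan-Meier (Stute) weights for weighted least squares under
# right censoring. Censored observations get weight zero; events get the
# Kaplan-Meier jump at their observed time.
def km_weights(times, events):
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])   # ascending by time
    w = [0.0] * n
    surv = 1.0                       # running Kaplan-Meier survival estimate
    for rank, i in enumerate(order):
        if events[i]:
            w[i] = surv / (n - rank)                   # KM jump at this event
        surv *= ((n - rank - 1) / (n - rank)) ** events[i]
    return w

times  = [3.0, 1.0, 4.0, 2.0, 5.0]   # observed follow-up times
events = [1,   1,   0,   0,   1]     # 1 = event, 0 = censored
w = km_weights(times, events)
print(w, sum(w))
```

Minimizing the w-weighted sum of squared residuals of log-time on the covariates then yields Stute's estimator, to which an L1 penalty can be added for variable selection as in the paper.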