## Inference in Generalized Additive Mixed Models Using Smoothing Splines (1999)

Citations: 45 (4 self)

### BibTeX

```
@MISC{Zhang99inferencein,
  author = {Xihong Lin and Daowen Zhang},
  title  = {Inference in Generalized Additive Mixed Models Using Smoothing Splines},
  year   = {1999}
}
```

### Abstract

In this paper, we propose generalized additive mixed models (GAMMs), an additive extension of generalized linear mixed models in the spirit of Hastie and Tibshirani (1990). This new class of models uses additive nonparametric functions to model covariate effects while accounting for overdispersion and correlation by adding random effects to the additive predictor. GAMMs encompass nested and crossed designs and are applicable to clustered, hierarchical and spatial data. We estimate the nonparametric functions using smoothing splines, and jointly estimate the smoothing parameters and the variance components using marginal quasi-likelihood. This marginal quasi-likelihood approach extends the restricted maximum likelihood approach used by Wahba (1985) and Kohn et al. (1991) in the classical nonparametric regression model, and by Zhang et al. (1998) in Gaussian nonparametric mixed models, where the smoothing parameter is treated as an extra variance component. Because maximizing the objective functions often requires numerical integration, double penalized quasi-likelihood (DPQL) is proposed for approximate inference. Frequentist and Bayesian inferences are compared. A key feature of the proposed method is that it allows systematic inference on all model components of GAMMs within a unified parametric mixed model framework. Specifically, estimation of the nonparametric functions, the smoothing parameters and the variance components in GAMMs can proceed by fitting a working GLMM using existing statistical software, which iteratively fits a linear mixed model to a modified dependent variable. When the data are sparse (e.g., binary), the DPQL estimators of the variance components are found to be subject t...
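The "modified dependent variable" step in the abstract can be sketched in miniature. The following is a hedged illustration, not the paper's implementation: it fits a spline-logistic model by penalized iteratively reweighted least squares, dropping the random effects and using a truncated-power basis with a fixed ridge penalty in place of a true smoothing spline with an estimated smoothing parameter. All variable names, the basis, and the penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary responses driven by a smooth covariate effect
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
f_true = np.sin(2 * np.pi * x)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-f_true)))

# Truncated-power cubic basis: a simple stand-in for a smoothing-spline basis
knots = np.linspace(0.1, 0.9, 10)
X = np.column_stack(
    [np.ones(n), x] + [np.clip(x - k, 0.0, None) ** 3 for k in knots]
)

# Ridge penalty on the spline coefficients only; lam plays the role of the
# smoothing parameter, which DPQL would instead estimate as a variance component
S = np.diag([0.0, 0.0] + [1.0] * len(knots))
lam = 1.0

beta = np.zeros(X.shape[1])
for _ in range(50):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    w = mu * (1.0 - mu)              # GLM working weights
    z = eta + (y - mu) / w           # the modified (working) dependent variable
    WX = X * w[:, None]
    # Each iteration is a penalized weighted least-squares fit to z,
    # mirroring "iteratively fits a linear mixed model" in the abstract
    beta_new = np.linalg.solve(X.T @ WX + lam * S, X.T @ (w * z))
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

f_hat = X @ beta  # fitted smooth on the linear-predictor scale
```

In the paper's full procedure, the working linear model is replaced by a working linear *mixed* model, so the random effects and the spline penalty are handled together as variance components within one mixed-model fit.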