Results 1–10 of 262
Classical and Bayesian inference in neuroimaging: Theory
NeuroImage, 2002
"... This paper reviews hierarchical observation models, used in functional neuroimaging, in a Bayesian light. It emphasizes the common ground shared by classical and Bayesian methods to show that conventional analyses of neuroimaging data can be usefully extended within an empirical Bayesian framework. ..."
Abstract

Cited by 99 (37 self)
This paper reviews hierarchical observation models, used in functional neuroimaging, in a Bayesian light. It emphasizes the common ground shared by classical and Bayesian methods to show that conventional analyses of neuroimaging data can be usefully extended within an empirical Bayesian framework. In particular, we formulate the procedures used in conventional data analysis in terms of hierarchical linear models and establish a connection between classical inference and parametric empirical Bayes (PEB) through covariance component estimation. This estimation is based on an expectation-maximization (EM) algorithm. The key point is that hierarchical models not only provide for appropriate inference at the highest level but that one can revisit lower levels suitably ...
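As an illustration of the covariance component estimation described above, here is a minimal EM sketch for the simplest two-level hierarchical linear model (a one-way random-effects layout with simulated data; the variances, group sizes, and starting values are arbitrary choices for illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
G, n = 200, 5                      # groups and observations per group
tau2_true, sig2_true = 2.0, 1.0    # between-group and within-group variances
b = rng.normal(0.0, np.sqrt(tau2_true), G)
y = b[:, None] + rng.normal(0.0, np.sqrt(sig2_true), (G, n))

tau2, sig2 = 1.0, 1.0              # starting values for the covariance components
for _ in range(200):
    # E-step: posterior mean and variance of each group effect given current variances
    shrink = n * tau2 / (n * tau2 + sig2)
    m = shrink * y.mean(axis=1)             # empirical-Bayes (shrunken) group means
    v = tau2 * sig2 / (n * tau2 + sig2)     # common posterior variance
    # M-step: re-estimate the two covariance components
    tau2 = np.mean(m ** 2) + v
    sig2 = np.mean((y - m[:, None]) ** 2) + v
```

The shrinkage factor in the E-step is exactly the "revisit lower levels" idea: group-level estimates borrow strength from the population according to the estimated variance components.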
Nonparametric Mixed Effects Models for Unequally Sampled Noisy Curves
Biometrics, 1998
"... We propose a method of analyzing collections of related curves in which the individual curves are modeled as spline functions with random coefficients. The method is applicable when the individual curves are sampled at variable and irregularly spaced points. This produces a low rank, low frequency a ..."
Abstract

Cited by 69 (2 self)
We propose a method of analyzing collections of related curves in which the individual curves are modeled as spline functions with random coefficients. The method is applicable when the individual curves are sampled at variable and irregularly spaced points. This produces a low-rank, low-frequency approximation to the covariance structure, which can be estimated naturally by the EM algorithm. Smooth curves for individual trajectories are constructed as BLUP estimates, combining data from that individual and the entire collection. This framework leads naturally to methods for examining the effects of covariates on the shapes of the curves. We use model selection techniques (AIC, BIC, and cross-validation) to select the number of breakpoints for the spline approximation. We believe that the methodology we propose provides a simple, flexible, and computationally efficient means of functional data analysis. We illustrate it with two sets of data.
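A minimal sketch of the BLUP construction described above, using a truncated-linear spline basis and variance components that are simply assumed known (the basis, knots, and parameter values are illustrative inventions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def spline_basis(t, knots):
    """Truncated-linear spline basis: [1, t, (t - k)_+ for each knot]."""
    cols = [np.ones_like(t), t] + [np.clip(t - k, 0.0, None) for k in knots]
    return np.column_stack(cols)

knots = [0.3, 0.6]
sig2 = 0.05                          # measurement-error variance (assumed known)
D = 0.2 * np.eye(4)                  # random-coefficient covariance (assumed known)
c = np.array([0.0, 1.0, -2.0, 1.5])  # population mean spline coefficients

# One individual observed at a few irregularly spaced time points
t_i = np.sort(rng.uniform(0.0, 1.0, 7))
d_i = rng.multivariate_normal(np.zeros(4), D)       # individual's true deviation
B_i = spline_basis(t_i, knots)
y_i = B_i @ (c + d_i) + rng.normal(0.0, np.sqrt(sig2), t_i.size)

# BLUP of the individual deviation, borrowing strength from the population
V_i = B_i @ D @ B_i.T + sig2 * np.eye(t_i.size)
d_hat = D @ B_i.T @ np.linalg.solve(V_i, y_i - B_i @ c)

# Smooth fitted curve on a fine grid combines the fixed and predicted random parts
grid = np.linspace(0.0, 1.0, 101)
fit = spline_basis(grid, knots) @ (c + d_hat)
```

In the paper the variance components would themselves be estimated by EM; here they are fixed so the BLUP step stands alone.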
Approximations to the Log-likelihood Function in the Nonlinear Mixed-Effects Model
Journal of Computational and Graphical Statistics, 1995
"... Introduction. Several different nonlinear mixed effects models and estimation methods for their parameters have been proposed in recent years (Sheiner and Beal, 1980; Mallet, Mentre, Steimer and Lokiek, 1988; Lindstrom and Bates, 1990; Vonesh and Carter, 1992; Davidian and Gallant, 1992; Wakefield, ..."
Abstract

Cited by 55 (4 self)
Introduction. Several different nonlinear mixed-effects models and estimation methods for their parameters have been proposed in recent years (Sheiner and Beal, 1980; Mallet, Mentre, Steimer and Lokiek, 1988; Lindstrom and Bates, 1990; Vonesh and Carter, 1992; Davidian and Gallant, 1992; Wakefield, Smith, Racine-Poon and Gelfand, 1994). We consider here a slightly modified version of the model proposed in Lindstrom and Bates (1990). This model can be viewed as a hierarchical model that in some ways generalizes both the linear mixed-effects model of Laird and Ware (1982) and the usual nonlinear model for independent data (Bates and Watts, 1988). In the first stage the jth observation on the ith cluster is modeled as $y_{ij} = f(\phi_{ij}; x_{ij}) + \epsilon_{ij}$, $i = 1, \dots, M$, $j = 1, \dots, n_i$ ...
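To make the first-stage model concrete, here is a small simulation sketch with a hypothetical exponential-decay response $f$ and cluster-specific parameters $\phi_i = \beta + b_i$ (all parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def f(phi, x):
    # Hypothetical nonlinear response with cluster-specific parameters phi
    return phi[0] * np.exp(-phi[1] * x)

M, n = 10, 8                         # clusters and observations per cluster
beta = np.array([5.0, 1.2])          # fixed effects
Psi = np.diag([0.25, 0.04])          # random-effects covariance
sigma = 0.1                          # within-cluster error standard deviation

x = np.linspace(0.0, 2.0, n)         # common design points x_ij
y = np.empty((M, n))
for i in range(M):
    b_i = rng.multivariate_normal(np.zeros(2), Psi)  # cluster-level deviation
    phi_i = beta + b_i                               # phi_ij = beta + b_i here
    y[i] = f(phi_i, x) + rng.normal(0.0, sigma, n)   # y_ij = f(phi_ij; x_ij) + eps_ij
```

The likelihood of such a model involves an intractable integral over the $b_i$; the approximations compared in the paper (Laplace, adaptive quadrature, etc.) are ways of evaluating it.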
Generalized linear models with functional predictors
Journal of the Royal Statistical Society, Series B, 2002
"... In this paper we present a technique for extending generalized linear models (GLM) to the situation where some of the predictor variables are observations from a curve or function. The technique is particularly useful when only fragments of each curve have been observed. We demonstrate, on both simu ..."
Abstract

Cited by 46 (6 self)
In this paper we present a technique for extending generalized linear models (GLM) to the situation where some of the predictor variables are observations from a curve or function. The technique is particularly useful when only fragments of each curve have been observed. We demonstrate, on both simulated and real world data sets, how this approach can be used to perform linear, logistic and censored regression with functional predictors. In addition, we show how functional principal components can be used to gain insight into the relationship between the response and functional predictors. Finally, we extend the methodology to apply GLM and principal components to standard missing data problems.
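One way to sketch the functional-principal-components idea is to estimate FPC scores via an SVD of the centered curve matrix and then fit an ordinary linear model on the leading scores (a simplified stand-in for the paper's GLM machinery; all data below are simulated and the two-mode structure is an assumption of the toy example):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 100, 50
t = np.linspace(0.0, 1.0, T)

# Simulated curves: two principal modes of variation plus measurement noise
s1 = rng.normal(0.0, 2.0, N)          # scores on the dominant mode
s2 = rng.normal(0.0, 1.0, N)
X = (s1[:, None] * np.sin(2 * np.pi * t)
     + s2[:, None] * np.cos(2 * np.pi * t)
     + rng.normal(0.0, 0.1, (N, T)))

# Scalar response depends on the curves only through the first score
y = 1.5 * s1 + rng.normal(0.0, 0.5, N)

# Functional PCA via the SVD of the centered curve matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]             # estimated scores on the first two FPCs

# GLM step: here simply an ordinary linear model on the FPC scores
Z = np.column_stack([np.ones(N), scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

Replacing the least-squares step with logistic or censored regression on the same scores gives the other cases mentioned in the abstract.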
Restricted maximum likelihood estimation for animal models using derivatives of the likelihood
1996
"... A Restricted Maximum Likelihood procedure is described to estimate variance components for a univariate mixed model with two random factors. An EMtype algorithm is presented with a reparameterisation to speed up the rate of convergence. Computing strategies are outlined for models common to the ana ..."
Abstract

Cited by 39 (21 self)
A restricted maximum likelihood (REML) procedure is described to estimate variance components for a univariate mixed model with two random factors. An EM-type algorithm is presented with a reparameterisation to speed up the rate of convergence. Computing strategies are outlined for models common to the analysis of animal breeding data, allowing for both a nested and a cross-classified design of the two random factors. Two special cases are considered: firstly, the total number of levels of fixed effects is small compared to the number of levels of both random factors; secondly, one fixed effect with a large number of levels is to be fitted in addition to other fixed effects with few levels. A small numerical example is given to illustrate details.
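The simplest case of REML's correction for estimated fixed effects is the fixed-effects-only linear model, where REML divides the residual sum of squares by the residual degrees of freedom rather than by n (a toy sketch with simulated data, not the animal-model algorithm described in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 30, 4
X = rng.normal(size=(n, p))           # fixed-effects design matrix
beta = rng.normal(size=p)
y = X @ beta + rng.normal(0.0, 2.0, n)

bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ bhat) ** 2)
sigma2_ml = rss / n                   # ML estimate: biased downward
sigma2_reml = rss / (n - p)           # REML accounts for the p estimated fixed effects
```

With random factors added, the same correction is applied implicitly through the restricted likelihood, which is what the paper's derivative-based algorithms maximize.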
Wavelet-based functional mixed models
Journal of the Royal Statistical Society, Series B, 2006
"... Summary. Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framew ..."
Abstract

Cited by 37 (10 self)
Summary. Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed- and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the nonlinear shrinkage prior that is imposed on the fixed effects' wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks.
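The adaptive-regularization idea can be caricatured with an orthonormal Haar transform and soft thresholding of the detail coefficients, so that large (feature) coefficients survive while small (noise) coefficients are shrunk to zero. This frequentist thresholding sketch stands in for the paper's Bayesian shrinkage prior, and the signal and noise level are invented:

```python
import numpy as np

def haar_dwt(x):
    """Orthonormal Haar decomposition of a length-2^k signal (fine to coarse)."""
    a, details = np.asarray(x, float), []
    while a.size > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2))  # wavelet (detail) part
        a = (a[0::2] + a[1::2]) / np.sqrt(2)              # scaling (smooth) part
    return details + [a]

def haar_idwt(coeffs):
    """Exact inverse of haar_dwt."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(6)
t = np.arange(256) / 256
truth = np.where(t < 0.25, 0.0, np.where(t < 0.5, 2.0, -1.0))  # signal with jumps
y = truth + rng.normal(0.0, 0.3, t.size)

coeffs = haar_dwt(y)
lam = 0.3 * np.sqrt(2 * np.log(t.size))   # universal threshold, noise sd assumed known
shrunk = [np.sign(d) * np.clip(np.abs(d) - lam, 0.0, None) for d in coeffs[:-1]]
denoised = haar_idwt(shrunk + [coeffs[-1]])
```

Because shrinkage acts coefficient by coefficient, sharp local features are preserved while flat regions are smoothed, which is the behaviour the abstract describes for peak-like data.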
Functional linear discriminant analysis for irregularly sampled curves
Journal of the Royal Statistical Society, Series B, Methodological, 2001
"... We introduce a technique for extending the classical method of Linear Discriminant Analysis to data sets where the predictor variables are curves or functions. This procedure, which we call functionallinear discriminant analysis (FLDA), is particularly useful when only fragments of the curves are ob ..."
Abstract

Cited by 36 (7 self)
We introduce a technique for extending the classical method of Linear Discriminant Analysis to data sets where the predictor variables are curves or functions. This procedure, which we call functional linear discriminant analysis (FLDA), is particularly useful when only fragments of the curves are observed. All the techniques associated with LDA can be extended for use with FLDA. In particular, FLDA can be used to produce classifications on new (test) curves, give an estimate of the discriminant function between classes, and provide a one- or two-dimensional pictorial representation of a set of curves. We also extend this procedure to provide generalizations of quadratic and regularized discriminant analysis.
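A minimal FLDA-flavoured sketch: represent each fully observed curve by its least-squares coefficients in a small polynomial basis, then apply classical Fisher LDA to those coefficients. The basis, class means, and noise levels are invented, and the paper's method additionally handles fragmentary curves, which this sketch does not:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 30
t = np.linspace(0.0, 1.0, T)
B = np.column_stack([np.ones(T), t, t ** 2])   # simple polynomial basis

# Two classes of curves differing in their mean basis coefficients
mean0 = np.array([0.0, 1.0, 0.0])
mean1 = np.array([0.5, 1.0, -0.5])

def simulate(mean, n):
    coefs = mean + rng.normal(0.0, 0.2, (n, 3))
    return coefs @ B.T + rng.normal(0.0, 0.1, (n, T))

X0, X1 = simulate(mean0, 40), simulate(mean1, 40)

def basis_coefs(X):
    # Project each observed curve onto the basis (least squares per curve)
    return np.linalg.lstsq(B, X.T, rcond=None)[0].T

C0, C1 = basis_coefs(X0), basis_coefs(X1)
m0, m1 = C0.mean(axis=0), C1.mean(axis=0)
Sw = np.cov(np.vstack([C0 - m0, C1 - m1]).T)   # pooled within-class covariance
w = np.linalg.solve(Sw, m1 - m0)               # Fisher discriminant direction
thresh = w @ (m0 + m1) / 2

pred0 = C0 @ w > thresh                        # should mostly be False
pred1 = C1 @ w > thresh                        # should mostly be True
acc = (np.sum(~pred0) + np.sum(pred1)) / 80
```

The discriminant direction w, mapped back through the basis as B @ w, gives a curve-valued discriminant function of the kind the abstract mentions.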
Parameter expansion to accelerate EM: The PX-EM algorithm
1998
"... The EM algorithm and its extensions are popular tools for modal estimation but are often criticised for their slow convergence. We propose a new method that can often make EM much faster. The intuitive idea is to use a 'covariance adjustment ' to correct the analysis of the M step, capitalising on e ..."
Abstract

Cited by 35 (7 self)
The EM algorithm and its extensions are popular tools for modal estimation but are often criticised for their slow convergence. We propose a new method that can often make EM much faster. The intuitive idea is to use a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data. The way we accomplish this is by parameter expansion; we expand the complete-data model while preserving the observed-data model and use the expanded complete-data model to generate EM. This parameter-expanded EM (PX-EM) algorithm shares the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis. The PX-EM algorithm is illustrated for the multivariate t distribution, a random effects model, factor analysis, probit regression and a Poisson imaging model.
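For the t-distribution example, PX-EM has a well-known closed form: the E-step computes precision weights, and the expanded M-step divides the scale update by the sum of the weights rather than by n. A sketch with simulated data (location, scale, degrees of freedom, and iteration count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
nu = 4.0                                        # degrees of freedom, treated as known
y = 2.0 + 1.5 * rng.standard_t(nu, size=500)    # t data: location 2, scale 1.5

mu, sig2 = 0.0, 1.0                             # starting values
for _ in range(100):
    # E-step: expected precision weights from the imputed complete data
    w = (nu + 1) / (nu + (y - mu) ** 2 / sig2)
    # PX-EM M-step: the expanded model leads to dividing the scale update
    # by sum(w) instead of n, which is what accelerates convergence
    mu = np.sum(w * y) / np.sum(w)
    sig2 = np.sum(w * (y - mu) ** 2) / np.sum(w)
```

Ordinary EM differs only in dividing the sig2 update by n; both share the same fixed point, but the PX-EM update takes larger, more efficient steps.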
Bayesian Deviance, the Effective Number of Parameters, and the Comparison of Arbitrarily Complex Models
1998
"... We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the loglikelihood under each model, from which we derive measures of fit and complexity (the effective number of p ..."
Abstract

Cited by 28 (7 self)
We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. We follow Dempster in examining the posterior distribution of the log-likelihood under each model, from which we derive measures of fit and complexity (the effective number of parameters). These may be combined into a Deviance Information Criterion (DIC), which is shown to have an approximate decision-theoretic justification. Analytic and asymptotic identities reveal the measure of complexity to be a generalisation of a wide range of previous suggestions, with particular reference to the neural network literature. The contributions of individual observations to fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. The procedure is illustrated in a number of examples, and throughout it is emphasised that the required quantities are trivial to compute in a Markov chain Monte Carlo analysis, and require no analytic work for new...
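Given posterior draws, the computation really is trivial: the effective number of parameters p_D is the posterior mean deviance minus the deviance at the posterior mean, and DIC is their sum added to the mean deviance. A toy sketch for a one-parameter normal model, using exact posterior draws as a stand-in for MCMC output (the data and sample sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(9)
y = rng.normal(1.0, 1.0, 50)           # data from the assumed model y_i ~ N(theta, 1)

# Stand-in for MCMC output: exact posterior draws under a flat prior on theta
theta = rng.normal(y.mean(), 1.0 / np.sqrt(y.size), 4000)

def deviance(th):
    """D(theta) = -2 log p(y | theta), dropping the additive constant."""
    return np.sum((y - th) ** 2)

D = np.array([deviance(th) for th in theta])
D_bar = D.mean()                        # posterior mean deviance: measure of fit
p_D = D_bar - deviance(theta.mean())    # effective number of parameters
DIC = D_bar + p_D
```

For this one-parameter model p_D should come out close to 1, matching the identity that p_D generalizes a simple parameter count.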