Results 1–10 of 33
Nonlinear Models for Repeated Measurement Data
, 1995
Abstract

Cited by 157 (4 self)
Nonlinear mixed effects models for data in the form of continuous, repeated measurements on each of a number of individuals, also known as hierarchical nonlinear models, are a popular platform for analysis when interest focuses on individual-specific characteristics. This framework first enjoyed widespread attention within the statistical research community in the late 1980s, and the 1990s saw vigorous development of new methodological and computational techniques for these models, the emergence of general-purpose software, and broad application of the models in numerous substantive fields. This article presents an overview of the formulation, interpretation, and implementation of nonlinear mixed effects models and surveys recent advances and applications.
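The individual-specific structure described above can be sketched in a few lines. The decay-curve model, all numeric constants, and the crude two-stage estimator below are illustrative assumptions for this sketch, not examples taken from the article:

```python
import math
import random

random.seed(1)

# Hierarchical setup: each individual i has its own rate k_i drawn from a
# population distribution; we observe a noisy log-response at several times.
pop_log_rate_mean = math.log(0.5)   # population mean of log k (illustrative)
pop_log_rate_sd = 0.3               # between-individual variability
noise_sd = 0.1
times = [0.5, 1.0, 2.0, 4.0]

def simulate_individual():
    """log y(t) = -k_i * t + noise, with k_i individual-specific."""
    k_i = math.exp(random.gauss(pop_log_rate_mean, pop_log_rate_sd))
    return [-k_i * t + random.gauss(0.0, noise_sd) for t in times]

def estimate_k(log_y):
    """Per-individual least-squares slope through the origin: stage one of
    a crude two-stage analysis (a true NLME fit instead pools information
    across individuals when estimating each k_i)."""
    num = sum(-t * v for t, v in zip(times, log_y))
    den = sum(t * t for t in times)
    return num / den

est_ks = [estimate_k(simulate_individual()) for _ in range(500)]
mean_est = sum(est_ks) / len(est_ks)
print(round(mean_est, 3))  # close to the population mean rate
```

The spread of the `est_ks` values mixes genuine between-individual variation with estimation noise; separating those two sources is precisely what the mixed-effects formulation is for.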
Prior distributions for variance parameters in hierarchical models
 Bayesian Analysis
, 2006
Abstract

Cited by 149 (13 self)
Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of “noninformative” prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-t family when the number of groups is small and in other settings where a weakly informative prior is desired.
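A minimal sketch of the half-t density on a standard deviation parameter, written from the standard Student-t formula (the default `nu` and `scale` values are illustrative, not the paper's recommendations):

```python
import math

def half_t_pdf(sigma, nu=4.0, scale=1.0):
    """Density of a half-t prior on a standard deviation sigma >= 0.

    This is twice a Student-t density restricted to the positive axis;
    nu = 1 gives the half-Cauchy special case.
    """
    if sigma < 0:
        return 0.0
    z = sigma / scale
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return 2.0 * c * (1 + z * z / nu) ** (-(nu + 1) / 2) / scale

# Unlike an inverse-gamma(eps, eps) prior on the variance, the half-t
# density is finite and nonzero at sigma = 0, so it neither piles mass
# near zero nor rules out small hierarchical standard deviations.
print(half_t_pdf(0.0))

# Midpoint-rule check that the density integrates to (approximately) 1.
total = sum(half_t_pdf(0.005 + 0.01 * i) * 0.01 for i in range(100000))
print(round(total, 3))
```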
The Interplay of Bayesian and Frequentist Analysis
 Statist. Sci
, 2004
Abstract

Cited by 27 (0 self)
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues. Key words and phrases: Admissibility; Bayesian model checking; conditional frequentist; confidence intervals; consistency; coverage; design; hierarchical models; nonparametric
Of beauty, sex, and power: statistical challenges in estimating small effects
, 2007
Abstract

Cited by 14 (13 self)
How do we interpret findings that are intriguing, potentially important, but not statistically significant? We discuss in the context of a series of papers in the Journal of Theoretical Biology that reported evidence that beautiful parents have more daughters, violent men have more sons, and other sex-ratio patterns (Kanazawa, 2005, 2006, 2007). These papers have been shown to have statistical errors, but the more general research questions remain. From a classical statistical perspective, these studies have insufficient power to detect the magnitudes of effects (on the order of 1 percentage point) that could be expected based on earlier studies of sex ratios. The anticipated small effects can also be handled using a Bayesian prior distribution. These concerns are relevant to other studies of small effects and also to the reporting of such studies.
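The power problem for an effect of about 1 percentage point can be made concrete with a normal-approximation calculation. The baseline proportion and sample sizes below are illustrative choices for the sketch, not figures from the paper:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(n_per_group, delta=0.01, p=0.51, alpha_z=1.96):
    """Approximate power of a two-sided two-sample test of proportions
    for a true difference delta, using the normal approximation."""
    se = math.sqrt(2 * p * (1 - p) / n_per_group)
    z = delta / se
    return normal_cdf(z - alpha_z) + normal_cdf(-z - alpha_z)

# Power to detect a 1-percentage-point difference at various sample sizes:
for n in (1500, 10000, 100000):
    print(n, round(power_two_proportions(n), 3))
```

With a couple of thousand observations per group the power is barely above the 5% false-positive rate, which is why apparent effects of this size in modest samples are so likely to be noise.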
Transformed and parameter-expanded Gibbs samplers for multilevel linear and generalized linear models
, 2004
Abstract

Cited by 8 (4 self)
Hierarchical linear and generalized linear models can be fit using Gibbs samplers and Metropolis algorithms; these models, however, often have many parameters, and convergence of the seemingly most natural Gibbs and Metropolis algorithms can sometimes be slow. We examine solutions that involve reparameterization and overparameterization. We begin with parameter expansion using working parameters, a strategy developed for the EM algorithm by Meng and van Dyk (1997) and Liu, Rubin, and Wu (1998). This strategy can lead to algorithms that are much less susceptible to becoming stuck near zero values of the variance parameters than are more standard algorithms. Second, we consider a simple rotation of the regression coefficients based on an estimate of their posterior covariance matrix. This leads to a Gibbs algorithm based on updating the transformed parameters one at a time or a Metropolis algorithm with vector jumps; either of these algorithms can perform much better (in terms of total CPU time) than the two standard algorithms: one-at-a-time updating of untransformed parameters or vector updating using a linear regression at each step. We present an innovative evaluation of the algorithms in terms of how quickly they can get away from remote areas of parameter space, along with some more standard evaluation of computation and convergence speeds. We illustrate our methods with examples from our applied work. Our ultimate goal is to develop a fast and reliable method for fitting a hierarchical linear model as easily as one can now fit a nonhierarchical model, and to increase understanding of Gibbs samplers for hierarchical models in general. Keywords: Bayesian computation, blessing of dimensionality, Markov chain Monte Carlo, multilevel modeling, mixed effects models, PX-EM algorithm, random effects regression, redundant
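The parameter-expansion idea can be sketched on the simplest hierarchical normal model. In the redundant parameterization below, theta_j = mu + alpha * eta_j, so only |alpha| * sigma_eta is identified; the extra working parameter alpha lets the chain move even when the group-level standard deviation is near zero. The model, priors, data, and all constants are illustrative assumptions for this sketch, not the paper's examples:

```python
import math
import random

random.seed(0)

# Simulated grouped data (illustrative): y_j ~ N(theta_j, s^2),
# theta_j ~ N(mu, sigma_theta^2), with s known.
J, mu_true, sigma_theta_true, s = 20, 5.0, 3.0, 2.0
theta_true = [random.gauss(mu_true, sigma_theta_true) for _ in range(J)]
y = [random.gauss(t, s) for t in theta_true]

def px_gibbs(y, n_iter=2000):
    """Parameter-expanded Gibbs sampler, theta_j = mu + alpha * eta_j,
    eta_j ~ N(0, sigma_eta^2); flat priors on mu and alpha, and the
    usual 1/sigma_eta^2 default prior on the working variance."""
    n = len(y)
    mu, alpha, sigma_eta2 = 0.0, 1.0, 1.0
    eta = [0.0] * n
    draws = []
    for _ in range(n_iter):
        # eta_j | rest: normal, combining N(0, sigma_eta2) with the likelihood
        prec = 1.0 / sigma_eta2 + alpha * alpha / (s * s)
        for j in range(n):
            mean = (alpha * (y[j] - mu) / (s * s)) / prec
            eta[j] = random.gauss(mean, math.sqrt(1.0 / prec))
        # mu | rest: normal around the mean of (y_j - alpha * eta_j)
        resid = [y[j] - alpha * eta[j] for j in range(n)]
        mu = random.gauss(sum(resid) / n, s / math.sqrt(n))
        # alpha | rest: regression of (y - mu) on eta, flat prior
        sxx = sum(e * e for e in eta) + 1e-12
        sxy = sum(e * (y[j] - mu) for j, e in enumerate(eta))
        alpha = random.gauss(sxy / sxx, s / math.sqrt(sxx))
        # sigma_eta^2 | rest: inverse-gamma(n/2, sum(eta^2)/2)
        sigma_eta2 = (sum(e * e for e in eta) / 2.0) / random.gammavariate(n / 2.0, 1.0)
        # The identified group-level sd is |alpha| * sigma_eta.
        draws.append((mu, abs(alpha) * math.sqrt(sigma_eta2)))
    return draws

draws = px_gibbs(y)
post_mu = sum(d[0] for d in draws[500:]) / len(draws[500:])
post_sigma_theta = sum(d[1] for d in draws[500:]) / len(draws[500:])
print(round(post_mu, 2), round(post_sigma_theta, 2))
```

In the standard (non-expanded) sampler, a draw of sigma_theta near zero shrinks all theta_j together and the chain can take many iterations to escape; the alpha update above rescales all group effects in one move.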
Bayesian measures of explained variance and pooling in multilevel (hierarchical) models
 Technometrics
, 2006
Abstract

Cited by 6 (0 self)
Explained variance (R²) is a familiar summary of the fit of a linear regression and has been generalized in various ways to multilevel (hierarchical) models. The multilevel models we consider in this paper are characterized by hierarchical data structures in which individuals are grouped into units (which themselves might be further grouped into larger units), and there are variables measured on individuals and each grouping unit. The models are based on regression relationships at different levels, with the first level corresponding to the individual data, and subsequent levels corresponding to between-group regressions of individual predictor effects on grouping unit variables. We present an approach to defining R² at each level of the multilevel model, rather than attempting to create a single summary measure of fit. Our method is based on comparing variances in a single fitted model rather than comparing to a null model. In simple regression, our measure generalizes the classical adjusted R². We also discuss a related variance comparison to summarize the degree to which estimates at each level of the model are pooled together based on the level-specific regression relationship, rather than estimated separately. This pooling factor is related to the concept of shrinkage in simple hierarchical models. We illustrate the methods on a dataset of radon in houses within counties using a series of models ranging from a simple linear regression model to a multilevel varying-intercept, varying-slope model.
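The variance-comparison idea can be sketched as a point-estimate version: at a given level, R² is one minus the ratio of residual variance to the variance of the data at that level. This is a simplification of the posterior-averaged definition the paper develops, and the toy data are invented for illustration:

```python
def variance(xs):
    """Sample variance (denominator n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def explained_variance(y, y_hat):
    """Level-specific R^2: 1 - Var(residuals) / Var(data at this level).

    A point-estimate sketch; the paper's definition averages these
    variances over the posterior distribution of a single fitted model.
    """
    resid = [a - b for a, b in zip(y, y_hat)]
    return 1.0 - variance(resid) / variance(y)

# Toy level-1 data: a fitted line explaining most of the variation.
y     = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8]
y_hat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(round(explained_variance(y, y_hat), 3))
```

The same comparison applied to the group-level regression (with group-level estimates in place of `y`) gives the higher-level R², and an analogous variance ratio yields the pooling factor the abstract mentions.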
Using redundant parameterizations to fit hierarchical models
 Journal of Computational and Graphical Statistics
, 2008
Abstract

Cited by 6 (0 self)
Hierarchical linear and generalized linear models can be fit using Gibbs samplers and Metropolis algorithms; these models, however, often have many parameters, and convergence of the seemingly most natural Gibbs and Metropolis algorithms can sometimes be slow. We examine solutions that involve reparameterization and overparameterization. We begin with parameter expansion using working parameters, a strategy developed for the EM algorithm. This strategy can lead to algorithms that are much less susceptible to becoming stuck near zero values of the variance parameters than are more standard algorithms. Second, we consider a simple rotation of the regression coefficients based on an estimate of their posterior covariance matrix. This leads to a Gibbs algorithm based on updating the transformed parameters one at a time or a Metropolis algorithm with vector jumps; either of these algorithms can perform much better (in terms of total CPU time) than the two standard algorithms: one-at-a-time updating of untransformed parameters or vector updating using a linear regression at each step. We present an innovative evaluation of the algorithms in terms of how quickly they can get away from remote areas of parameter space, along with some
An Empirical Evaluation of Chernoff Faces, Star Glyphs, and Spatial Visualizations for Binary Data
In APVis ’03: Proceedings of the Asia-Pacific Symposium on Information Visualisation
, 2003
Abstract

Cited by 6 (0 self)
Data visualization has the potential to assist humans in analyzing and comprehending large volumes of data, and to detect patterns, clusters and outliers that are not obvious using non-graphical forms of presentation. For this reason, data visualizations have an important role to play in a diverse range of applied problems, including data exploration and mining, information retrieval, and intelligence analysis.
Bayesian Latent Semantic Analysis of Multimedia Databases
, 2001
Abstract

Cited by 5 (2 self)
We present a Bayesian mixture model for probabilistic latent semantic analysis of documents with images and text. The Bayesian perspective allows us to perform automatic regularisation to obtain sparser and more coherent clustering models. It also enables us to encode a priori knowledge, such as word and image preferences. The learnt model can be used for browsing digital databases, information retrieval with image and/or text queries, image annotation (adding words to an image) and text illustration (adding images to a text).
Estimating size and composition of biological communities by modeling the occurrence of species
 Journal of the American Statistical Association
, 2005
Abstract

Cited by 5 (0 self)
We develop a model that uses repeated observations of a biological community to estimate the number and composition of species in the community. Estimators of community-level attributes are constructed from model-based estimators of occurrence of individual species that incorporate imperfect detection of individuals. Data from the North American Breeding Bird Survey are analyzed to illustrate the variety of ecologically important quantities that are easily constructed and estimated using our model-based estimators of species occurrence. In particular, we compute site-specific estimates of species richness that honor classical notions of species-area relationships. We suggest extensions of our model to estimate maps of occurrence of individual species and to compute inferences related to the temporal and spatial dynamics of biological communities.
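The core imperfect-detection correction can be sketched with a simulation: the raw fraction of sites where a species is ever detected understates occupancy, and dividing by the probability of at least one detection given presence recovers it. The occupancy and detection probabilities below are invented for the sketch, and real analyses (like the paper's) estimate detection jointly from the repeated visits rather than assuming it known:

```python
import random

random.seed(42)

# Illustrative constants: true occupancy psi, per-visit detection prob p.
psi, p, n_sites, n_visits = 0.6, 0.3, 5000, 4

detected = 0
for _ in range(n_sites):
    present = random.random() < psi
    seen = present and any(random.random() < p for _ in range(n_visits))
    detected += seen

naive = detected / n_sites
# P(detected at least once | present) = 1 - (1 - p)^T over T repeat visits.
p_star = 1 - (1 - p) ** n_visits
corrected = naive / p_star
print(round(naive, 3), round(corrected, 3))
```

Summing such corrected occurrence probabilities over the species list is the basic route from species-level occupancy to the community-level quantities (e.g. site-specific species richness) the abstract describes.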