Results 1–10 of 38
Towards a digital body: the virtual arm illusion
Front. Hum. Neurosci., 2008
Abstract

Cited by 7 (2 self)
The integration of the human brain with computers is an interesting new area of applied neuroscience, where one application is replacement of a person’s real body by a virtual representation. Here we demonstrate that a virtual limb can be made to feel part of your body if appropriate multisensory correlations are provided. We report an illusion that is invoked through tactile stimulation on a person’s hidden real right hand with synchronous virtual visual stimulation on an aligned 3D stereo virtual arm projecting horizontally out of their shoulder. An experiment with 21 male participants showed displacement of ownership towards the virtual hand, as illustrated by questionnaire responses and proprioceptive drift. A control experiment with asynchronous tapping, carried out with a different set of 20 male participants, did not produce the illusion. After 5 min of stimulation the virtual arm rotated. Evidence suggests that the extent of the illusion was also correlated with the degree of muscle activity onset in the right arm, as measured by EMG during the period in which the arm was rotating, for the synchronous but not the asynchronous condition. A completely virtual object can therefore be experienced as part of one’s self, which opens up the possibility that an entire virtual body could be felt as one’s own in future virtual reality applications or online games, and be an invaluable tool for understanding the brain mechanisms underlying body ownership.
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions
2000
Abstract

Cited by 5 (0 self)
The well-developed theory of exponential families of distributions is applied to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. These models are powerful tools for many forms of parametric data smoothing and are particularly well-suited to problems in which there is little or no theory to guide a choice of probability models, e.g., smoothing a distribution to eliminate roughness and zero frequencies in order to equate scores from different tests. Attention is given to efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and to computationally efficient methods for obtaining the asymptotic standard errors of the fitted frequencies and proportions. We discuss tools that can be used to diagnose the quality of the fitted frequencies for both the univariate and the bivariate cases. Five examples, using real data, are used to illustrate the methods of this paper.
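The univariate fitting procedure described in this abstract can be sketched as follows. This is a minimal illustration only: the polynomial score functions, the function name `loglinear_smooth`, and the example histogram are assumptions, not taken from the paper.

```python
import numpy as np

def loglinear_smooth(counts, degree=3, iters=50, tol=1e-10):
    """Fit a loglinear (exponential-family) model to a discrete score
    histogram by Newton's method, using polynomial score functions
    (an assumed, typical choice). Returns fitted frequencies and the
    parameter estimates."""
    n = np.asarray(counts, dtype=float)
    N = n.sum()
    x = np.arange(len(n), dtype=float)
    # scale scores to [0, 1] so the powers stay well conditioned
    B = np.column_stack([(x / x.max()) ** k for k in range(1, degree + 1)])
    beta = np.zeros(degree)
    for _ in range(iters):
        eta = B @ beta
        p = np.exp(eta - eta.max())
        p /= p.sum()
        m = N * p                                        # fitted frequencies
        grad = B.T @ (n - m)                             # score vector
        H = B.T @ (np.diag(m) - np.outer(m, m) / N) @ B  # information matrix
        step = np.linalg.solve(H, grad)
        norm = np.linalg.norm(step)
        if norm > 5.0:                                   # damp very large steps
            step *= 5.0 / norm
        beta += step
        if norm < tol:
            break
    return m, beta

# smooth a rough histogram that has zero cells
raw = np.array([1, 0, 3, 6, 14, 20, 22, 15, 9, 4, 0, 1])
fitted, _ = loglinear_smooth(raw)
```

By construction the fitted frequencies are strictly positive and preserve the total count, which is what makes this kind of smoothing useful before equating scores across tests.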
Negative Binomial Loglinear Mixed Models
Abstract

Cited by 4 (1 self)
The Poisson loglinear model is a common choice for explaining variability in counts. However, in many practical circumstances the restriction that the mean and variance are equal is not realistic. Overdispersion with respect to the Poisson distribution can be modeled explicitly by integrating with respect to a mixture distribution, and use of the conjugate gamma mixing distribution leads to a negative binomial loglinear model. This paper extends the negative binomial loglinear model to the case of dependent counts, where dependence among the counts is handled by including linear combinations of random effects in the linear predictor. If we assume that the vector of random effects is multivariate normal, then arbitrary forms of dependence can be modeled by appropriate specification of the covariance structure. Although the likelihood function for the resulting model is not tractable, maximum likelihood estimates (and standard errors) can be found using the NLMIXED procedure in SAS or, in more complicated examples, using a Monte Carlo EM algorithm. An alternate approach is to leave the random effects completely unspecified and attempt to estimate them using nonparametric maximum likelihood. The methodologies are illustrated with several examples.

Key words and phrases: Monte Carlo EM; NLMIXED procedure; Nonparametric maximum likelihood; Overdispersion; Random effects.

Booth and Hobert's research was supported by NSF Grant DMS-0072827. Casella's research was supported by NSF Grant DMS-9971586.
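The gamma-mixing construction described above can be checked numerically. A minimal sketch, with illustrative parameter values, showing that a gamma-mixed Poisson obeys the negative binomial mean-variance relation Var(y) = mu + mu²/a:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, a = 5.0, 2.0          # illustrative mean and gamma shape parameter
n = 200_000

# gamma mixing distribution with mean 1 and shape a:
# E[y] = mu, Var[y] = mu + mu**2 / a  (negative binomial)
lam = mu * rng.gamma(shape=a, scale=1.0 / a, size=n)
y = rng.poisson(lam)

print(y.mean())   # close to mu = 5
print(y.var())    # close to mu + mu**2 / a = 17.5, well above 5
```

The sample variance far exceeds the sample mean, which is exactly the overdispersion that the plain Poisson loglinear model cannot capture.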
Statistical Methods for the Blood Beryllium Lymphocyte Proliferation Test
1996
Abstract

Cited by 3 (1 self)
The blood beryllium lymphocyte proliferation test (BeLPT) is a modification of the standard lymphocyte proliferation test that is used to identify persons who may have chronic beryllium disease. A major problem in the interpretation of BeLPT test results is outlying data values among the replicate well counts (≈ 7%). A loglinear regression model is used to describe the expected well counts for each set of Be exposure conditions, and the variance of the well counts is proportional to the square of the expected count. Two outlier resistant regression methods are used to estimate stimulation indices (SIs) and the coefficient of variation. The first approach uses least absolute values (LAV) on the log of the well counts as a method for estimation; the second approach uses a resistant regression version of maximum quasilikelihood estimation. A major advantage of these resistant methods is that they make it unnecessary to identify and delete outliers. These two new methods for the statist...
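The outlier resistance of LAV estimation can be illustrated in the simplest case: for replicate wells under a single exposure condition, LAV on the log counts reduces to medians. The well counts below are hypothetical, and this intercept-only case is a simplification of the paper's full regression setting.

```python
import numpy as np

# Hypothetical replicate well counts for one subject; one control
# well is a gross outlier of the kind the abstract describes.
control = np.array([1200.0, 1350.0, 1180.0, 1275.0, 9800.0])
treated = np.array([5100.0, 4800.0, 5400.0, 4950.0, 5250.0])

# With an intercept-only design, LAV on the log counts reduces to the
# median, so the outlying well barely moves the stimulation index (SI).
si_lav = np.exp(np.median(np.log(treated)) - np.median(np.log(control)))
# Least squares on log counts (i.e., geometric means), for contrast:
si_ls = np.exp(np.mean(np.log(treated)) - np.mean(np.log(control)))

print(si_lav)   # 4.0: treated wells are about 4x the control median
print(si_ls)    # pulled well below 4 by the single outlying control well
```

The resistant estimate recovers the SI of the clean wells without anyone having to spot and delete the outlier by hand.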
Hierarchical Models for Permutations: Analysis of Auto Racing Results
Journal of the American Statistical Association, 2003
Abstract

Cited by 3 (0 self)
The popularity of the sport of auto racing is increasing rapidly, but its fans remain less interested in statistics than the fans of other sports. In this paper, we propose a new class of models for permutations which closely resembles the behavior of auto racing results. We pose the model in a Bayesian hierarchical framework. This framework permits hierarchical specification and fully hierarchical estimation of interaction terms. The methodology is demonstrated using several rich datasets which consist of repeated rankings for a collection of drivers. Our models can potentially identify individuals who are racing in “minor league” divisions who have higher potential for competitive performance at higher levels. We also present evidence that one of the sport’s more controversial figures, Jeff Gordon, is a statistically dominant figure.
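The paper's hierarchical model is not reproduced here, but a standard rank-ordered (Plackett-Luce) likelihood sketches the basic idea of how per-driver ability parameters induce a distribution over finishing orders; the ability values below are illustrative.

```python
import numpy as np

def plackett_luce_logprob(order, ability):
    """Log-probability of a finishing order: drivers fill positions
    1, 2, ... with probability proportional to exp(ability) among
    those still on track. (A stand-in, not the paper's model.)"""
    w = np.exp(np.asarray(ability, dtype=float))
    remaining = list(order)
    logp = 0.0
    for driver in order:
        logp += np.log(w[driver]) - np.log(w[remaining].sum())
        remaining.remove(driver)
    return logp

# three drivers; driver 0 has the highest ability parameter
ability = [1.5, 0.0, -0.5]
best = plackett_luce_logprob([0, 1, 2], ability)
worst = plackett_luce_logprob([2, 1, 0], ability)
# the finishing order that matches the abilities is the more likely one
```

In a hierarchical version, the ability parameters themselves get priors (and interaction terms), which is what lets repeated race results share strength across drivers and divisions.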
Consistent Model Selection Based on Parameter Estimates
2002
Abstract

Cited by 2 (0 self)
We consider model selection based on estimators that are asymptotically normal. Such a method can be applied to the context of estimating equations, since a complete specification of the probability model or likelihood function is not required. We construct a cost function for the models in consideration, and show that the minimizer of the cost function is a consistent estimator of the model. Despite the absence of a likelihood function, the cost function is shown to be related to an approximate posterior probability conditional on the parameter estimates, which enables a Bayesian-type evaluation of all candidate models rather than the presentation of just one best choice. The proposed method is modular and easily adapted to different problems, since only one set of estimates of the parameters and asymptotic variance is needed as the input, which can be obtained from very different estimation techniques for very different models, both linear and nonlinear. We also show that by ranking Z-statistics, the scope of model searching can be reduced to achieve computing efficiency. We provide data analysis examples from two clinical trials and illustrate these variable selection techniques in the contexts of partial likelihood analysis and generalized estimating equations. A third example of used automobile prices illustrates an application of the methodology in selecting graphical models.
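The cost function itself is not reconstructed here, but the Z-statistic ranking step that shrinks the model search can be sketched; the variable names, estimates, and standard errors below are hypothetical.

```python
import numpy as np

# Hypothetical parameter estimates and asymptotic standard errors,
# e.g. from generalized estimating equations (names are made up).
names = ["age", "dose", "sex", "site", "bmi"]
beta = np.array([0.80, 1.90, 0.05, -0.30, 0.02])
se = np.array([0.20, 0.50, 0.25, 0.10, 0.40])

z = np.abs(beta / se)
ranked = sorted(zip(names, z), key=lambda t: -t[1])

# Candidate models are then searched along this ranking, so roughly p
# nested models are scored by the cost function instead of all 2**p subsets.
for name, zval in ranked:
    print(f"{name}: |Z| = {zval:.2f}")
```

Only the estimates and their asymptotic variances are needed as input, which is what makes the approach agnostic to how the fitting was done.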
Bayesian Approaches for Overdispersion in Generalized Linear Models
1998
Abstract

Cited by 2 (0 self)
Generalized linear models (GLM's) have been routinely used in statistical data analysis. The evolution of these models as well as details regarding model fitting, model checking and inference is thoroughly documented in McCullagh and Nelder (1989). However, in many applications, heterogeneity in the observed samples is too large to be explained by the simple variance function which is implicit in GLM's. To overcome this, several parametric and nonparametric approaches for creating overdispersed generalized linear models (OGLM's) were developed. In this article, we summarize recent approaches to OGLM's, with special emphasis given to the Bayesian framework. We also discuss computational aspects of Bayesian model fitting, model determination and inference through examples.

1 Introduction

Generalized linear models (GLM) are a standard class of models in contemporary statistical data analysis (McCullagh and Nelder 1989). The widely available GLIM software as well as S-Plus facilitate compu...
Bayesian Inference for Heterogeneous Event Counts
2000
Abstract

Cited by 1 (0 self)
This paper presents a handful of Bayesian tools one can use to model heterogeneous event counts.
Overdispersion Diagnostics for Generalized Linear Models
1993
Abstract

Cited by 1 (0 self)
Generalized linear models (GLMs) are simple, convenient models for count data, but they assume that the variance is a specified function of the mean. Although overdispersed GLMs allow more flexible mean-variance relationships, they are often not as simple to interpret nor as easy to fit as standard GLMs. This paper introduces a convexity plot, or C-plot for short, that detects overdispersion, as well as relative variance curves and relative variance tests that help to understand the nature of the overdispersion. Convexity plots sometimes detect overdispersion better than score tests, and relative variance curves and tests sometimes distinguish the source of the overdispersion better than score tests.

Keywords: Mixture, random coefficient, residuals, score tests, variance inflation

1 INTRODUCTION

Convenient generalized linear models (GLMs) for count data, such as logistic regression and loglinear Poisson regression, require the variance to be a known function of the mean. But count data ...
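The C-plot is not reconstructed here; a simpler Pearson dispersion check, under assumed simulation settings, illustrates the kind of mean-variance violation that such diagnostics are designed to detect.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n = 4.0, 50_000

poisson_y = rng.poisson(mu, size=n)                       # equidispersed
mixed_y = rng.poisson(rng.gamma(2.0, mu / 2.0, size=n))   # gamma-mixed, overdispersed

def dispersion(y, mu):
    """Pearson dispersion statistic for a (here known) Poisson mean mu:
    near 1 for Poisson data, above 1 under overdispersion."""
    return np.mean((y - mu) ** 2 / mu)

print(dispersion(poisson_y, mu))   # close to 1
print(dispersion(mixed_y, mu))     # close to 3: variance is mu + mu**2 / 2
```

A statistic like this flags that overdispersion exists; the paper's C-plot and relative variance curves go further and try to characterize where it comes from.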