Results 1–10 of 35
Global, voxel, and cluster tests, by theory and permutation, for a difference between two groups of structural MR images of the brain
 IEEE Transactions on Medical Imaging, 1999
Cited by 130 (16 self)
Abstract—We describe almost entirely automated procedures for estimation of global, voxel, and cluster-level statistics to test the null hypothesis of zero neuroanatomical difference between two groups of structural magnetic resonance imaging (MRI) data. Theoretical distributions under the null hypothesis are available for 1) global tissue class volumes; 2) standardized linear model [analysis of variance (ANOVA) and analysis of covariance (ANCOVA)] coefficients estimated at each voxel; and 3) the area of spatially connected clusters generated by applying an arbitrary threshold to a two-dimensional (2-D) map of normal statistics at voxel level. We describe novel methods for economically ascertaining probability distributions under the null hypothesis, with fewer assumptions, by permutation of the observed data. Nominal Type I error control by permutation testing is generally excellent, whereas theoretical distributions may be overconservative. Permutation has the additional advantage that it can be used to test any statistic of interest, such as the sum of suprathreshold voxel statistics in a cluster (or cluster mass), regardless of its theoretical tractability under the null hypothesis. These issues are illustrated by application to MRI data acquired from 18 adolescents with hyperkinetic disorder and 16 control subjects matched for age and gender. Index Terms—Brain, imaging/mapping, probability distributions, statistics.
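The permutation approach described in this abstract can be sketched in its simplest form: a two-sample test on the difference of means, where group labels are shuffled repeatedly to build the null distribution. The data and permutation count below are illustrative, not values from the study.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns the fraction of random relabelings whose absolute
    mean difference is at least as large as the observed one,
    i.e. a Monte Carlo p-value.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # permute the group labels
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Illustrative data: two groups with a clear shift.
a = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
b = [11.0, 11.2, 10.9, 11.1, 11.3, 10.8]
print(permutation_test(a, b))
```

The same loop works unchanged for any statistic of interest (e.g. a cluster-mass statistic), which is the flexibility the abstract emphasizes: only the line computing `diff` needs to change.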
Modelling selection harvesting in tropical rain forests
 Journal of Tropical Forest Science, 1989
Cited by 15 (14 self)
Long-term yield estimates for natural forests require a harvesting model to enable future yields to be estimated reliably. The model should predict the felled stems, the proportion of these which are merchantable, and any damage to the residual stand. Regression analysis was used to develop a model of current logging practice in the rain forests of north Queensland. Logistic functions predict the probability of any tree being marked for logging, the probability of a felled tree being merchantable, and the probability of any tree in the residual stand being damaged by logging. Important predictor variables included tree species and size, merchantable basal area, basal area logged, logging history, and topography. There was no evidence to suggest that soil type or site quality influenced current tree-marking practice. The approach is applicable to other mixed forest types managed for selection logging.
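A logistic function of the kind this abstract describes can be sketched as follows; the predictors (tree diameter, merchantable basal area) match the abstract, but the coefficients are invented for illustration and are not the fitted values from the paper.

```python
import math

def p_marked(dbh_cm, merch_basal_area, b0=-6.0, b1=0.08, b2=0.05):
    """Hypothetical logistic model for the probability that a tree
    is marked for logging, given its diameter at breast height (cm)
    and the stand's merchantable basal area (m^2/ha).

    Coefficients b0, b1, b2 are illustrative only.
    """
    eta = b0 + b1 * dbh_cm + b2 * merch_basal_area  # linear predictor
    return 1.0 / (1.0 + math.exp(-eta))             # logistic link

# Larger trees in richer stands are more likely to be marked.
print(p_marked(30, 10), p_marked(120, 10))
```

The paper fits three such functions (marking, merchantability, damage); each is the same logistic form with its own predictors and coefficients.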
Developing a Model for the Measurement of Social Inclusion and Social Capital in Regional Australia
 Social Indicators Research, 2006
Cited by 14 (2 self)
This paper describes an approach to the issue of selecting a measurement model that is based on a comprehensive framework for measurement consisting of four conceptual building blocks: the construct map, the items design, the outcome space, and the measurement model. Starting from this framework of building blocks, the measurement model selected must conform to the constraints imposed by the other three components. Specifically, to preserve the interpretability of the construct map, the models must preserve the order of items throughout the range of person locations, and must do so in a way that is consistent with the interpretational requirements of the map. In the case of item response modeling, this translates into selecting models that have a constant slope, i.e., they must be from the Rasch family. In the conclusion, some next steps in investigating these issues are discussed.
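The constant-slope constraint mentioned above can be illustrated with the basic Rasch model: because every item characteristic curve shares the same unit slope, the curves never cross, so the order of items is the same at every person location. Here `theta` is person ability and `difficulty` the item location.

```python
import math

def rasch_prob(theta, difficulty):
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - difficulty))).

    All items share the same (unit) slope, so for any two items the
    easier one has the higher success probability at every theta --
    the item-ordering property the construct map relies on.
    """
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

easy, hard = -1.0, 1.5
for theta in (-3.0, 0.0, 3.0):
    print(theta, rasch_prob(theta, easy), rasch_prob(theta, hard))
```

A two-parameter model with item-specific slopes would let these curves cross, which is exactly what the paper argues breaks the interpretability of the construct map.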
Minimum Bias with Generalized Linear Models
 Proceedings of the Casualty Actuarial Society 75, 1988
Cited by 12 (0 self)
The paper “Insurance Rates with Minimum Bias” by Robert A. Bailey [3] presents a methodology which is used by a large number of Canadian casualty actuaries to determine class and driving record differentials. In his paper, Bailey outlines four methods (two directly and two by reference to a previous paper by Bailey and Simon). No analysis has ever been presented of the applicability of these methods to Canadian data. Nor has any attempt been made within the Casualty Actuarial Society literature to augment Bailey’s discussion using other statistical approaches now familiar to members of the Society. This paper analyzes the four Bailey methodologies using Canadian data and then introduces five models using a modern statistical approach. (It should be noted that one of these statistical models turns out to be a reproduction of one of Bailey’s ...)
Comparison of logistic regression and linear regression in modeling percentage data
Cited by 6 (3 self)
Percentage is widely used to describe different results in food microbiology, e.g., probability of microbial growth, percent inactivated, and percent of positive samples. Four sets of percentage data (percent-growth-positive, germination extent, probability for one cell to grow, and maximum fraction of positive tubes) were obtained from our own experiments and the literature. These data were modeled using linear and logistic regression. Five methods were used to compare the goodness of fit of the two models: percentage of predictions closer to observations, range of the differences (predicted value minus observed value), deviation of the model, linear regression between the observed and predicted values, and bias and accuracy factors. Logistic regression was a better predictor of at least 78% of the observations in all four data sets. In all cases, the deviation of logistic models was much smaller. The linear correlation between observations and logistic predictions was always stronger. Validation (accomplished using part of one data set) also demonstrated that the logistic model was more accurate in predicting new data points. Bias and accuracy factors were found to be less informative when evaluating models developed for percentage data, since neither of these indices can compare predictions at zero. Model simplification for the logistic model was demonstrated with one data set. The simplified model was as powerful in making predictions as the full logistic model, and it also gave clearer insight into determining the key experimental factors.
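A minimal sketch of the comparison this abstract describes, on synthetic percentage data: a straight line is fitted to the raw percentages, while the logistic model is fitted by ordinary least squares on the logit scale (a deliberate simplification of full logistic regression). The data below are invented to follow a sigmoid trend, purely for illustration.

```python
import math

def ols(xs, ys):
    """Ordinary least squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def logit(p, eps=1e-6):
    p = min(max(p, eps), 1 - eps)  # clip so log is defined at 0 and 1
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic percentage data following a logistic trend.
xs = list(range(9))
ys = [inv_logit(-4 + x) for x in xs]

a_lin, b_lin = ols(xs, ys)                     # linear model on raw scale
a_log, b_log = ols(xs, [logit(y) for y in ys]) # logistic model via logit scale

def sse(preds):
    return sum((p - y) ** 2 for p, y in zip(preds, ys))

sse_linear = sse([a_lin + b_lin * x for x in xs])
sse_logistic = sse([inv_logit(a_log + b_log * x) for x in xs])
print(sse_linear, sse_logistic)
```

On sigmoid-shaped data the logistic fit dominates, mirroring the paper's finding; the linear model also predicts values outside [0, 1] at the extremes, which a logistic model cannot do.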
On the Variation in Size and Composition of Minke Whale (Balaenoptera acutorostrata) Forestomach Contents
 Journal of Northwest Atlantic Fishery Science, 1996
What Happens When a 1 × 1 × r Die is Rolled
 The American Statistician, 2003
Cited by 2 (0 self)
This paper examines the probabilities of outcomes from rolling dice with dimensions 1 × 1 × r for various values of r. Experiments were conducted by school students and university students. The results of the experiments are given and the probabilities are examined using a generalized linear model. Notes are also made about the value of the experiment in teaching each group of students.
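One natural generalized linear model for such data, in the spirit of the abstract, is a logistic link in log r for the probability that the die lands on one of its two square ends. The slope below is hypothetical (not a fitted value from the paper); the intercept is anchored so that a cube (r = 1) gives the fair value 1/3.

```python
import math

def p_end(r, b1=-1.7):
    """Hypothetical GLM for the probability that a 1 x 1 x r die
    lands on one of its two square ends: logistic link in log(r).

    b0 = logit(1/3) anchors a cube at the fair value (2 end faces
    out of 6); the slope b1 is illustrative only. b1 < 0 means a
    longer die is less likely to land on an end.
    """
    b0 = math.log(0.5)  # logit(1/3) = log((1/3)/(2/3))
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(r))))

for r in (1, 2, 4, 10):
    print(r, p_end(r))
```

Fitting b1 (and possibly b0) to the students' observed counts by maximum likelihood is exactly a binomial GLM; the sketch only shows the functional form such a fit would produce.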
Bayesian Posterior Comprehension via Message from Monte Carlo, 2003
Cited by 2 (2 self)
We discuss the problem of producing an epitome, or brief summary, of a Bayesian posterior distribution, and then investigate a general solution based on the Minimum Message Length (MML) principle. Clearly, the optimal criterion for choosing such an epitome is determined by the epitome's intended use. The interesting general case is where this use is unknown since, in order to be practical, the choice of epitome criterion becomes subjective. We identify a number of desirable properties that an epitome could have: facilitation of point estimation, human comprehension, and fast approximation of posterior expectations. We call these the properties of Bayesian Posterior Comprehension and show that the Minimum Message Length principle can be viewed as an epitome criterion that produces epitomes having these properties. We then present and extend Message from Monte Carlo as a means for constructing instantaneous Minimum Message Length codebooks (and epitomes) using Markov Chain Monte Carlo methods. The Message from Monte Carlo methodology is illustrated for binary regression, generalised linear model, and multiple change-point problems.
Applications and Extensions of a Technique for Estimator Densities
Cited by 1 (1 self)
Abstract—Applications are given of a formula for the exact probability density function of the maximum likelihood estimates of a statistical model, where the data-generating model is allowed to differ from the estimation model. The main examples are supported by simulation experiments. Curved exponential families are investigated, for which an approach is described that can be used in many practical situations. The distribution of a maximum likelihood estimator in exponential regression is developed. Nonlinear regression is then considered, with an example of a model discrepancy situation arising in ELISA immunoassays and similar biochemical titrations. An incorrect logistic model is specified for a titration curve that is used for describing the reaction of a chemical sample to applied substrate concentration. A method is suggested to reduce the amount of bias in the estimate of binding affinity. Finally, there is a prospective discussion of other possible uses of the technique, including general comparisons of sets of alternative models in frequentist and Bayesian settings, applications to robust estimation, and extensions beyond maximum likelihood estimates.