Results 1-10 of 15
Global, voxel, and cluster tests, by theory and permutation, for a difference between two groups of structural MR images of the brain
 IEEE Transactions on Medical Imaging
, 1999
Abstract

Cited by 58 (7 self)
Abstract—We describe almost entirely automated procedures for estimation of global, voxel, and cluster-level statistics to test the null hypothesis of zero neuroanatomical difference between two groups of structural magnetic resonance imaging (MRI) data. Theoretical distributions under the null hypothesis are available for 1) global tissue class volumes; 2) standardized linear model [analysis of variance (ANOVA) and covariance (ANCOVA)] coefficients estimated at each voxel; and 3) the area of spatially connected clusters generated by applying an arbitrary threshold to a two-dimensional (2-D) map of normal statistics at voxel level. We describe novel methods for economically ascertaining probability distributions under the null hypothesis, with fewer assumptions, by permutation of the observed data. Nominal Type I error control by permutation testing is generally excellent, whereas theoretical distributions may be overconservative. Permutation has the additional advantage that it can be used to test any statistic of interest, such as the sum of suprathreshold voxel statistics in a cluster (or cluster mass), regardless of its theoretical tractability under the null hypothesis. These issues are illustrated by application to MRI data acquired from 18 adolescents with hyperkinetic disorder and 16 control subjects matched for age and gender. Index Terms—Brain, imaging/mapping, probability distributions, statistics.
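The permutation idea in the abstract above can be sketched in a few lines. This is a minimal illustration using the absolute difference of group means as the test statistic on made-up one-dimensional data, not the paper's voxel or cluster-mass statistics or its MRI data: group labels are repeatedly shuffled to build the null distribution of the statistic.

```python
import random

def perm_test(group_a, group_b, n_perm=10000, seed=0):
    """Permutation test for a difference between two groups.

    The observed statistic is |mean(a) - mean(b)|; group labels are
    shuffled n_perm times to approximate its null distribution.
    """
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    # Count the observed labelling itself, so the p-value is never zero.
    return (count + 1) / (n_perm + 1)
```

Because the null distribution is built from the data themselves, the same loop works unchanged for any statistic, which is the "theoretical tractability" advantage the abstract notes for cluster mass.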
Modelling selection harvesting in tropical rain forests
 JOURNAL OF TROPICAL FOREST SCIENCE
, 1989
Abstract

Cited by 11 (11 self)
Long-term yield estimates for natural forests require a harvesting model to enable future yields to be estimated reliably. The model should predict the felled stems, the proportion of these which are merchantable, and any damage to the residual stand. Regression analysis was used to develop a model of current logging practice in the rain forests of north Queensland. Logistic functions predict the probability of any tree being marked for logging, the probability of a felled tree being merchantable, and the probability of any tree in the residual stand being damaged by logging. Important predictor variables included tree species and size, merchantable basal area, basal area logged, logging history, and topography. There was no evidence to suggest that soil type or site quality influenced current tree-marking practice. The approach is applicable to other mixed forest types managed for selection logging.
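The logistic functions described above have a simple general form. The sketch below shows that form for the tree-marking probability; the feature names and coefficient values are hypothetical placeholders, not the fitted values or full predictor set from the study.

```python
import math

def logistic(x):
    """Standard logistic link: maps any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def p_marked_for_logging(dbh_cm, merch_basal_area,
                         b0=-4.0, b1=0.06, b2=0.05):
    """Probability that a tree is marked for logging, modelled as a
    logistic function of tree size (diameter at breast height, cm)
    and merchantable basal area.

    Coefficients b0, b1, b2 are illustrative only.
    """
    return logistic(b0 + b1 * dbh_cm + b2 * merch_basal_area)
```

The merchantability and damage probabilities in the model take the same form with their own predictors and coefficients.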
Bayesian Posterior Comprehension via Message from Monte Carlo
, 2003
Abstract

Cited by 2 (2 self)
We discuss the problem of producing an epitome, or brief summary, of a Bayesian posterior distribution, and then investigate a general solution based on the Minimum Message Length (MML) principle. Clearly, the optimal criterion for choosing such an epitome is determined by the epitome's intended use. The interesting general case is where this use is unknown since, in order to be practical, the choice of epitome criterion becomes subjective. We identify a number of desirable properties that an epitome could have: facilitation of point estimation, human comprehension, and fast approximation of posterior expectations. We call these the properties of Bayesian Posterior Comprehension and show that the Minimum Message Length principle can be viewed as an epitome criterion that produces epitomes having these properties. We then present and extend Message from Monte Carlo as a means for constructing instantaneous Minimum Message Length codebooks (and epitomes) using Markov chain Monte Carlo methods. The Message from Monte Carlo methodology is illustrated for binary regression, generalised linear model, and multiple change-point problems.
Applications and Extensions of a Technique for Estimator Densities
Abstract

Cited by 1 (1 self)
Abstract—Applications are given of a formula for the exact probability density function of the maximum likelihood estimates of a statistical model, where the data generating model is allowed to differ from the estimation model. The main examples are supported by simulation experiments. Curved exponential families are investigated, for which an approach is described that can be used in many practical situations. The distribution of a maximum likelihood estimator in exponential regression is developed. Nonlinear regression is then considered, with an example of a model discrepancy situation arising in ELISA immunoassays and similar biochemical titrations. An incorrect logistic model is specified for a titration curve that is used for describing the reaction of a chemical sample to applied substrate concentration. A method is suggested to reduce the amount of bias in the estimate of binding affinity. Finally, there is a prospective discussion of other possible uses of the technique, including general comparisons of sets of alternative models in frequentist and Bayesian settings, applications to robust estimation, and extensions beyond maximum likelihood estimates.
Weighted averaging, logistic regression and the Gaussian response model
Abstract
The indicator value and ecological amplitude of a species with respect to a quantitative environmental variable can be estimated from data on species occurrence and environment. A simple weighted averaging (WA) method for estimating these parameters is compared by simulation with the more elaborate method of Gaussian logistic regression (GLR), a form of the generalized linear model which fits a Gaussian-like species response curve to presence-absence data. The indicator value and the ecological amplitude are expressed by two parameters of this curve, termed the optimum and the tolerance, respectively. When a species is rare and has a narrow ecological amplitude, or when the distribution of quadrats along the environmental variable is reasonably even over the species' range and the number of quadrats is small, then WA is shown to approach GLR in efficiency. Otherwise WA may give misleading results. GLR is therefore preferred as a practical method for summarizing species' distributions along environmental gradients. Formulas are given to calculate species optima and tolerances (with their standard errors), and a confidence interval for the optimum, from the GLR output of standard statistical packages.
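The simple WA estimators compared above are easy to state directly: the optimum is the mean environmental value of the quadrats where the species occurs, and the tolerance is their standard deviation. A minimal sketch on toy presence-absence data (not the simulations from the paper):

```python
import math

def wa_optimum_tolerance(env, presence):
    """Weighted-averaging estimates of a species' optimum and tolerance.

    env      -- environmental value of each quadrat
    presence -- 1 if the species occurs in that quadrat, else 0

    Optimum  = mean environment of occupied quadrats.
    Tolerance = standard deviation of those values.
    """
    occupied = [x for x, p in zip(env, presence) if p]
    n = len(occupied)
    optimum = sum(occupied) / n
    tolerance = math.sqrt(sum((x - optimum) ** 2 for x in occupied) / n)
    return optimum, tolerance
```

GLR, by contrast, fits the full Gaussian-like response curve by maximum likelihood, which is why it remains reliable when the quadrats are unevenly spread over the gradient.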
BUGS Examples, Version 0.5, Volume 2
, 1996
Abstract
Introduction and Disclaimer. These worked examples illustrate the use of the BUGS language and sampler in a wide range of problems. They contain a number of useful "tricks", but are certainly not exhaustive of the models that may be analysed. We emphasise that all the results for these examples have been derived in the most naive way: in general a burn-in of 500 iterations and a single long run of 1000 iterations. This is not recommended as a general technique: no tests of convergence have been carried out, and traces of the estimates have not even been plotted. However, comparisons with published results have been made where possible. Times have been measured on a 60 MHz superSPARC: a 60 MHz Pentium PC appears to be about 4 times slower, and a 30 MHz superSPARC about 2 times slower. Users are warned to be extremely careful about assuming convergence, especially when using complex models including errors in variables, crossed random effects and intrinsic p
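To make the disclaimer concrete, here is a toy random-walk Metropolis sampler (plain Python, not BUGS) run with exactly the naive settings described above: a 500-iteration burn-in and a single chain of 1000 kept draws, with no convergence diagnostics. The standard-normal target is an illustrative choice only.

```python
import math
import random

def metropolis_normal(burn_in=500, n_keep=1000, step=1.0, seed=1):
    """Random-walk Metropolis sampler for a standard normal target,
    using the naive settings warned about above: short burn-in, one
    chain, no convergence checks.
    """
    rng = random.Random(seed)
    x = 0.0
    draws = []
    for i in range(burn_in + n_keep):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, pi(proposal)/pi(x)) for a
        # standard normal target pi; work on the log scale.
        log_ratio = -0.5 * (proposal ** 2 - x ** 2)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x = proposal
        if i >= burn_in:
            draws.append(x)
    return draws
```

For this easy unimodal target 1500 total iterations happen to suffice; for the complex models the disclaimer mentions (errors in variables, crossed random effects), a chain this short could look settled while still far from the target, which is the point of the warning.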
Examples Volume 2 (version
Abstract
Introduction and Disclaimer. These worked examples illustrate the use of the BUGS language and sampler in a wide range of problems. They contain a number of useful "tricks", but are certainly not exhaustive of the models that may be analysed. We emphasise that all the results for these examples have been derived in the most naive way: in general a burn-in of 500 iterations and a single long run of 1000 iterations. This is not recommended as a general technique: no tests of convergence have been carried out, and traces of the estimates have not even been plotted. However, comparisons with published results have been made where possible. Times have been measured on a 60 MHz superSPARC: a 60 MHz Pentium PC appears to be about 4 times slower, and a 30 MHz superSPARC about 2 times slower. Users are warned to be extremely careful about assuming convergence, especially when using complex models including errors in variables, crossed random effects and intrinsic p
A Technique for Estimator Densities applied to Exponential Regression, Nonlinear Regression models and Biochemical Titration Curves
, 2008
Abstract
Abstract—Applications are given of a formula for the exact probability density function of the maximum likelihood estimates of a statistical model, where the data generating model is allowed to differ from the estimation model. The main examples are supported by simulation experiments. Curved exponential families are investigated, for which an approach is described that can be used in many practical situations. The distribution of a maximum likelihood estimator in exponential regression is developed. Nonlinear regression is then considered, with an example of a model discrepancy situation arising in ELISA immunoassays and similar biochemical titrations. An incorrect logistic model is specified for a titration curve that is used for describing the reaction of a chemical sample to applied substrate concentration. A method is suggested to reduce the amount of bias in the estimate of binding affinity. There is a discussion of other possible uses for the technique.
Pattern Recognition in Credit Scoring Analysis
Abstract
Recognizing and foreseeing which credit clients will be "good or bad payers" is an important and difficult task for bank institutions and credit protection services. Using data from approximately 10,000 clients obtained from a large private Brazilian bank, we present a methodology to perform the credit scoring analysis. The methodology proposed is divided into two stages: statistical data analysis and the use of a model to perform the Pattern Recognition, discriminating the two groups mentioned earlier. Keywords: Pattern Recognition, Credit Scoring, Multivariate Analysis.
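The second stage above, a model that discriminates the two groups, can be sketched as a simple logistic score thresholded into "good" or "bad". The features and weights below are illustrative inventions, not the fitted model or the bank data from the paper.

```python
import math

def credit_score(income, debt_ratio, late_payments,
                 w=(0.8, -2.0, -1.1), b=0.5):
    """Toy two-class credit-scoring model: a logistic score over a few
    client features, thresholded at 0.5 to separate likely "good" from
    "bad" payers.

    All features and weights are hypothetical placeholders.
    """
    z = b + w[0] * income + w[1] * debt_ratio + w[2] * late_payments
    score = 1.0 / (1.0 + math.exp(-z))
    return score, ("good" if score >= 0.5 else "bad")
```

In practice the first stage (statistical data analysis) would select and scale the features and the weights would be fitted to labelled client histories; the threshold can then be tuned to the bank's asymmetric costs of misclassifying good versus bad payers.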