Results 1–10 of 215
Model-based Geostatistics
Applied Statistics, 1998
Abstract

Cited by 198 (7 self)
Conventional geostatistical methodology solves the problem of predicting the realised value of a linear functional of a Gaussian spatial stochastic process, S(x), based on observations Y_i = S(x_i) + Z_i at sampling locations x_i, where the Z_i are mutually independent, zero-mean Gaussian random variables. We describe two spatial applications for which Gaussian distributional assumptions are clearly inappropriate. The first concerns the assessment of residual contamination from nuclear weapons testing on a South Pacific island, in which the sampling method generates spatially indexed Poisson counts conditional on an unobserved spatially varying intensity of radioactivity; we conclude that a conventional geostatistical analysis oversmooths the data and underestimates the spatial extremes of the intensity. The second application provides a description of spatial variation in the risk of campylobacter infections relative to other enteric infections in part of North Lancashire and South C...
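The conventional model the abstract starts from can be simulated in a few lines. This is a hypothetical sketch with made-up parameter values (an exponential covariance, illustrative variance, range, and nugget), not the paper's code:

```python
import numpy as np

# Sketch of the conventional geostatistical model: Y_i = S(x_i) + Z_i,
# with S a zero-mean Gaussian process and Z_i independent Gaussian noise.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

n = 50
x = rng.uniform(0.0, 1.0, size=(n, 2))      # sampling locations in the unit square

sigma2, phi, tau2 = 1.0, 0.2, 0.1           # process variance, range, nugget
d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
K = sigma2 * np.exp(-d / phi)               # Cov(S(x_i), S(x_j)) = sigma2 * exp(-d/phi)

# A tiny jitter keeps the factorisation inside multivariate_normal stable.
S = rng.multivariate_normal(np.zeros(n), K + 1e-10 * np.eye(n))
Z = rng.normal(0.0, np.sqrt(tau2), size=n)  # independent zero-mean measurement error
Y = S + Z                                   # observations at the n sites
```

The paper's Poisson application replaces this Gaussian observation step with counts whose intensity depends on the latent field S.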
Approximate Bayesian computation: A nonparametric perspective
Journal of the American Statistical Association, 2010
Abstract

Cited by 40 (2 self)
Approximate Bayesian Computation is a family of likelihood-free inference techniques that are well-suited to models defined in terms of a stochastic generating mechanism. In a nutshell, Approximate Bayesian Computation proceeds by computing summary statistics s_obs from the data and simulating synthetic summary statistics for different values of the parameter Θ. The posterior distribution is then approximated by an estimator of the conditional density g(Θ | s_obs). In this paper, we derive the asymptotic bias and variance of the standard estimators of the posterior distribution which are based on rejection sampling and linear adjustment. Additionally, we introduce an original estimator of the posterior distribution based on quadratic adjustment and we show that its bias contains a smaller number of terms than the estimator with linear adjustment. Although we find that the estimators with adjustment are not universally superior to the estimator based on rejection sampling, we find that they can achieve better performance when there is a nearly homoscedastic relationship between the summary statistics and the parameter of interest. Lastly, we present model selection in Approximate Bayesian Computation and provide asymptotic properties of two estimators of the model probabilities. As for parameter estimation, the asymptotic results highlight the importance of the curse of dimensionality in Approximate Bayesian Computation. Performing numerical simulations in a simple normal model confirms that the estimators may be less efficient as the number of summary statistics increases. Supplemental materials containing the details of the proofs are available online.
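The rejection-sampling estimator the abstract analyses can be sketched on a toy model. The model, prior, tolerance, and sample sizes below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Rejection-sampling ABC for a toy model: data ~ N(theta, 1), with the
# sample mean as summary statistic. All settings are illustrative.
rng = np.random.default_rng(1)

n_obs = 100
data = rng.normal(2.0, 1.0, size=n_obs)     # "observed" data, true theta = 2
s_obs = data.mean()                         # observed summary statistic

n_sim, eps = 100_000, 0.05                  # simulation budget and tolerance
theta = rng.normal(0.0, 5.0, size=n_sim)    # candidate parameters from a wide prior
# The mean of n_obs iid N(theta, 1) draws is N(theta, 1/n_obs), so the
# synthetic summary statistic can be simulated directly:
s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))

accepted = theta[np.abs(s_sim - s_obs) < eps]   # rejection step
posterior_mean = accepted.mean()                # approximates E[theta | s_obs]
```

The accepted draws approximate g(Θ | s_obs); the linear and quadratic adjustments studied in the paper post-process these draws rather than changing the rejection step.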
Using multivariate adaptive regression splines to predict the distributions of New Zealand's freshwater
Abstract

Cited by 27 (6 self)
This paper deals with these observations as records of occurrence, although strictly speaking they are records of capture. We recognise the potential for confounding between detectability, capture method, and environmental relationships, but are generally satisfied that the main trends we model reflect environmental effects on occurrence. A further step would be to include detectability in the models (e.g. MacKenzie et al., 2002), but this is a complex undertaking. Data were extracted for 15 diadromous species (Table 1) that occurred in the dataset with a capture frequency of 0.5% or above. The anguillids and Rhombosolea retiaria are catadromous, whereas Geotria australis and some retropinnid stocks are anadromous. The remaining species are amphidromous, meaning that adults remain resident in freshwater, but larval fish are carried out to sea where they spend a short period before migrating back to freshwater to grow to adulthood. The scope of freshwater habitat accessible ...
Fig. 1 Sample sites from the New Zealand Freshwater Fish Database used in the analysis (open circles). Only rivers with an annual mean flow > 10 m³ s⁻¹ are shown. Those shown in light grey were excluded from the analysis because of known significant downstream obstructions to fish migration to/from the sea.
A survey of Monte Carlo algorithms for maximizing the likelihood of a two-stage hierarchical model
2001
Abstract

Cited by 17 (9 self)
Likelihood inference with hierarchical models is often complicated by the fact that the likelihood function involves intractable integrals. Numerical integration (e.g. quadrature) is an option if the dimension of the integral is low but quickly becomes unreliable as the dimension grows. An alternative approach is to approximate the intractable integrals using Monte Carlo averages. Several different algorithms based on this idea have been proposed. In this paper we discuss the relative merits of simulated maximum likelihood, Monte Carlo EM, Monte Carlo Newton-Raphson and stochastic approximation. Key words and phrases: Efficiency, Monte Carlo EM, Monte Carlo Newton-Raphson, Rate of convergence, Simulated maximum likelihood, Stochastic approximation. All three authors partially supported by NSF Grant DMS-0072827.
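The simulated-maximum-likelihood idea (one of the surveyed algorithms) can be sketched on a toy two-stage model. The model, sample sizes, and the grid search standing in for a proper optimiser are all illustrative assumptions:

```python
import numpy as np

# Simulated maximum likelihood for a toy two-stage model:
# u_i ~ N(0, 1) and y_i | u_i ~ Bernoulli(expit(beta + u_i)).
# The intractable integral over u_i is replaced by a Monte Carlo average
# over common random draws reused for every candidate beta.
rng = np.random.default_rng(2)

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

beta_true, n = 0.5, 500
u = rng.normal(size=n)
y = rng.binomial(1, expit(beta_true + u))

u_mc = rng.normal(size=2000)                # common random numbers

def sim_loglik(beta):
    # p_i(beta) = E_u[ expit(beta+u)^y_i * (1 - expit(beta+u))^(1-y_i) ]
    p = expit(beta + u_mc)[None, :]                      # shape (1, M)
    lik = np.where(y[:, None] == 1, p, 1.0 - p).mean(axis=1)
    return np.log(lik).sum()

# A crude grid search stands in for Newton-Raphson / stochastic approximation.
grid = np.linspace(-1.0, 2.0, 61)
beta_hat = grid[np.argmax([sim_loglik(b) for b in grid])]
```

Reusing the same `u_mc` draws across candidate values of beta keeps the simulated log-likelihood a smooth, maximisable function of the parameter, which is the point of the simulated-maximum-likelihood approach.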
Adjust quality scores from alignment and improve sequencing accuracy
Nucleic Acids Res, 2004
"... accuracy ..."
Multilevel ordinal models for examination grades
Submitted for publication, 2002
Abstract

Cited by 12 (2 self)
and linear models for continuous normal responses fitted. This is particularly prevalent in educational research. Generalised multilevel ordinal models for response categories are developed and contrasted in some respects with these normal models. Attention is given to the analysis of a large database of the General Certificate of Education Advanced Level examinations in England and Wales. Ordinal models appear to have advantages in facilitating the study of institutional differences in more detail. Of particular importance is the flexibility offered by logit models with non-proportionally changing odds. Examples are given of the richer contrasts of institutional and subgroup differences that may be evaluated. Appropriate widely available software for this approach is also discussed.
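The cumulative (proportional-odds) logit model these generalised ordinal models build on can be sketched directly. The cut points and slope below are made-up illustrative values:

```python
import numpy as np

# Cumulative (proportional-odds) logit model:
# P(Y <= k | x) = expit(alpha_k - beta * x) for ordered grades k = 1..K-1.
# Cut points and slope are illustrative assumptions.
def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = np.array([-1.0, 0.5, 2.0])   # K - 1 = 3 increasing cut points (K = 4 grades)
beta = 0.8                           # one common slope: the "proportional odds"

def grade_probs(x):
    cum = expit(alpha - beta * x)               # P(Y <= k | x), increasing in k
    cum = np.concatenate([cum, [1.0]])          # P(Y <= K) = 1
    return np.diff(cum, prepend=0.0)            # per-category probabilities

p = grade_probs(0.3)                 # probabilities of the four grade categories
```

The non-proportional-odds flexibility the abstract highlights amounts to replacing the single slope `beta` with a category-specific `beta_k` in the cumulative logits.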
Naked mole-rats recruit colony mates to food sources
Animal Behaviour, 1996
Abstract

Cited by 11 (0 self)
Abstract. Naked mole-rats, Heterocephalus glaber, are eusocial, subterranean rodents that inhabit arid regions of northeastern Africa. They feed on bulbs and tubers that are patchily distributed. Nests are often located far from the nearest food source, reached through a labyrinth of tunnels. Two captive colonies were studied to determine whether successful foragers recruit colony mates and, if so, how. Individuals that found a new food source typically gave a special vocalization on their way back to the nest and waved the food around once they got there. Colony mates preferentially visited the site where the initial forager had obtained food, often bypassing alternative sites containing the same type of food. Recruits preferred to use tunnels that had been traversed by the ‘scout’ rather than an alternative pathway to the same food, regardless of whether they had to turn in the same or the opposite direction from that of the scout to enter the previously used pathway. Recruits preferred tunnels that the scout had recently used over tunnels that were recently traversed by another colony mate carrying the same type of food. When tunnels traversed by the scout were replaced with clean substitutes or with tunnels that were recently traversed by another colony mate carrying the same type of food, recruits showed no pathway preferences. Results indicate that naked mole-rats follow each other’s (odour) trails to food. There are intriguing parallels between the foraging recruitment system of H. glaber and those of other social
Do not log-transform count data
Methods in Ecology and Evolution, 2010
Abstract

Cited by 9 (0 self)
1. Ecological count data (e.g. number of individuals or species) are often log-transformed to satisfy parametric test assumptions.
2. Apart from the fact that generalized linear models are better suited to dealing with count data, a log-transformation of counts poses the additional quandary of how to deal with zero observations. With just one zero observation (if this observation represents a sampling unit), the whole data set needs to be fudged by adding a value (usually 1) before transformation.
3. Simulating data from a negative binomial distribution, we compared the outcome of fitting models that were transformed in various ways (log, square root) with results from fitting models using quasi-Poisson and negative binomial models to untransformed count data.
4. We found that the transformations performed poorly, except when the dispersion was small and the mean counts were large. The quasi-Poisson and negative binomial models consistently performed well, with little bias.
5. We recommend that count data should not be analysed by log-transforming them, but that models based on Poisson and negative binomial distributions should be used instead.
Keywords: generalized linear models, linear models, overdispersion, Poisson, transformation
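The core problem the paper simulates can be illustrated numerically. This sketch uses made-up parameter values and only compares mean estimates, not full model fits:

```python
import numpy as np

# For overdispersed negative binomial counts, back-transforming the mean of
# log(y + 1) does not recover the true mean, whereas the raw sample mean
# (what an intercept-only Poisson or negative binomial GLM estimates) does.
# Parameter values are illustrative.
rng = np.random.default_rng(3)

mu, k = 5.0, 0.5                 # NB mean and dispersion; small k = strong overdispersion
# numpy's parameterisation: negative_binomial(n=k, p=k/(k+mu)) has mean mu.
y = rng.negative_binomial(k, k / (k + mu), size=100_000)

raw_mean = y.mean()                                       # close to mu
backtransformed = np.exp(np.log(y + 1.0).mean()) - 1.0    # badly biased for mu
```

By Jensen's inequality the back-transformed estimate is always below the true mean, and the gap grows with the dispersion, which matches the paper's finding that the transformations fail except when dispersion is small and mean counts are large.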