Results 1–5 of 5
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
 Statistical Science, 13, 163–185.
, 1998
Abstract

Cited by 148 (4 self)
Analysing Positive-Valued Spatial Data: The Transformed Gaussian Model
 In geoENV – Geostatistics for Environmental Applications
, 2000
Abstract

Cited by 4 (2 self)
The Gaussian assumption is often inappropriate for analysing geostatistical data. In such cases transformations can be used in an attempt to get nearly Gaussian behaviour. In this paper we study the transformed Gaussian model, which includes an additional parameter corresponding to the Box-Cox family of transformations. In particular we consider maximum likelihood estimation and minimum mean square error prediction for this model. As an example we apply the model to rainfall data. We discuss the limitations of the transformed Gaussian model, and suggest that it should be used primarily as a first line of attack in dealing with non-Gaussianity and nonlinearity, before proceeding to more complex models.

1. INTRODUCTION
In geostatistics the Gaussian model can be considered as the reference model when dealing with continuous variables. Predictions can be easily derived from this model assumption. However, the Gaussian assumption is often inappropriate. Transformations can be used in an ...
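The Box-Cox transformation mentioned in this abstract is straightforward to try in practice. The sketch below uses synthetic "rainfall" data (hypothetical, not the paper's dataset) and `scipy.stats.boxcox`, which returns both the transformed values and the maximum-likelihood estimate of the transformation parameter lambda:

```python
import numpy as np
from scipy import stats

# Synthetic positive-valued "rainfall" sample (illustrative, not the paper's data).
rng = np.random.default_rng(0)
rainfall = rng.lognormal(mean=1.0, sigma=0.6, size=500)

# Box-Cox transform: y = (x**lam - 1)/lam for lam != 0, log(x) for lam == 0.
# With no lambda supplied, scipy.stats.boxcox also returns the MLE of lambda.
transformed, lam_hat = stats.boxcox(rainfall)

print(f"estimated lambda: {lam_hat:.3f}")
print(f"skewness before: {stats.skew(rainfall):.3f}, after: {stats.skew(transformed):.3f}")
```

For lognormal-like data the estimated lambda lands near zero (the log transform), and the skewness of the transformed sample is much closer to the zero skewness of a Gaussian, which is the "first line of attack" idea the abstract describes.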
A Bayesian Approach to Compare Observed Rainfall Data to Deterministic Simulations
 Environmetrics
, 2003
Abstract

Cited by 2 (0 self)
In this paper we propose a method of calibration based on the exploration of the predictive distribution obtained from a statistical model linking observations with model output. Such a model has two byproducts. First, the spatial resolution of RCMs can still be too coarse for applications over small geographical areas. Learning the statistical links between large and local scales allows one to perform a stochastic downscaling of RCM output. Second, RCM outputs contain valuable information on a scale much larger than that provided by ground-based data. Thus, simulations can be used to drive stochastic models which represent the statistical behavior of the ground-based network, as a way of improving statistical predictions in locations without point measurements.
NORMALIZING CONSTANTS
Abstract
Computing (ratios of) normalizing constants of probability models is a fundamental computational problem for many statistical and scientific studies. Monte Carlo simulation is an effective technique, especially with complex and high-dimensional models. This paper aims to bring to the attention of general statistical audiences some effective methods originating from theoretical physics and at the same time to explore these methods from a more statistical perspective, through establishing theoretical connections and illustrating their uses with statistical problems. We show that the acceptance ratio method and thermodynamic integration are natural generalizations of importance sampling, which is most familiar to statistical audiences. The former generalizes importance sampling through the use of a single “bridge” density and is thus a case of bridge sampling in the sense of Meng and Wong. Thermodynamic integration, which is also known in the numerical analysis literature as Ogata’s method for high-dimensional integration, corresponds to the use of infinitely many and continuously connected bridges (and thus a “path”). Our path sampling formulation offers more flexibility and thus potential efficiency to thermodynamic integration, and the search for optimal paths turns out to have close connections with the Jeffreys prior density and the Rao and Hellinger distances between two densities. We provide an informative theoretical example as well as two empirical examples (involving 17- to 70-dimensional integrations) to illustrate the potential and implementation of path sampling. We also discuss some open problems.
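The importance-sampling baseline that this abstract says bridge and path sampling generalize can be shown in a few lines. The sketch below (a toy illustration, not the paper's examples) estimates a ratio of normalizing constants Z1/Z0 for two unnormalized Gaussian kernels, where the true ratio is known in closed form for checking:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalized densities with known normalizing constants (for checking):
# q0(x) = exp(-x^2 / (2*s0^2)),  Z0 = s0*sqrt(2*pi)
# q1(x) = exp(-x^2 / 2),         Z1 = sqrt(2*pi),  so Z1/Z0 = 1/s0.
s0 = 1.5

def q0(x):
    return np.exp(-x**2 / (2 * s0**2))

def q1(x):
    return np.exp(-x**2 / 2)

# Importance sampling identity: Z1/Z0 = E_p0[ q1(X)/q0(X) ] with X ~ p0 = q0/Z0.
x = rng.normal(0.0, s0, size=200_000)
ratio_hat = np.mean(q1(x) / q0(x))

print(f"estimate: {ratio_hat:.4f}, truth: {1 / s0:.4f}")
```

This works well here because the sampling density p0 is wider than the target q1; when the two densities have little overlap the estimator degrades, which is exactly the gap that intermediate "bridge" densities and continuous paths are designed to close.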
The Role of Nondetects in Statistical Analysis
, 1996
Abstract
In environmental studies, scientists often report a nondetect in place of an observed measurement when a sample measurement cannot be quantified. When the data are analyzed, some work must be done to combine the nondetects with the detected values. A variety of procedures exist for estimating parameters using data which contain nondetects. This paper reviews the current methods used to account for nondetects in the analysis of environmental data and suggests areas of study which need further exploration.

1 Introduction
In environmental studies, a measurement which cannot be quantified is often reported as a nondetect. For the most part, nondetects occur when a measurement falls below some predetermined detection limit (DL). There is no standard definition of detection limit, but it usually reflects limitations of the measurement devices used in the study. A common understanding is that the DL is the smallest value which the instrument can distinguish from background noise. The problems ...
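The simplest of the procedures this review surveys are substitution rules, which replace each nondetect with a fixed value such as 0, DL/2, or DL before computing summaries. The sketch below (synthetic data; DL/2 is a common convention, not the paper's recommendation) shows how the choice of substitute biases the estimated mean:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic concentrations (lognormal), left-censored at a detection limit DL.
conc = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
DL = 0.5
detected = conc >= DL  # values below DL would be reported as nondetects

# Substitution rules: replace each censored value with 0, DL/2, or DL,
# then compute summary statistics as if the data were fully observed.
means = {}
for sub, label in [(0.0, "0"), (DL / 2, "DL/2"), (DL, "DL")]:
    filled = np.where(detected, conc, sub)
    means[label] = filled.mean()
    print(f"substitute {label:>4}: mean = {means[label]:.3f}")

print(f"uncensored sample mean: {conc.mean():.3f}")
```

Substituting 0 understates the mean, DL overstates it, and DL/2 lands in between; none is unbiased in general, which is why the review also covers likelihood-based and nonparametric alternatives for censored data.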