Results 1–10 of 49
Monte Carlo Statistical Methods
, 1998
Abstract (Cited by 900, 23 self)
This paper is also the originator of the Markov Chain Monte Carlo methods developed in the following chapters. The potential of these two simultaneous innovations was discovered much later by statisticians (Hastings 1970; Geman and Geman 1984) than by physicists (see also Kirkpatrick et al. 1983).
The effects of random and discrete sampling when estimating continuous-time diffusions
 Econometrica
, 2003
Abstract (Cited by 49, 10 self)
High-frequency financial data are not only discretely sampled in time but the time separating successive observations is often random. We analyze the consequences of this dual feature of the data when estimating a continuous-time model. In particular, we measure the additional effects of the randomness of the sampling intervals over and beyond those due to the discreteness of the data. We also examine the effect of simply ignoring the sampling randomness. We find that in many situations the randomness of the sampling has a larger impact than the discreteness of the data.
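The dual effect described in this abstract can be illustrated with a small simulation. The sketch below assumes an Ornstein-Uhlenbeck diffusion and a naive quadratic-variation estimator that treats every observation gap as fixed; both are illustrative choices, not the paper's model or estimator.

```python
import math
import random

random.seed(0)

# Simulate an Ornstein-Uhlenbeck process dX = -kappa*X dt + sigma dW,
# observed either on a fixed grid of spacing DELTA or at random times
# with exponential inter-observation gaps of mean DELTA.
kappa, sigma, DELTA, n_obs = 1.0, 0.5, 0.1, 5000

def sample_path(random_gaps):
    x, times, obs = 0.0, [0.0], [0.0]
    for _ in range(n_obs):
        dt = random.expovariate(1.0 / DELTA) if random_gaps else DELTA
        # Exact OU transition: X_{t+dt} | X_t is Gaussian.
        mean = x * math.exp(-kappa * dt)
        var = sigma**2 * (1 - math.exp(-2 * kappa * dt)) / (2 * kappa)
        x = random.gauss(mean, math.sqrt(var))
        times.append(times[-1] + dt)
        obs.append(x)
    return times, obs

def naive_sigma2(times, obs):
    # Naive estimator of sigma^2: sum of squared increments divided by
    # the *assumed* elapsed time n*DELTA, ignoring gap randomness.
    qv = sum((obs[i + 1] - obs[i]) ** 2 for i in range(len(obs) - 1))
    return qv / (len(obs) - 1) / DELTA

t_fix, x_fix = sample_path(random_gaps=False)
t_rnd, x_rnd = sample_path(random_gaps=True)
print("sigma^2 true:", sigma**2)
print("naive estimate, fixed gaps :", round(naive_sigma2(t_fix, x_fix), 4))
print("naive estimate, random gaps:", round(naive_sigma2(t_rnd, x_rnd), 4))
```

Both estimates are biased downward by discretization, and the exponential gaps add a further bias term of the same order, consistent with the abstract's finding that sampling randomness can matter more than discreteness.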
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
 Statistical Science
Abstract (Cited by 32, 4 self)
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories, namely: asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature and Markov chain methods. Each method is discussed, giving an outline of the basic supporting theory and particular features of the technique. Conclusions are drawn concerning the relative merits of the methods based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student distribution should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, are the algorithms of choice for...
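The importance-sampling recommendation above, with a heavy-tailed Student proposal for a Gaussian-like integrand, can be sketched in a few lines. The integrand and the degrees of freedom are toy choices for illustration, not taken from the survey.

```python
import math
import random

random.seed(1)

# Self-normalised importance sampling: approximate I = E_pi[x^2] where
# pi is a standard normal density, drawing from a Student-t(df=4)
# proposal whose heavier tails dominate the target, as the survey
# recommends over asymptotic approximations.

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def student_t_pdf(x, df=4):
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def student_t_sample(df=4):
    # t variate as normal / sqrt(chi-square / df)
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

n = 20000
num = den = 0.0
for _ in range(n):
    x = student_t_sample()
    w = normal_pdf(x) / student_t_pdf(x)  # importance weight
    num += w * x * x
    den += w                              # self-normalising denominator
print("IS estimate of E[x^2]:", round(num / den, 3))  # exact value is 1.0
```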
Maximum likelihood estimation of latent affine processes
 Review of Financial Studies, forthcoming
, 2006
Abstract (Cited by 24, 1 self)
This article develops a direct filtration-based maximum likelihood methodology for estimating the parameters and realizations of latent affine processes. Filtration is conducted in the transform space of characteristic functions, using a version of Bayes' rule for recursively updating the joint characteristic function of latent variables and the data conditional upon past data. An application to daily stock market returns over 1953–96 reveals substantial divergences from EMM-based estimates; in particular, more substantial and time-varying jump risk. The implications for pricing stock index options are examined. "The Lion in Affrik and the Bear in Sarmatia are Fierce, but Translated into a Contrary Heaven, are of less Strength and Courage." Jacob Ziegler; translated by Richard Eden (1555) While models proposing time-varying volatility of asset returns have been around for thirty years, it has proven extraordinarily difficult to estimate the parameters of the underlying volatility process,
Statistical methods for polyploid radiation hybrid mapping
 Genome Research
, 1995
Confidence intervals for a binomial proportion and asymptotic expansions
 Ann. Statist
, 2002
Abstract (Cited by 20, 1 self)
We address the classic problem of interval estimation of a binomial proportion. The Wald interval p̂ ± z_{α/2} n^{-1/2} (p̂(1 − p̂))^{1/2} is currently in near universal use. We first show that the coverage properties of the Wald interval are persistently poor and defy virtually all conventional wisdom. We then proceed to a theoretical comparison of the standard interval and four additional alternative intervals by asymptotic expansions of their coverage probabilities and expected lengths. The four additional interval methods we study in detail are the score-test interval (Wilson), the likelihood-ratio-test interval, a Jeffreys prior Bayesian interval and an interval suggested by Agresti and Coull. The asymptotic expansions for coverage show that the first three of these alternative methods have coverages that fluctuate about the nominal value, while the Agresti–Coull interval has a somewhat larger and more nearly conservative coverage function. For the five interval methods we also investigate asymptotically their average coverage relative to distributions for p supported within (0, 1). In terms of expected length, asymptotic expansions show that the Agresti–Coull interval is always the longest of these. The remaining three are rather comparable and are shorter than the Wald interval except for p near 0 or 1. These analytical calculations support and complement the findings and the recommendations in Brown, Cai and DasGupta (2001, Statist. Sci. 16).
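Three of the intervals compared in this abstract have closed forms and can be sketched directly (here at the nominal 95% level); the Jeffreys and likelihood-ratio intervals require Beta quantiles or root-finding and are omitted from this minimal sketch.

```python
import math

# Wald, Wilson (score) and Agresti-Coull intervals for a binomial
# proportion, at nominal 95% coverage.
Z = 1.959963984540054  # 97.5% standard normal quantile

def wald(x, n):
    p = x / n
    h = Z * math.sqrt(p * (1 - p) / n)
    return (p - h, p + h)

def wilson(x, n):
    p = x / n
    z2 = Z * Z
    centre = (p + z2 / (2 * n)) / (1 + z2 / n)
    h = (Z / (1 + z2 / n)) * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n))
    return (centre - h, centre + h)

def agresti_coull(x, n):
    # add z^2/2 successes and z^2/2 failures, then use the Wald form
    nt = n + Z * Z
    pt = (x + Z * Z / 2) / nt
    h = Z * math.sqrt(pt * (1 - pt) / nt)
    return (pt - h, pt + h)

x, n = 2, 30  # small counts: the regime where Wald misbehaves
for name, ci in [("Wald", wald(x, n)), ("Wilson", wilson(x, n)),
                 ("Agresti-Coull", agresti_coull(x, n))]:
    print(f"{name:14s} [{ci[0]:.4f}, {ci[1]:.4f}]")
```

At these counts the Wald interval's lower endpoint falls below zero, while the Agresti–Coull interval is at least as long as Wilson's, matching the expected-length ordering the abstract reports.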
Asymptotic Performance Analysis of Bayesian Object Recognition
 IEEE Transactions on Information Theory
, 1998
Abstract (Cited by 19, 12 self)
This paper analyzes the performance of Bayesian object recognition algorithms in the context of deformable templates. Rigid CAD surface models represent the underlying targets; low-dimensional matrix Lie groups (rotation and translation) extend them to the particular instance of pose and position. For a target α, I_α represents its template and sI_α is the target template at the pose/location denoted by the parameter s. The remote sensors observing the objects are modeled by the projective transformation T; that is, T sI_α is the signature of target α at pose s when viewed by the sensor T. The observations I^D are modeled as a random field with mean T sI_α. In a Bayesian approach, object recognition and pose estimation are basically optimizations for a given cost function related to the posterior. Recognition performance is analyzed through probability of error: given a target α_0 at pose s_0, what is the probability of it being recognized as α_1? Asymptotic ex...
Statistical analysis of stochastic resonance with ergodic diffusion noise
Abstract (Cited by 14, 4 self)
A subthreshold signal is transmitted through a channel and may be detected when some noise – with known structure and proportional to some level – is added to the data. There is an optimal noise level, called stochastic resonance, that corresponds to the highest Fisher information in the problem of estimation of the signal. As noise we consider an ergodic diffusion process, and asymptotics are considered as time goes to infinity. We propose consistent estimators of the subthreshold signal and we further solve a problem of hypothesis testing. We also discuss evidence of stochastic resonance for both estimation and hypothesis testing problems via examples.
Impact of Jumps on Returns and Realised Variances: Econometric analysis of time-deformed Lévy processes
 Journal of Econometrics
, 2004
Abstract (Cited by 13, 11 self)
In order to assess the effect of jumps on realised variance calculations, we study some of the econometric properties of time-changed Lévy processes. We show that in general realised variance is an inconsistent estimator of the time-change; however, we can derive the second order properties of realised variances and use these to estimate the parameters of such models. Our analytic results give a first indication of the degree of inconsistency of realised variance as an estimator of the time-change in the non-Brownian case. Further, our results suggest volatility is even more predictable than has been shown by the recent econometric work on realised variance.
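The inconsistency discussed in this abstract can be illustrated with a toy simulation. The sketch below uses Brownian motion plus compound-Poisson jumps as the Lévy driver and the trivial time-change T(t) = t; these are illustrative choices, not the paper's model.

```python
import math
import random

random.seed(2)

# Realised variance = sum of squared high-frequency increments. With
# jumps present it converges to the time-change *plus* the sum of
# squared jumps, so it does not recover the time-change alone.
sigma, lam, jump_sd = 1.0, 5.0, 0.3  # diffusion vol, jump rate, jump size sd
n, T = 10_000, 1.0
dt = T / n

increments = []
for _ in range(n):
    dX = sigma * math.sqrt(dt) * random.gauss(0, 1)
    if random.random() < lam * dt:  # Poisson jump in this small interval
        dX += random.gauss(0, jump_sd)
    increments.append(dX)

rv = sum(d * d for d in increments)  # realised variance
print("time-change sigma^2 * T(1)  :", sigma**2 * T)
print("expected jump contribution  :", lam * T * jump_sd**2)
print("realised variance           :", round(rv, 3))
```

The realised variance centres on the sum of the two printed components rather than on the time-change alone, which is the source of the inconsistency in the non-Brownian case.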
Bayesian Maximum a Posteriori Multiple Testing Procedure
 Sankhya
, 2006
Abstract (Cited by 10, 4 self)
We consider a Bayesian approach to multiple hypothesis testing. A hierarchical prior model is based on imposing a prior distribution π(k) on the number of hypotheses arising from alternatives (false nulls). We then apply the maximum a posteriori (MAP) rule to find the most likely configuration of null and alternative hypotheses. The resulting MAP procedure and its closely related step-up and step-down versions compare ordered Bayes factors of individual hypotheses with a sequence of critical values depending on the prior. We discuss the relations between the proposed MAP procedure and the existing frequentist and Bayesian counterparts. A more detailed analysis is given for normal data, where we show, in particular, that by choosing a specific π(k), the MAP procedure can mimic several known familywise error (FWE) and false discovery rate (FDR) controlling procedures. The performance of MAP procedures is illustrated on a simulated example. AMS (2000) subject classification. Primary 62F15, 62F03.
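The comparison of ordered Bayes factors against a sequence of critical values can be sketched generically. The Bayes factors and critical values below are illustrative placeholders; the paper's prior-dependent critical values are not reproduced here.

```python
# Generic step-down comparison in the spirit of the abstract: sort the
# Bayes factors in decreasing order and flag the k largest hypotheses
# as alternatives, where k is the largest index such that every ordered
# Bayes factor up to position k exceeds its critical value c_k.
def step_down_map(bayes_factors, critical):
    order = sorted(range(len(bayes_factors)),
                   key=lambda i: bayes_factors[i], reverse=True)
    k_hat = 0
    for k, i in enumerate(order, start=1):
        if bayes_factors[i] >= critical[k - 1]:
            k_hat = k
        else:
            break
    return sorted(order[:k_hat])  # indices flagged as alternatives

bf = [0.4, 12.0, 1.1, 25.0, 3.0]   # toy Bayes factors, one per hypothesis
crit = [10.0, 5.0, 3.0, 2.0, 1.5]  # decreasing illustrative thresholds
print(step_down_map(bf, crit))      # -> [1, 3, 4]
```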