Results 1–10 of 142
Solving ill-conditioned and singular linear systems: A tutorial on regularization
SIAM Rev., 1998
Cited by 109 (3 self)
Abstract: It is shown that the basic regularization procedures for finding meaningful approximate solutions of ill-conditioned or singular linear systems can be phrased and analyzed in terms of classical linear algebra that can be taught in any numerical analysis course. Apart from rewriting many known results in a more elegant form, we also derive a new two-parameter family of merit functions for the determination of the regularization parameter. The traditional merit functions from generalized cross-validation (GCV) and generalized maximum likelihood (GML) are recovered as special cases.
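The basic procedure the tutorial builds on can be sketched in a few lines. This is a generic Tikhonov (ridge) solve via the normal equations, not code from the paper; the matrix, right-hand side, and fixed λ below are invented for illustration (the paper's merit functions would instead choose λ from the data, e.g. by GCV):

```python
import math

def solve_linear(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting (small systems)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

def tikhonov(A, b, lam):
    """Minimize ||Ax - b||^2 + lam*||x||^2 via (A^T A + lam I) x = A^T b."""
    n = len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(len(A))) for i in range(n)]
    return solve_linear(AtA, Atb)

# Nearly singular system: a tiny perturbation of b swings the plain solution
# from [1, 1] to roughly [0, 2], while a small ridge keeps it near [1, 1].
A = [[1.0, 1.0], [1.0, 1.0001]]
b_noisy = [2.0, 2.0002]               # perturbed right-hand side
x_plain = tikhonov(A, b_noisy, 0.0)   # unregularized, unstable
x_reg = tikhonov(A, b_noisy, 1e-4)    # regularized, stable
norm = lambda x: math.sqrt(sum(t * t for t in x))
print(x_plain, x_reg)
```

The regularized solution trades a small bias in the residual for a much smaller norm, which is exactly the trade-off the regularization parameter controls.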
Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics
Econometrica, 2004
Cited by 65 (0 self)
Abstract: This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
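The core quantity, realized covariation, is simply the sum of outer products of high-frequency return vectors over the fixed interval, from which realized correlations and regression slopes follow. A sketch with simulated data (the two-asset return model and all parameters are invented for illustration; the paper's contribution is the asymptotic distribution theory, not this computation):

```python
import math
import random

def realized_covariation(returns):
    """Realized covariance over one interval: sum of outer products r r^T
    of the high-frequency return vectors (two assets here)."""
    cov = [[0.0, 0.0], [0.0, 0.0]]
    for r in returns:
        for i in range(2):
            for j in range(2):
                cov[i][j] += r[i] * r[j]
    return cov

# Simulated 5-minute returns for two correlated assets over one "day".
random.seed(1)
rets = []
for _ in range(288):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    r1 = 0.001 * z1
    r2 = 0.001 * (0.8 * z1 + 0.6 * z2)   # population correlation 0.8 by construction
    rets.append((r1, r2))

rc = realized_covariation(rets)
realized_corr = rc[0][1] / math.sqrt(rc[0][0] * rc[1][1])
realized_beta = rc[0][1] / rc[0][0]      # realized regression slope of asset 2 on asset 1
print(realized_corr, realized_beta)
```

Letting the number of intraday returns grow while the interval stays fixed is exactly the asymptotic scheme under which the paper derives confidence intervals for these quantities.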
A Simple General Formula for Tail Probabilities for Frequentist and Bayesian Inference
Biometrika, 1998
Cited by 55 (28 self)
Abstract: This paper works with approximations of the form

    p(ψ) = Φ₁(r; Q) = Φ(r) + φ(r)(1/r − 1/Q)    (1.5)
    p(ψ) = Φ₂(r; Q) = Φ{r − (1/r) log(r/Q)}     (1.6)

where φ is the standard normal density function. The primary objective is a simple and widely applicable formula for Q that ensures the O(n^(-3/2)) accuracy of the p-value p(ψ). The frequentist version is recorded in (3.6) and is a generalisation of (1.1) and (2.6). The Bayesian version is also recorded in (3.6) and is a generalisation of (1.2) and (2.9).
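The two approximations, read here as the Lugannani–Rice form Φ(r) + φ(r)(1/r − 1/Q) and the Barndorff-Nielsen form Φ{r − (1/r) log(r/Q)}, are straightforward to evaluate numerically; the values of r and Q below are arbitrary illustrations. Both collapse to the first-order normal approximation Φ(r) when Q = r:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tail_lr(r, q):
    """Lugannani-Rice-type approximation (1.5): Phi(r) + phi(r)(1/r - 1/Q)."""
    return Phi(r) + phi(r) * (1.0 / r - 1.0 / q)

def tail_bn(r, q):
    """Barndorff-Nielsen-type approximation (1.6): Phi(r - log(r/Q)/r)."""
    return Phi(r - math.log(r / q) / r)

r, q = 1.7, 1.5
print(tail_lr(r, r), tail_bn(r, r), Phi(r))   # all three agree when Q = r
print(tail_lr(r, q), tail_bn(r, q))           # corrected versions track each other
```

In practice the two corrected formulas give nearly identical p-values when Q is close to r, which is why either form can serve as the vehicle for the paper's choice of Q.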
Approximately unbiased tests of regions using multistep-multiscale bootstrap resampling
Annals of Statistics, 2004
Cited by 44 (13 self)
Abstract: Approximately unbiased tests based on bootstrap probabilities are considered for the exponential family of distributions with unknown expectation parameter vector, where the null hypothesis is represented as an arbitrary-shaped region with smooth boundaries. This problem has been discussed previously in Efron and Tibshirani [Ann. Statist. 26 (1998) 1687–1718], and a corrected p-value with second-order asymptotic accuracy is calculated by the two-level bootstrap of Efron, Halloran and Holmes [Proc. Natl. Acad. Sci. U.S.A. 93 (1996) 13429–13434] based on the ABC bias correction of Efron [J. Amer. Statist. Assoc. 82 (1987) 171–185]. Our argument is an extension of their asymptotic theory, where the geometry, such as the signed distance and the curvature of the boundary, plays an important role. We give another calculation of the corrected p-value without finding the “nearest point” on the boundary to the observation, which is required in the two-level bootstrap and is an implementational burden in complicated problems. The key idea is to alter the sample size of the replicated dataset from that of the observed dataset.
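The key idea, altering the replicate sample size and extrapolating, can be sketched in one dimension. This is a simplified multiscale-bootstrap illustration, not the paper's multistep algorithm: bootstrap probabilities are computed at several scales σ², the fitted line z(σ²) = v + cσ² (with z = σ·Φ⁻¹(1 − BP)) is extrapolated to σ² = −1, and the null region {mean ≤ 0} plus all constants are invented for the example:

```python
import math
import random
from statistics import NormalDist

nd = NormalDist()

def bootstrap_probability(data, n_prime, n_boot, rng):
    """Fraction of bootstrap replicates of *altered* size n_prime whose
    mean lands in the null region {mean <= 0}."""
    hits = 0
    for _ in range(n_boot):
        m = sum(rng.choice(data) for _ in range(n_prime)) / n_prime
        hits += (m <= 0.0)
    return hits / n_boot

def multiscale_au_pvalue(data, scales, n_boot=2000, seed=0):
    """Approximately unbiased p-value: fit z(s2) = v + c*s2 where
    z = sqrt(s2) * Phi^{-1}(1 - BP(s2)), then extrapolate to s2 = -1,
    giving p = 1 - Phi(v - c)."""
    rng = random.Random(seed)
    n = len(data)
    xs, zs = [], []
    for s2 in scales:
        n_prime = max(2, round(n / s2))          # scale s2 <-> replicate size n/s2
        bp = bootstrap_probability(data, n_prime, n_boot, rng)
        bp = min(max(bp, 1.0 / n_boot), 1.0 - 1.0 / n_boot)  # keep inv_cdf finite
        xs.append(s2)
        zs.append(math.sqrt(s2) * nd.inv_cdf(1.0 - bp))
    # least-squares line z = v + c * s2
    xbar, zbar = sum(xs) / len(xs), sum(zs) / len(zs)
    c = (sum((x - xbar) * (z - zbar) for x, z in zip(xs, zs))
         / sum((x - xbar) ** 2 for x in xs))
    v = zbar - c * xbar
    return 1.0 - nd.cdf(v - c)

rng = random.Random(42)
data = [rng.gauss(0.1, 1.0) for _ in range(100)]   # true mean slightly above 0
p_au = multiscale_au_pvalue(data, scales=[0.5, 1.0, 1.5, 2.0])
print(p_au)
```

With a flat boundary, as here, the fitted curvature term c is near zero, so the corrected p-value stays close to the ordinary one; the correction matters when the boundary is curved.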
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
Statistical Science
Cited by 41 (5 self)
Abstract: This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories; namely, asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature and Markov chain methods. Each method is discussed, giving an outline of the basic supporting theory and particular features of the technique. Conclusions are drawn concerning the relative merits of the methods based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student distribution should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, are the algorithms of choice for...
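The recommendation of a Student-t proposal for importance sampling can be illustrated in one dimension. A minimal self-normalized importance-sampling sketch; the toy "posterior" (a normal density known only up to a constant), the degrees of freedom, and the sample size are all invented for the example:

```python
import math
import random

def student_t(df, rng):
    """Draw from Student t with df degrees of freedom: Z / sqrt(V/df), V ~ chi^2_df."""
    z = rng.gauss(0.0, 1.0)
    v = rng.gammavariate(df / 2.0, 2.0)   # chi-square(df) as Gamma(df/2, scale 2)
    return z / math.sqrt(v / df)

def importance_sample(log_target, df, n, rng):
    """Self-normalized importance-sampling estimate of the posterior mean,
    using a heavy-tailed Student t proposal centered at 0. Normalizing
    constants of both target and proposal cancel in the weight ratio."""
    num = den = 0.0
    for _ in range(n):
        x = student_t(df, rng)
        log_q = -((df + 1.0) / 2.0) * math.log(1.0 + x * x / df)  # t density, up to a constant
        w = math.exp(log_target(x) - log_q)
        num += w * x
        den += w
    return num / den

# Toy unnormalized posterior: N(1, 0.5^2) known only up to a constant.
log_post = lambda t: -0.5 * ((t - 1.0) / 0.5) ** 2
rng = random.Random(7)
est = importance_sample(log_post, df=4, n=20000, rng=rng)
print(est)
```

The heavy tails of the t proposal keep the importance weights bounded where a thin-tailed (e.g. normal) proposal could produce occasional enormous weights; that is the substance of the survey's recommendation.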
Time series analysis via mechanistic models. In review; prepublished at arxiv.org/abs/0802.0021
2008
Cited by 31 (9 self)
Abstract: The purpose of time series analysis via mechanistic models is to reconcile the known or hypothesized structure of a dynamical system with observations collected over time. We develop a framework for constructing nonlinear mechanistic models and carrying out inference. Our framework permits the consideration of implicit dynamic models, meaning statistical models for stochastic dynamical systems which are specified by a simulation algorithm to generate sample paths. Inference procedures that operate on implicit models are said to have the plug-and-play property. Our work builds on recently developed plug-and-play inference methodology for partially observed Markov models. We introduce a class of implicitly specified Markov chains with stochastic transition rates, and we demonstrate its applicability to open problems in statistical inference for biological systems. As one example, these models are shown to give a fresh perspective on measles transmission dynamics. As a second example, we present a mechanistic analysis of cholera incidence data, involving interaction between two competing strains of the pathogen Vibrio cholerae.
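An "implicitly specified" model in this sense is one defined only through its simulation rule. A minimal sketch, not the paper's measles or cholera models: a chain-binomial stochastic SIR epidemic whose entire specification is the step-by-step simulation algorithm, which is exactly the interface plug-and-play inference methods consume (all parameter values below are invented):

```python
import random

def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1, seed=0):
    """Stochastic SIR model specified purely as a simulation algorithm:
    at each Euler step, each susceptible is infected with probability
    ~ beta*(I/N)*dt and each infective recovers with probability ~ gamma*dt."""
    rng = random.Random(seed)
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0

    def binom(count, p):
        # number of successes in `count` Bernoulli(p) trials
        return sum(rng.random() < p for _ in range(count))

    path = [(0.0, s, i, r)]
    for k in range(1, int(days / dt) + 1):
        new_inf = binom(s, min(1.0, beta * i / n * dt))
        new_rec = binom(i, min(1.0, gamma * dt))
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        path.append((k * dt, s, i, r))
    return path

# One sample path: basic reproduction number beta/gamma = 3, so an outbreak
# starting from 10 infectives almost surely takes off.
path = simulate_sir(beta=1.5, gamma=0.5, s0=990, i0=10, r0=0, days=60)
final_t, final_s, final_i, final_r = path[-1]
print(final_s, final_i, final_r)
```

A plug-and-play inference procedure (e.g. a particle filter) would only ever call `simulate_sir`; it never needs the model's transition densities in closed form.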
On quantum statistical inference
J. Roy. Statist. Soc. B, 2001
Cited by 30 (5 self)
[Read before The Royal Statistical Society at a meeting organized by the Research Section ...]
Covariate balance in simple, stratified and clustered comparative studies
Statist. Sci., 2008
Cited by 28 (10 self)
Abstract: In randomized experiments, treatment and control groups should be roughly the same—balanced—in their distributions of pretreatment variables. But how nearly so? Can descriptive comparisons meaningfully be paired with significance tests? If so, should there be several such tests, one for each pretreatment variable, or should there be a single, omnibus test? Could such a test be engineered to give easily computed p-values that are reliable in samples of moderate size, or would simulation be needed for reliable calibration? What new concerns are introduced by random assignment of clusters? Which tests of balance would be optimal? To address these questions, Fisher’s randomization inference is applied to the question of balance. Its application suggests the reversal of published conclusions about two studies, one clinical and the other a field experiment in political participation. Key words and phrases: Cluster, contiguity, community intervention, group randomization, randomization inference, subclassification.
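The idea of applying Fisher's randomization inference to balance itself can be sketched directly. This is a generic omnibus permutation test, not the paper's optimal statistic: the imbalance measure (sum of squared standardized mean differences) and the covariates below are invented for illustration:

```python
import random

def balance_statistic(covariates, treat):
    """Omnibus imbalance measure: sum over covariates of squared
    standardized differences in treatment/control means."""
    stat = 0.0
    n = len(treat)
    for col in covariates:
        t_vals = [x for x, z in zip(col, treat) if z == 1]
        c_vals = [x for x, z in zip(col, treat) if z == 0]
        mt = sum(t_vals) / len(t_vals)
        mc = sum(c_vals) / len(c_vals)
        mean_all = sum(col) / n
        var_all = sum((x - mean_all) ** 2 for x in col) / (n - 1)
        stat += (mt - mc) ** 2 / var_all
    return stat

def randomization_pvalue(covariates, treat, n_perm=2000, seed=0):
    """Fisher-style randomization inference: re-randomize the treatment
    labels and compare observed imbalance to its permutation distribution."""
    rng = random.Random(seed)
    observed = balance_statistic(covariates, treat)
    labels = treat[:]
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        exceed += (balance_statistic(covariates, labels) >= observed)
    return (exceed + 1) / (n_perm + 1)

# Two pretreatment covariates, 40 units, treatment assigned completely at random:
rng = random.Random(3)
age = [rng.gauss(50, 10) for _ in range(40)]
income = [rng.gauss(30, 5) for _ in range(40)]
treat = [1] * 20 + [0] * 20
rng.shuffle(treat)
p = randomization_pvalue([age, income], treat)
print(p)
```

Because the reference distribution is generated by the assignment mechanism itself, this single omnibus p-value avoids the multiplicity problem of testing each covariate separately.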
Attributing Effects to A Cluster Randomized Get-Out-The-Vote Campaign
Working Paper, 2008
Cited by 27 (7 self)
Abstract: In a landmark study of political participation, A. Gerber and D. Green (2000) experimentally compared the effectiveness of various get-out-the-vote interventions. The study was well-powered, conducted not in a lab but under field conditions, in the midst of a Congressional campaign; it used random assignment, in a field where randomization had been rare. As Fisher (1935) showed long ago, inferences from randomized designs can be essentially assumption-free, making them uniquely suited to settle scientific debates. This study, however, prompted a contentious new debate after Imai (2005) tested and rejected the randomization model for Gerber and Green’s data. His alternate methodology reaches substantive conclusions contradicting those of Gerber and Green. It has since become clear that the experiment’s apparent lapses can be ascribed to clustered treatment assignment, rather than failures of randomization; it had randomized households, not individuals. What remains to be clarified is how this structure could have been accommodated by an analysis as sparing with...
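The structural point, that households rather than individuals were randomized, changes how a randomization test must permute labels. A generic sketch, not the paper's analysis: treatment labels are re-randomized across whole clusters, never across individuals within one, and the turnout-like outcome model below is invented for illustration:

```python
import random
from collections import defaultdict

def cluster_permutation_pvalue(outcome, treat, cluster, n_perm=2000, seed=0):
    """Randomization test for a treatment effect that respects clustered
    assignment: labels are shuffled across clusters (e.g. households),
    so the within-cluster dependence is preserved under the null."""
    rng = random.Random(seed)
    members = defaultdict(list)
    for idx, c in enumerate(cluster):
        members[c].append(idx)
    clusters = list(members)
    # each cluster's label is its first member's (assignment is cluster-level)
    labels = {c: treat[members[c][0]] for c in clusters}

    def mean_diff(lab):
        t = [outcome[i] for c in clusters for i in members[c] if lab[c] == 1]
        u = [outcome[i] for c in clusters for i in members[c] if lab[c] == 0]
        return sum(t) / len(t) - sum(u) / len(u)

    observed = abs(mean_diff(labels))
    vals = list(labels.values())
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(vals)
        exceed += (abs(mean_diff(dict(zip(clusters, vals)))) >= observed)
    return (exceed + 1) / (n_perm + 1)

# 30 two-person households, half assigned (as whole households) to treatment:
rng = random.Random(5)
cluster = [h for h in range(30) for _ in range(2)]
house_treat = [1] * 15 + [0] * 15
rng.shuffle(house_treat)
treat = [house_treat[h] for h in cluster]
# binary turnout-style outcome with a treatment effect built in
outcome = [1 if rng.random() < (0.55 if t else 0.40) else 0 for t in treat]
p = cluster_permutation_pvalue(outcome, treat, cluster)
print(p)
```

Permuting at the individual level instead would treat the 60 people as 60 independent units and understate the true variability, which is the lapse the abstract attributes to the original debate.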