Results 1–10 of 44
Multicast-Based Inference of Network-Internal Characteristics: Accuracy of Packet Loss Estimation
 IEEE Transactions on Information Theory
, 1998
Abstract

Cited by 252 (31 self)
We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique exploits the inherent correlation between such observations to infer the performance of paths between branch points in the multicast tree spanning the probe source and its receivers. We evaluate through analysis and simulation the accuracy of our estimator under a variety of network conditions. In particular, we report on the error between inferred loss rates and actual loss rates as we vary the network topology, propagation delay, packet drop policy, background traffic mix, and probe traffic type. In all but one case, estimated losses and probe losses agree to within 2 percent on average. We feel this accuracy is enough to reliably identify congested links in a wide-area internetwork. Keywords: Internet performance, end-to-end measurements, Maximum Likelihood Estimator, tomography.
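The correlation argument in this abstract can be illustrated on the smallest case, a two-receiver multicast tree: a shared link from the source to a branch point, then one link to each receiver. A sketch (function name, parameter values, and the simulation setup are illustrative assumptions, not the paper's own code):

```python
import random

def infer_link_pass_rates(n_probes=200_000, a0=0.98, a1=0.95, a2=0.90, seed=1):
    """Simulate multicast probes through a two-receiver tree and recover
    per-link pass rates from receiver observations alone."""
    rng = random.Random(seed)
    r1 = r2 = both = 0
    for _ in range(n_probes):
        s = rng.random() < a0           # probe survives the shared link
        o1 = s and rng.random() < a1    # ...and the link to receiver 1
        o2 = s and rng.random() < a2    # ...and the link to receiver 2
        r1 += o1; r2 += o2; both += (o1 and o2)
    p1, p2, p12 = r1 / n_probes, r2 / n_probes, both / n_probes
    # Correlation between receivers identifies the shared link:
    # P(r1) = a0*a1, P(r2) = a0*a2, P(both) = a0*a1*a2  =>  a0 = P1*P2/P12
    return p1 * p2 / p12, p12 / p2, p12 / p1
```

The key point is that no link is observed directly; the shared link's pass rate falls out of the joint receiver statistics, which is the mechanism the full tree estimator generalizes.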
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models
, 1993
Abstract

Cited by 96 (28 self)
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the ...
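A minimal numerical sketch of the Laplace method underlying these Bayes factor approximations: find the mode of the log-integrand, estimate the curvature there, and approximate the log integral by log f(mode) + (1/2) log(2π/|f''|). This is a one-dimensional toy, not the paper's GLM-specific construction:

```python
import math

def laplace_log_integral(logf, lo, hi, n_grid=20001):
    """Laplace approximation to log ∫ exp(logf(t)) dt on [lo, hi]:
    locate the mode on a grid, estimate the curvature by finite
    differences, and return logf(mode) + 0.5*log(2*pi / |curv|).
    Assumes a single interior mode with negative curvature."""
    h = (hi - lo) / (n_grid - 1)
    grid = [lo + i * h for i in range(n_grid)]
    vals = [logf(t) for t in grid]
    i = max(range(1, n_grid - 1), key=lambda j: vals[j])
    curv = (vals[i - 1] - 2 * vals[i] + vals[i + 1]) / h**2   # < 0 at the mode
    return vals[i] + 0.5 * math.log(2 * math.pi / -curv)
```

For a Gaussian integrand the approximation is exact, which is the sense in which it is "quite accurate" near large-sample posteriors.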
Information Theoretic Approaches to Inference in Moment Condition Models
 Econometrica
, 1998
Abstract

Cited by 61 (2 self)
One-step efficient GMM estimation has been developed in the recent papers of Back and Brown (1990), Imbens (1993) and Qin and Lawless (1994). These papers emphasized methods that correspond to using Owen's (1988) method of empirical likelihood to reweight the data so that the reweighted sample obeys all the moment restrictions at the parameter estimates. In this paper we consider an alternative KLIC-motivated weighting and show how it and similar discrete reweightings define a class of unconstrained optimization problems which includes GMM as a special case. Such KLIC-motivated reweightings introduce M auxiliary `tilting' parameters, where M is the number of moments; parameter and overidentification hypotheses can be recast in terms of these tilting parameters. Such tests, when appropriately conditioned on the estimates of the original parameters, are often startlingly more effective than their conventional counterparts. This is apparently due to the local ancillarity of the original parameters for the tilting parameters.
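One common KLIC-motivated reweighting is exponential tilting: weights w_i ∝ exp(λ·g_i), with the tilting parameter λ chosen so the reweighted sample satisfies the moment restriction Σ w_i g_i = 0. A sketch for a single scalar moment (the solver and variable names are mine):

```python
import math

def tilt_weights(g, tol=1e-12):
    """Exponential-tilting weights w_i ∝ exp(lam * g_i) with the scalar
    tilting parameter lam chosen so sum_i w_i * g_i = 0.  The tilted
    moment is increasing in lam, so a bisection suffices; assumes the
    sample brackets zero, i.e. min(g) < 0 < max(g)."""
    def moment(lam):
        ws = [math.exp(lam * gi) for gi in g]
        return sum(w * gi for w, gi in zip(ws, g)) / sum(ws)
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if moment(mid) > 0:
            hi = mid
        else:
            lo = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * gi) for gi in g]
    s = sum(ws)
    return [w / s for w in ws], lam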
Optimal Inference in Regression Models with Nearly Integrated Regressors
, 2004
Abstract

Cited by 33 (2 self)
This paper considers the problem of conducting inference on the regression coefficient in a bivariate regression model with a highly persistent regressor. Gaussian power envelopes are obtained for a class of testing procedures satisfying a conditionality restriction. In addition, the paper proposes feasible testing procedures that attain these Gaussian power envelopes whether or not the innovations of the regression model are normally distributed.
ANOVA FOR DIFFUSIONS AND ITÔ PROCESSES
 SUBMITTED TO THE ANNALS OF STATISTICS
Abstract

Cited by 25 (11 self)
Itô processes are the most common form of continuous semimartingales, and include diffusion processes. The paper is concerned with the nonparametric regression relationship between two such Itô processes. We are interested in the quadratic variation (integrated volatility) of the residual in this regression, over a unit of time (such as a day). A main conceptual finding is that this quadratic variation can be estimated almost as if the residual process were observed, the difference being that there is also a bias which is of the same asymptotic order as the mixed normal error term. The proposed methodology, “ANOVA for diffusions and Itô processes”, can be used to measure the statistical quality of a parametric model and, nonparametrically, the appropriateness of a one-regressor model in general. On the other hand, it also helps quantify and characterize the trading (hedging) error in the case of financial applications.
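The idea of estimating the residual's quadratic variation "almost as if the residual process were observed" can be sketched in the simplest setting: dY = β dX + σ_z dW with constant coefficients, realized beta from increment cross-products, and the residual quadratic variation summed from the fitted residual increments. This ignores the paper's bias analysis; all names and parameter values are illustrative:

```python
import math
import random

def residual_qv(n=100_000, beta=1.5, sig_z=0.4, seed=7):
    """Simulate dY = beta*dX + sig_z*dW on [0,1] with n steps, estimate
    beta from increments, then sum squared residual increments to get
    the residual quadratic variation (target: sig_z**2 * 1.0)."""
    rng = random.Random(seed)
    dt = 1.0 / n
    dx = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    dy = [beta * x + sig_z * rng.gauss(0.0, math.sqrt(dt)) for x in dx]
    b_hat = sum(x * y for x, y in zip(dx, dy)) / sum(x * x for x in dx)
    rqv = sum((y - b_hat * x) ** 2 for x, y in zip(dx, dy))
    return b_hat, rqv
```

In the paper's general setting β is a process and the interest is precisely in how close this plug-in computation gets to the unobservable residual's quadratic variation.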
Asymptotics and the theory of inference
, 2003
Abstract

Cited by 16 (7 self)
Asymptotic analysis has always been very useful for deriving distributions in statistics in cases where the exact distribution is unavailable. More importantly, asymptotic analysis can also provide insight into the inference process itself, suggesting what information is available and how this information may be extracted. The development of likelihood inference over the past twenty-some years provides an illustration of the interplay between techniques of approximation and statistical theory.
Minimax emission computed tomography using high resolution anatomical side information and B-spline models
 IEEE Trans. Info. Theory
, 1999
Abstract

Cited by 13 (8 self)
In this paper a minimax methodology is presented for combining information from two imaging modalities having different intrinsic spatial resolutions. The focus application is emission computed tomography (ECT), a low-resolution modality for reconstruction of radionuclide tracer density, when supplemented by high-resolution anatomical boundary information extracted from a magnetic resonance image (MRI) of the same imaging volume. The MRI boundary within the two-dimensional (2-D) slice of interest is parameterized by a closed planar curve. The Cramér–Rao (CR) lower bound is used to analyze estimation errors for different boundary shapes. Under a spatially inhomogeneous Gibbs field model for the tracer density a representation for the minimax MRI-enhanced tracer density estimator is obtained. It is shown that the estimator is asymptotically equivalent to a penalized maximum likelihood (PML) estimator with resolution-selective Gibbs penalty. Quantitative comparisons are presented using the iterative space alternating generalized expectation maximization (SAGE-EM) algorithm to implement the PML estimator with and without minimax weight averaging. Index Terms: Asymptotic marginalization, Cramér–Rao (CR) bound, expectation maximization (EM) algorithm, Fisher information, multiresolution imaging, penalized maximum likelihood, planar curves, spatially variant Gibbs field model.
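The penalty idea in miniature: a Gibbs smoothness penalty trades data fidelity against roughness of the reconstruction. The sketch below is one-dimensional and quadratic rather than the paper's spatially variant Gibbs field and Poisson likelihood, so it is only an illustration of the penalized-estimation structure, with an exact tridiagonal (Thomas algorithm) solve:

```python
def gibbs_smooth(y, gamma):
    """Minimize  sum_i (x_i - y_i)^2 + gamma * sum_i (x_{i+1} - x_i)^2,
    i.e. a quadratic data term plus a pairwise Gibbs smoothness penalty,
    by solving the tridiagonal system (I + gamma*L) x = y exactly,
    where L is the path-graph Laplacian."""
    n = len(y)
    a = [-gamma] * n                 # sub-diagonal (a[0] unused)
    c = [-gamma] * n                 # super-diagonal (c[-1] unused)
    b = [1 + 2 * gamma] * n          # main diagonal
    b[0] = b[-1] = 1 + gamma
    cp, dp = [0.0] * n, [0.0] * n    # forward sweep
    cp[0], dp[0] = c[0] / b[0], y[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (y[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                    # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

With gamma = 0 the data are reproduced exactly; as gamma grows the estimate flattens toward the sample mean, which is the generic behavior a resolution-selective penalty modulates locally.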
Likelihood Asymptotics
, 1998
Abstract

Cited by 12 (0 self)
The paper gives an overview of modern likelihood asymptotics with emphasis on results and applicability. Only parametric inference in well-behaved models is considered and the theory discussed leads to highly accurate asymptotic tests for general smooth hypotheses. The tests are refinements of the usual asymptotic likelihood ratio tests, and for one-dimensional hypotheses the test statistic is known as r∗, introduced by Barndorff-Nielsen. Examples illustrate the applicability and accuracy as well as the complexity of the required computations. Modern likelihood asymptotics has developed by merging two lines of research: asymptotic ancillarity is the basis of the statistical development, and saddlepoint approximations or Laplace-type approximations have simultaneously developed as the technical foundation. The main results and techniques of these two lines will be reviewed, and a generalization to multidimensional tests is developed. In the final part of the paper further problems and ...
A Gaussian calculus for inference from high frequency data
, 2006
Abstract

Cited by 12 (3 self)
In the econometric literature of high frequency data, it is often assumed that one can carry out inference conditionally on the underlying volatility processes. In other words, conditionally Gaussian systems are considered. This is often referred to as the assumption of “no leverage effect”. This is often a reasonable thing to do, as general estimators and results can often be conjectured from considering the conditionally Gaussian case. The purpose of this paper is to try to give some more structure to the things one can do with the Gaussian assumption. We shall argue in the following that there is a whole treasure chest of tools that can be brought to bear on high frequency data problems in this case. We shall in particular consider approximations involving locally constant volatility processes, and develop a general theory for this approximation. As applications of the theory, we propose an improved estimator of quarticity, an ANOVA for processes with multiple regressors, and an estimator for error bars on the Hayashi–Yoshida estimator of quadratic covariation. Some key words and phrases: consistency, cumulants, contiguity, continuity, discrete observation, efficiency, Itô process, likelihood inference, realized volatility, stable convergence
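For context on the quarticity application: the standard realized-quarticity estimator is (n/3) Σ (ΔX)⁴, which targets ∫σ⁴ dt. The sketch below shows only this benchmark under constant volatility, which is what a locally-constant-volatility approximation reduces to on each block; it is not the paper's improved estimator, and names and parameter values are illustrative:

```python
import math
import random

def realized_quarticity(n=200_000, sigma=0.7, seed=3):
    """Standard realized-quarticity estimator (n/3) * sum(dX^4) for a
    constant-volatility diffusion on [0,1]; since E[dX^4] = 3*sigma^4*dt^2,
    the estimator targets the integrated quarticity sigma**4."""
    rng = random.Random(seed)
    dt = 1.0 / n
    dx = [rng.gauss(0.0, sigma * math.sqrt(dt)) for _ in range(n)]
    return (n / 3.0) * sum(d ** 4 for d in dx)
```

Quarticity matters in this literature because it appears in the asymptotic variance of realized volatility, so better quarticity estimates give better error bars.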
A Large-Sample Model Selection Criterion Based on Kullback's Symmetric Divergence
 Statistics &amp; Probability Letters
, 1999
Abstract

Cited by 11 (1 self)
The Akaike information criterion, AIC, is a widely known and extensively used tool for statistical model selection. AIC serves as an asymptotically unbiased estimator of a variant of Kullback's directed divergence between the true model and a fitted approximating model. The directed divergence is an asymmetric measure of separation between two statistical models, meaning that an alternate directed divergence may be obtained by reversing the roles of the two models in the definition of the measure. The sum of the two directed divergences is Kullback's symmetric divergence. Since the symmetric divergence combines the information in two related though distinct measures, it functions as a gauge of model disparity which is arguably more sensitive than either of its individual components. With this motivation, we propose a model selection criterion which serves as an asymptotically unbiased estimator of a variant of the symmetric divergence between the true model and a fitted approximating model. We examine the performance of the criterion relative to other well-known criteria in a simulation study. Keywords: AIC, Akaike information criterion, I-divergence, J-divergence, Kullback–Leibler information, relative entropy. Correspondence: Joseph E. Cavanaugh, Department of Statistics, 222 Math Sciences Bldg., University of Missouri, Columbia, MO 65211. This research was supported by NSF grant DMS-9704436.
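The directed/symmetric distinction is concrete for Gaussians, where the Kullback–Leibler divergence has a closed form. A short sketch of both directions and their sum (Kullback's J-divergence), using the standard formula rather than anything specific to the proposed criterion:

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Directed divergence KL( N(m1, s1^2) || N(m2, s2^2) ):
    log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2*s2^2) - 1/2."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def j_divergence(m1, s1, m2, s2):
    """Kullback's symmetric J-divergence: the sum of both directions."""
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)
```

When the two variances differ, the two directed divergences disagree; the symmetric divergence pools both, which is the sensitivity argument motivating the criterion.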