Results 1–10 of 251
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
, 1980
Abstract

Cited by 1243 (3 self)
This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator to those of the usual covariance estimator, one obtains a direct test for heteroskedasticity, since in the absence of heteroskedasticity, the two estimators will be approximately equal, but will generally diverge otherwise. The test has an appealing least squares interpretation.
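The comparison the abstract describes can be sketched numerically. The following minimal Python illustration (my own, not from the paper; the toy data-generating process and variable names are assumptions) computes both the usual OLS covariance s²(X'X)⁻¹ and a White-style sandwich estimator (X'X)⁻¹ X' diag(ê²) X (X'X)⁻¹ on heteroskedastic data:

```python
import numpy as np

def ols_covariances(X, y):
    """OLS fit plus two covariance estimates for the coefficients:
    the usual s^2 (X'X)^{-1}, and a White-style (HC0) sandwich
    (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)
    V_usual = s2 * XtX_inv                  # valid only under homoskedasticity
    meat = X.T @ (resid[:, None] ** 2 * X)  # X' diag(e^2) X
    V_hc = XtX_inv @ meat @ XtX_inv         # consistent without modeling the variances
    return beta, V_usual, V_hc

# Assumed toy data: error sd grows with x, so the two covariance
# estimates should diverge -- the contrast the direct test formalizes.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 5.0, size=500)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + rng.normal(0.0, x)      # noise scale = x, elementwise
beta, V_usual, V_hc = ols_covariances(X, y)
```

Under homoskedasticity the two matrices would roughly agree; here, with the error standard deviation growing in x, their elements differ.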
Proof of a Fundamental Result in Self-Similar Traffic Modeling
 COMPUTER COMMUNICATION REVIEW
, 1997
Abstract

Cited by 206 (8 self)
We state and prove the following key mathematical result in self-similar traffic modeling: the superposition of many ON/OFF sources (also known as packet trains) with strictly alternating ON- and OFF-periods and whose ON-periods or OFF-periods exhibit the Noah Effect (i.e., have high variability or infinite variance) can produce aggregate network traffic that exhibits the Joseph Effect (i.e., is self-similar or long-range dependent). There is, moreover, a simple relation between the parameters describing the intensities of the Noah Effect (high variability) and the Joseph Effect (self-similarity). This provides a simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic that is consistent with traffic measurements at the source level. We illustrate how this mathematical result can be combined with modern high-performance computing capabilities to yield a simple and efficient linear-time algorithm for generating self-similar traf...
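The construction the abstract describes can be imitated in a few lines. The sketch below (stdlib Python, my own illustration; the tail index, horizon, and source count are assumed parameter choices) superposes ON/OFF sources whose period lengths are Pareto with 1 < α < 2, i.e., heavy-tailed with infinite variance:

```python
import random

def pareto_length(alpha, rng, x_min=1.0):
    # Inverse-CDF sample with P(X > x) = (x_min/x)^alpha for x >= x_min;
    # for 1 < alpha < 2 the variance is infinite (the Noah Effect).
    return x_min * (1.0 - rng.random()) ** (-1.0 / alpha)

def onoff_source(alpha, horizon, rng):
    """One packet-train source: strictly alternating ON/OFF periods,
    both Pareto. Returns a 0/1 load for each unit time slot."""
    load = [0] * horizon
    t, on = 0.0, rng.random() < 0.5
    while t < horizon:
        length = pareto_length(alpha, rng)
        if on:
            for i in range(int(t), min(int(t + length), horizon)):
                load[i] = 1
        t += length
        on = not on
    return load

def aggregate_traffic(n_sources=50, alpha=1.4, horizon=1000, seed=1):
    rng = random.Random(seed)
    total = [0] * horizon
    for _ in range(n_sources):
        src = onoff_source(alpha, horizon, rng)
        total = [a + b for a, b in zip(total, src)]
    return total

trace = aggregate_traffic()   # aggregate load per time slot
```

The simple relation the abstract mentions is quantitative: for Pareto periods with index α, the aggregate traffic is asymptotically self-similar with Hurst parameter H = (3 − α)/2.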
Logarithmic Asymptotics For Steady-State Tail Probabilities In A Single-Server Queue
, 1993
Abstract

Cited by 150 (14 self)
We consider the standard single-server queue with unlimited waiting space and the first-in first-out service discipline, but without any explicit independence conditions on the interarrival and service times. We find conditions for the steady-state waiting-time distribution to have small-tail asymptotics of the form x^{-1} log P(W > x) → −θ* as x → ∞ for θ* > 0. We require only stationarity of the basic sequence of service times minus interarrival times and a Gärtner-Ellis condition for the cumulant generating function of the associated partial sums S_n, i.e., n^{-1} log E e^{θ S_n} → ψ(θ) as n → ∞, plus regularity conditions on the decay rate function ψ. The asymptotic decay rate θ* is the root of the equation ψ(θ) = 0. This result in turn implies a corresponding asymptotic result for the steady-state workload in a queue with general nondecreasing input. This asymptotic result covers the case of multiple independent sources, so that it provides additional theoretical support for a concept of effective bandwidths for admission control in multiclass queues based on asymptotic decay rates.
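For the special case of i.i.d. Exp(λ) interarrival times and Exp(μ) service times (an M/M/1 illustration of my own, not the paper's general stationary setting), the limiting cumulant generating function is ψ(θ) = log(μ/(μ−θ)) + log(λ/(λ+θ)), and solving ψ(θ) = 0 recovers the classical decay rate θ* = μ − λ:

```python
import math

def psi(theta, lam, mu):
    """Limiting cumulant generating function of S - A for i.i.d.
    A ~ Exp(lam) interarrivals and S ~ Exp(mu) services:
    psi(theta) = log E[e^{theta (S - A)}], finite for 0 <= theta < mu."""
    return math.log(mu / (mu - theta)) + math.log(lam / (lam + theta))

def decay_rate(lam, mu):
    """Positive root theta* of psi(theta) = 0 by bisection.
    psi is convex with psi(0) = 0 and psi'(0) = E[S - A] < 0 when
    lam < mu, so psi is negative just right of 0 and -> +inf near mu."""
    lo, hi = 1e-9, mu - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi(mid, lam, mu) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The tail approximation is then P(W > x) ≈ const · e^{−θ* x}; for λ = 1, μ = 2 the bisection returns θ* = 1.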
Probability of error in MMSE multiuser detection
 IEEE Trans. Inform. Theory
, 1997
Abstract

Cited by 144 (14 self)
Abstract—Performance analysis of the minimum-mean-square-error (MMSE) linear multiuser detector is considered in an environment of nonorthogonal signaling and additive white Gaussian noise. In particular, the behavior of the multiple-access interference (MAI) at the output of the MMSE detector is examined under various asymptotic conditions, including: large signal-to-noise ratio; large near–far ratios; and large numbers of users. These results suggest that the MAI-plus-noise contending with the demodulation of a desired user is approximately Gaussian in many cases of interest. For the particular case of two users, it is shown that the maximum divergence between the output MAI-plus-noise and a Gaussian distribution having the same mean and variance is quite small in most cases of interest. It is further proved in this two-user case that the probability of error of the MMSE detector is better than that of the decorrelating linear detector for all values of normalized cross-correlations not greater than I
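For the two-user case, the comparison with the decorrelator can be made exact in a few lines. The sketch below is my own illustration under a standard synchronous matched-filter model y = R A b + n with n ~ N(0, σ²R); the parameter values are assumptions, not the paper's:

```python
import math
import numpy as np

def q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def two_user_bers(rho, a1, a2, sigma):
    """Exact error probabilities for user 1 under the decorrelating and
    the linear MMSE detectors; matched-filter model y = R A b + n with
    R = [[1, rho], [rho, 1]], amplitudes A = diag(a1, a2), n ~ N(0, sigma^2 R)."""
    R = np.array([[1.0, rho], [rho, 1.0]])
    A = np.diag([a1, a2])
    # Decorrelator R^{-1} y: noise variance sigma^2 (R^{-1})_{11} = sigma^2/(1 - rho^2)
    ber_dec = q(a1 * math.sqrt(1.0 - rho**2) / sigma)
    # MMSE filter c = (A^2 R + sigma^2 I)^{-1} A e1; output = g . b + Gaussian noise
    c = np.linalg.solve(A @ A @ R + sigma**2 * np.eye(2), A @ np.array([1.0, 0.0]))
    g = A @ R @ c                        # coefficients multiplying b1, b2
    s = sigma * math.sqrt(c @ R @ c)     # output noise standard deviation
    # Condition on the interfering bit b2 = +/-1 and average (residual MAI)
    ber_mmse = 0.5 * (q((g[0] + g[1]) / s) + q((g[0] - g[1]) / s))
    return ber_dec, ber_mmse

ber_dec, ber_mmse = two_user_bers(0.5, 1.0, 1.0, 0.5)
```

For moderate cross-correlation (here ρ = 0.5, equal amplitudes, σ = 0.5) the MMSE error probability comes out below the decorrelator's, consistent with the abstract's two-user claim.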
Informationtheoretic asymptotics of Bayes methods
 IEEE Transactions on Information Theory
, 1990
Abstract

Cited by 107 (10 self)
Abstract—In the absence of knowledge of the true density function, Bayesian models take the joint density function for a sequence of n random variables to be an average of densities with respect to a prior. We examine the relative entropy distance D_n between the true density and the Bayesian density and show that the asymptotic distance is (d/2) log n + c, where d is the dimension of the parameter vector. Therefore, the relative entropy rate D_n/n converges to zero at rate (log n)/n. The constant c, which we explicitly identify, depends only on the prior density function and the Fisher information matrix evaluated at the true parameter value. Consequences are given for density estimation, universal data compression, composite hypothesis testing, and stock-market portfolio selection.
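The (d/2) log n asymptotics can be checked exactly in the simplest case, d = 1: n i.i.d. Bernoulli(p) observations against the Beta(a, b) Bayes mixture, whose probability for a sequence depends only on its count of ones. This is my own stdlib-Python illustration, not an example from the paper:

```python
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bayes_redundancy(n, p=0.3, a=1.0, b=1.0):
    """Exact relative entropy D_n = D(P_p^n || M_n) between n i.i.d.
    Bernoulli(p) draws and the Beta(a, b) Bayes mixture M_n.
    A sequence with k ones has mixture probability
    m(x^n) = B(a + k, b + n - k) / B(a, b), so we can sum over k."""
    log_b0 = log_beta(a, b)
    total = 0.0
    for k in range(n + 1):
        log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        log_seq = k * math.log(p) + (n - k) * math.log(1.0 - p)  # per-sequence log-prob
        log_mix = log_beta(a + k, b + n - k) - log_b0
        total += math.exp(log_binom + log_seq) * (log_seq - log_mix)
    return total
```

Quadrupling n from 200 to 800 should increase D_n by about (1/2) log 4 ≈ 0.69, the constant c cancelling in the difference.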
An Efficient Semiparametric Estimator for Binary Response Models
 Econometrica
, 1993
Abstract

Cited by 87 (2 self)
Gaussian processes for machine learning
 International Journal of Neural Systems
, 2004
Abstract

Cited by 66 (15 self)
Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably infinite or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible nonparametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep ...
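A minimal sketch of the kind of GP regression the abstract discusses (my own illustration; the RBF kernel, toy data, and hyperparameters are assumptions):

```python
import numpy as np

def rbf(x1, x2, ell=1.0, sf=1.0):
    # Squared-exponential kernel k(x, x') = sf^2 exp(-(x - x')^2 / (2 ell^2))
    d = x1[:, None] - x2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """GP regression posterior mean and pointwise variance (RBF kernel,
    Gaussian observation noise). The Cholesky solve is the O(n^3) cost
    that the sparse approximations cited in the abstract aim to reduce."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf(x_train, x_test)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_test, x_test).diagonal() - np.sum(v**2, axis=0)
    return mean, var

# Assumed toy data: noise-free sine observations; far from the data the
# predictive variance reverts to the prior variance sf^2 = 1.
x_train = np.linspace(0.0, 5.0, 20)
y_train = np.sin(x_train)
x_test = np.array([1.0, 2.5, 10.0])
mean, var = gp_posterior(x_train, y_train, x_test)
```

Near the training inputs the posterior mean tracks the data and the variance is small; at x = 10, far from any observation, the variance returns to the prior level, which is the uncertainty-quantification behaviour the abstract highlights.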
THE USE OF SUBSERIES VALUES FOR ESTIMATING THE VARIANCE OF A GENERAL STATISTIC FROM A STATIONARY SEQUENCE
Abstract

Cited by 60 (2 self)
Let {Z_i : −∞ < i < ∞} be a strictly stationary α-mixing sequence, and let t_n = t_n(Z_1, ..., Z_n) be a statistic computed from the first n observations. Without specifying the dependence model giving rise to the sequence {Z_i}, and without specifying the marginal distribution of Z_i, we address the question of variance estimation for t_n. For estimating the variance of t_n ...
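The estimator the abstract sets up can be sketched as follows: cut the series into non-overlapping subseries of length l, evaluate the statistic on each, and scale the sample variance of the subseries values by l. The AR(1) check below is my own illustration (all parameter choices are assumptions), targeting the long-run variance σ²/(1−φ)² = 4 of the sample mean:

```python
import random
import statistics

def subseries_variance(z, stat, l):
    """Subseries estimate of Var(sqrt(n) * t_n): evaluate the statistic
    on non-overlapping length-l blocks and scale the sample variance
    of those subseries values by l."""
    blocks = [z[i:i + l] for i in range(0, len(z) - l + 1, l)]
    values = [stat(b) for b in blocks]
    return l * statistics.variance(values)

# Assumed toy model: AR(1) with phi = 0.5 and N(0, 1) innovations; for
# the sample mean the target is the long-run variance 1/(1 - phi)^2 = 4.
rng = random.Random(7)
phi, x, z = 0.5, 0.0, []
for _ in range(20000):
    x = phi * x + rng.gauss(0.0, 1.0)
    z.append(x)
est = subseries_variance(z, statistics.fmean, 200)
```

The variance of t_n itself is then estimated by est / n; note that nothing in the procedure required specifying the AR(1) dependence model.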
Probability laws related to the Jacobi theta and Riemann zeta functions, and the Brownian excursions
 Bulletin (New series) of the American Mathematical Society
Abstract

Cited by 57 (11 self)
Abstract. This paper reviews known results which connect Riemann’s integral representations of his zeta function, involving Jacobi’s theta function and its derivatives, to some particular probability laws governing sums of independent exponential variables. These laws are related to one-dimensional Brownian motion and to higher-dimensional Bessel processes. We present some characterizations of these probability laws, and some approximations of Riemann’s zeta function which are related to these laws.
Likelihood Ratio Gradient Estimation For Stochastic Recursions
 Communications of the ACM
, 1995
Abstract

Cited by 54 (7 self)
In this paper, we develop mathematical machinery for verifying that a broad class of general state space Markov chains reacts smoothly to certain types of perturbations in the underlying transition structure. Our main result provides conditions under which the stationary probability measure of an ergodic Harris recurrent Markov chain is differentiable in a certain strong sense. The approach is based on likelihood ratio "change-of-measure" arguments, and leads directly to a "likelihood ratio gradient estimator" that can be computed numerically.
Keywords: Harris recurrent Markov chain, likelihood ratio, gradient estimation, regeneration.
1 The research of this author was supported by the U.S. Army Research Office under Contract No. DAAL03-91-G-0101 and by the National Science Foundation under Contract No. DDM-9101580. 2 This author's research was supported by NSERC-Canada grant No. OGP0110050 and FCAR-Québec grant No. 93ER1654.
1. Introduction. In this paper, we will study the cl...
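The basic likelihood-ratio idea behind such gradient estimators can be shown in its simplest i.i.d. form (a one-step score-function sketch of my own, not the paper's Harris-chain machinery): differentiate E_λ[f(X)] by averaging f(X) · ∂_λ log p(X; λ):

```python
import random

def lr_gradient_estimate(lam, n=200000, seed=3):
    """Likelihood-ratio (score-function) estimate of d/d(lam) E[X] for
    X ~ Exp(lam): average X * d/d(lam) log p(X; lam) = X * (1/lam - X).
    The exact value is d(1/lam)/d(lam) = -1/lam^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)
        total += x * (1.0 / lam - x)   # f(x) times the score
    return total / n
```

For f(X) = X and λ = 2 the exact derivative of E[X] = 1/λ is −1/λ² = −0.25, so the Monte Carlo estimate should land near −0.25; the paper's contribution is extending this change-of-measure argument to stationary measures of Harris recurrent Markov chains.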