Higher criticism for detecting sparse heterogeneous mixtures
Ann. Statist., 2004
"... Higher Criticism, or secondlevel significance testing, is a multiple comparisons concept mentioned in passing by Tukey (1976). It concerns a situation where there are many independent tests of significance and one is interested in rejecting the joint null hypothesis. Tukey suggested to compare the ..."
Cited by 86 (15 self)
Higher Criticism, or second-level significance testing, is a multiple-comparisons concept mentioned in passing by Tukey (1976). It concerns a situation where there are many independent tests of significance and one is interested in rejecting the joint null hypothesis. Tukey suggested comparing the fraction of observed significances at a given α-level to the expected fraction under the joint null; in fact, he suggested standardizing the difference of the two quantities to form a z-score, and the resulting z-score tests the significance of the body of significance tests. We consider a generalization in which we maximize this z-score over a range of significance levels 0 < α ≤ α0. We show that the resulting Higher Criticism statistic is effective at resolving a very subtle testing problem: testing whether n normal means are all zero versus the alternative that a small fraction is nonzero. The subtlety of this ‘sparse normal means’ testing problem can be seen from the work of Ingster (1999) and Jin (2002), who studied such problems in great detail. In their studies, they identified an interesting range of cases where the small fraction of nonzero means is so …
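The statistic described in this abstract can be sketched as follows. This is an illustrative implementation, not code from the paper; the clipping of extreme p-values is an added numerical safeguard.

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """Sketch of the Higher Criticism statistic.

    At each sorted p-value p_(i) <= alpha0, compare the observed
    fraction of significances i/n with the expected fraction p_(i)
    under the joint null, standardize to a z-score, and take the max.
    """
    p = np.sort(np.asarray(pvalues, dtype=float))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against division by zero
    n = p.size
    i = np.arange(1, n + 1)
    z = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return z[p <= alpha0].max()
```

Under the joint null the maximized z-score stays moderate; an excess of small p-values pushes it up.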
Using Randomization to Break the Curse of Dimensionality
Econometrica, 1997
"... Abstract: This paper introduces random versions of successive approximations and multigrid algorithms for computing approximate solutions to a class of finite and infinite horizon Markovian decision problems (MDPs). We prove that these algorithms succeed in breaking the “curse of dimensionality ” fo ..."
Cited by 85 (0 self)
This paper introduces random versions of successive approximations and multigrid algorithms for computing approximate solutions to a class of finite and infinite horizon Markovian decision problems (MDPs). We prove that these algorithms succeed in breaking the “curse of dimensionality” for a subclass of MDPs known as discrete decision processes (DDPs).
Gaussian processes: inequalities, small ball probabilities and applications. Stochastic processes: theory and methods
Handbook of Statist., 2001
"... Edited by C.R. Rao and D. Shanbhag. 1 ..."
Probability laws related to the Jacobi theta and Riemann zeta functions, and the Brownian excursions
 Bulletin (New series) of the American Mathematical Society
"... Abstract. This paper reviews known results which connect Riemann’s integral representations of his zeta function, involving Jacobi’s theta function and its derivatives, to some particular probability laws governing sums of independent exponential variables. These laws are related to onedimensional ..."
Cited by 57 (11 self)
This paper reviews known results which connect Riemann’s integral representations of his zeta function, involving Jacobi’s theta function and its derivatives, to some particular probability laws governing sums of independent exponential variables. These laws are related to one-dimensional Brownian motion and to higher-dimensional Bessel processes. We present some characterizations of these probability laws, and some approximations of Riemann’s zeta function which are related to these laws.
On the Capture Probability for a Large Number of Stations
IEEE Transactions on Communications, 1997
"... The probability of capture under a model based on the ratio of the largest received power to the sum of interference powers is examined in the limit of a large number of transmitting stations. It is shown in great generality that the limit depends only on the capture ratio threshold and the rolloff ..."
Cited by 53 (1 self)
The probability of capture under a model based on the ratio of the largest received power to the sum of interference powers is examined in the limit of a large number of transmitting stations. It is shown in great generality that the limit depends only on the capture ratio threshold and the roll-off exponent of the distribution of power received from a typical station. This exponent is insensitive to many typical channel effects such as Rician or Rayleigh fading and log-normal shadowing. The model is suitable for large systems with noncoherently combined interference.

1 Introduction

Consider the following power capture model. If n transmitters are present, the transmission of a given station j is captured if

P_{R,j} ≥ z ( Σ_{i≠j} P_{R,i} + N ),   (1.1)

where z is the power ratio threshold, P_{R,i} is the received power at the base station due to transmitter i, and N is a nonnegative random variable that represents the effect of additive noise, such as receiver noise or interfer…
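As a quick illustration of this capture model, the following sketch Monte-Carlo-estimates the capture probability under i.i.d. exponentially distributed received powers (the Rayleigh-fading case), with the noise term N set to zero. Both choices are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def capture_probability(n, z, trials=20000, seed=0):
    """Estimate the probability that the strongest of n stations is
    captured, i.e. its received power exceeds z times the summed
    power of the other n - 1 stations (noise N = 0 here)."""
    rng = np.random.default_rng(seed)
    powers = rng.exponential(size=(trials, n))  # Rayleigh-fading powers
    strongest = powers.max(axis=1)
    interference = powers.sum(axis=1) - strongest
    return np.mean(strongest >= z * interference)
```

With n = 2 and z = 1 the strongest station is always captured; raising z or n lowers the estimate.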
Estimation of a Convex Function: Characterizations and Asymptotic Theory
2000
"... Abstract: We study nonparametric estimation of convex regression and density functions by methods of least squares (in the regression and density cases) and maximum likelihood (in the density estimation case). We provide characterizations of these estimators, prove that they are consistent, and esta ..."
Cited by 48 (20 self)
We study nonparametric estimation of convex regression and density functions by methods of least squares (in the regression and density cases) and maximum likelihood (in the density estimation case). We provide characterizations of these estimators, prove that they are consistent, and establish their asymptotic distributions at a fixed point of positive curvature of the functions estimated. The asymptotic distribution theory relies on the existence of an "invelope function" for integrated two-sided Brownian motion plus t^4, which is established in the companion paper Groeneboom, Jongbloed and Wellner (2001a).
A Polytope Related to Empirical Distributions, Plane Trees, Parking Functions, and the Associahedron
"... The volume of the ndimensional polytope for arbitrary x := (x 1 ; : : : ; x n ) with x i > 0 for all i de nes a polynomial in variables x i which admits a number of interpretations, in terms of empirical distributions, plane partitions, and parking functions. We interpret the terms of this po ..."
Cited by 40 (2 self)
The volume of the n-dimensional polytope Π_n(x) for arbitrary x := (x_1, …, x_n) with x_i > 0 for all i defines a polynomial in the variables x_i which admits a number of interpretations, in terms of empirical distributions, plane partitions, and parking functions. We interpret the terms of this polynomial as the volumes of chambers in two different polytopal subdivisions of Π_n(x). The first of these subdivisions generalizes to a class of polytopes called sections of order cones. In the second subdivision, the chambers are indexed in a natural way by rooted binary trees with n + 1 vertices, and the configuration of these chambers provides a representation of another polytope with many applications, the associahedron.
On the variance of the height of random binary search trees
SIAM J., 1995
"... Abstract. Let Hn be the height of a random binary search tree on n nodes. We show that there exist constants α = 4.311 ·· · and β = 1.953 ·· · such that E(Hn) = αln n − βln ln n + O(1), We also show that Var(Hn) = O(1). ..."
Cited by 37 (3 self)
Let Hn be the height of a random binary search tree on n nodes. We show that there exist constants α = 4.311… and β = 1.953… such that E(Hn) = α ln n − β ln ln n + O(1). We also show that Var(Hn) = O(1).
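A small simulation is consistent with the α ln n leading term. The sketch below builds an unbalanced binary search tree from a random permutation and records its height; it is an illustrative experiment, not the paper's proof technique.

```python
import random

def random_bst_height(n, rng):
    """Insert a random permutation of 0..n-1 into an unbalanced binary
    search tree and return the height (edges on the longest path)."""
    keys = list(range(n))
    rng.shuffle(keys)
    root = [keys[0], None, None]  # node layout: [key, left, right]
    height = 0
    for k in keys[1:]:
        node, depth = root, 0
        while True:
            depth += 1
            side = 1 if k < node[0] else 2
            if node[side] is None:
                node[side] = [k, None, None]
                break
            node = node[side]
        height = max(height, depth)
    return height
```

For n = 2000, α ln n ≈ 4.311 × 7.6 ≈ 32.8, the −β ln ln n term shifts the mean down a few units, and Var(Hn) = O(1) means individual heights concentrate near that value.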
Estimating the proportion of false null hypotheses among a large number of independently tested hypotheses
Ann. Statist., 2006
"... We consider the problem of estimating the number of false null hypotheses among a very large number of independently tested hypotheses, focusing on the situation in which the proportion of false null hypotheses is very small. We propose a family of methods for establishing lower 100(1 − α) % confide ..."
Cited by 37 (3 self)
We consider the problem of estimating the number of false null hypotheses among a very large number of independently tested hypotheses, focusing on the situation in which the proportion of false null hypotheses is very small. We propose a family of methods for establishing lower 100(1 − α)% confidence bounds for this proportion, based on the empirical distribution of the p-values of the tests. Methods in this family are then compared in terms of their ability to consistently estimate the proportion by letting α → 0 as the number of hypothesis tests increases and the proportion decreases. This work is motivated by a signal detection problem that occurs in astronomy.
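One simple bound of this general kind, based on the empirical p-value distribution, can be built from the one-sided Dvoretzky–Kiefer–Wolfowitz inequality. This is an illustrative construction under a uniform-null mixture model, not one of the paper's own estimators.

```python
import math
import numpy as np

def lower_bound_false_nulls(pvalues, alpha=0.05):
    """Conservative lower (1 - alpha) confidence bound on the proportion
    pi of false null hypotheses.

    With probability >= 1 - alpha, the empirical CDF F_n of the p-values
    satisfies F_n(t) <= F(t) + eps uniformly in t (one-sided DKW, with
    eps = sqrt(log(1/alpha) / (2n))).  Under the mixture model,
    F(t) <= pi + (1 - pi) * t, hence
    pi >= (F_n(t) - t - eps) / (1 - t) for every t < 1.
    """
    p = np.sort(np.asarray(pvalues, dtype=float))
    n = p.size
    eps = math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    i = np.arange(1, n + 1)          # F_n(p_(i)) = i / n
    mask = p < 1.0
    t = p[mask]
    bounds = (i[mask] / n - t - eps) / (1.0 - t)
    return max(0.0, bounds.max()) if bounds.size else 0.0
```

On null (uniform) p-values the bound is typically zero; planting a fraction of very small p-values lifts it toward, but conservatively below, the true proportion.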
A Three-Step Method for Choosing the Number of Bootstrap Repetitions
Econometrica, 2000
"... This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, pvalues, and bias correction. For each of these problems, the paper provides a threestep method for choosing B to achieve a ..."
Cited by 36 (1 self)
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test’s critical value, test’s p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B → ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well.