Results 1–10 of 36
Nonparametric regression with errors in variables
Annals of Statistics, 1993
"... The effect of errors in variables in nonparametric regression estimation is examined. To account for errors in covariates, deconvolution is involved in the construction ofa new class of kernel estimators. It is shown that optima/local and global rates of convergence of these kernel estimators can be ..."
Abstract

Cited by 49 (1 self)
 Add to MetaCart
The effect of errors in variables in nonparametric regression estimation is examined. To account for errors in covariates, deconvolution is involved in the construction of a new class of kernel estimators. It is shown that optimal local and global rates of convergence of these kernel estimators can be characterized by the tail behavior of the characteristic function of the error distribution. In fact, there are two types of rates of convergence according to whether the error is ordinary smooth or super smooth. It is also shown that these results hold uniformly over a class of joint distributions of the response and the covariates, which includes ordinary smooth regression functions as well as covariates with distributions satisfying regularity conditions. Furthermore, to achieve optimality, we show that the convergence rates of all nonparametric estimators have a lower bound possessed by the kernel estimators. Abbreviated title: Errors-in-variables regression. AMS 1980 subject classifications: Primary 62G20; secondary 62G05, 62J99. Key words and phrases: Nonparametric regression; kernel estimator; errors in variables; optimal rates.
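For orientation, a standard form of such deconvoluting kernel estimators is the following (a sketch in assumed notation, with observed covariates $W_j = X_j + \varepsilon_j$ and $\phi_\varepsilon$ the known error characteristic function; this is not necessarily the paper's exact definition):

```latex
K_n(z) = \frac{1}{2\pi}\int e^{-itz}\,
         \frac{\phi_K(t)}{\phi_\varepsilon(t/h)}\,dt,
\qquad
\hat m(x) = \frac{\sum_{j=1}^{n} Y_j\, K_n\!\big((x - W_j)/h\big)}
                 {\sum_{j=1}^{n} K_n\!\big((x - W_j)/h\big)},
```

where $K$ is an ordinary kernel with Fourier transform $\phi_K$ and $h$ is a bandwidth. Dividing by $\phi_\varepsilon$ in the Fourier domain undoes the smoothing introduced by the measurement error, which is why the tail of $\phi_\varepsilon$ governs the attainable rates.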
Dirichlet Prior Sieves in Finite Normal Mixtures
Statistica Sinica, 2002
"... Abstract: The use of a finite dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related to the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive ..."
Abstract

Cited by 40 (1 self)
 Add to MetaCart
The use of a finite-dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related to the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive use of the popular uniform Dirichlet prior leads to an inconsistent posterior. However, a simple adjustment to the parameters in the prior induces a random probability measure that approximates the Dirichlet process and yields a posterior that is strongly consistent for the density and weakly consistent for the unknown mixing distribution. The dimension of the resulting sieve can be selected easily in practice, and a simple and efficient Gibbs sampler can be used to sample the posterior of the mixing distribution. Key words and phrases: Bose–Einstein distribution, Dirichlet process, identification, method of sieves, random probability measure, relative entropy, weak convergence.
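As a concrete illustration of the sampler described above, here is a minimal Gibbs sketch for a finite normal mixture under a symmetric Dirichlet(α/N, …, α/N) prior on the weights, the adjustment that approximates the Dirichlet process. The normal prior on the component means, the fixed component variance, and all parameter values are assumptions made for the example, not the paper's specification.

```python
import numpy as np

def gibbs_mixture(x, N=20, alpha=1.0, tau2=4.0, sigma2=1.0, iters=500, seed=0):
    """Gibbs sampler for an N-component normal mixture with a
    Dirichlet(alpha/N, ..., alpha/N) prior on the mixture weights."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z = rng.integers(0, N, size=n)           # component labels
    mu = rng.normal(0.0, np.sqrt(tau2), N)   # component means, N(0, tau2) prior
    for _ in range(iters):
        # 1. weights | labels ~ Dirichlet(alpha/N + counts)
        counts = np.bincount(z, minlength=N)
        w = rng.dirichlet(alpha / N + counts)
        # 2. labels | weights, means (categorical draw, done in log space)
        logp = np.log(w + 1e-300)[None, :] \
               - 0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(N, p=pi) for pi in p])
        # 3. means | labels (conjugate normal update)
        counts = np.bincount(z, minlength=N)
        sums = np.bincount(z, weights=x, minlength=N)
        post_var = 1.0 / (counts / sigma2 + 1.0 / tau2)
        mu = rng.normal(post_var * sums / sigma2, np.sqrt(post_var))
    return w, mu

# Example: two well-separated normal clusters.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
w, mu = gibbs_mixture(x)
```

The naive choice Dirichlet(1, …, 1) is exactly the uniform prior the abstract warns leads to an inconsistent posterior; scaling the parameters as α/N is the simple adjustment.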
Wavelet Deconvolution
IEEE Transactions on Information Theory, 2002
"... This paper studies the issue of optimal deconvolution density estimation using wavelets. The approach taken here can be considered as orthogonal series estimation in the more general context of the density estimation. We explore the asymptotic properties of estimators based on thresholding of estima ..."
Abstract

Cited by 37 (1 self)
 Add to MetaCart
This paper studies optimal deconvolution density estimation using wavelets. The approach taken here can be considered orthogonal series estimation in the more general context of density estimation. We explore the asymptotic properties of estimators based on thresholding of estimated wavelet coefficients. Minimax rates of convergence under integrated squared loss are studied over Besov classes $B^{\sigma}_{p,q}$ of functions for both ordinary smooth and supersmooth convolution kernels. The minimax rates of convergence depend on the smoothness of the functions to be deconvolved and on the decay rate of the characteristic function of the convolution kernel. It is shown that no linear deconvolution estimator can achieve the optimal rate of convergence in Besov spaces with p < 2, whether the convolution kernel is ordinary smooth or supersmooth. If the convolution kernel is ordinary smooth, linear estimators can be improved by thresholding wavelet deconvolution estimators, which are asymptotically minimax within logarithmic terms. Adaptive minimax properties of thresholding wavelet deconvolution estimators are also discussed. Keywords: Adaptive estimation, Besov spaces, Kullback–Leibler information, linear estimators, minimax estimation, thresholding, wavelet bases.
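In outline, a thresholding wavelet deconvolution estimator recovers empirical wavelet coefficients through the Fourier domain and then shrinks them (a sketch in assumed notation, with $\phi_\varepsilon$ the characteristic function of the convolution kernel and $\delta_\lambda$ a hard or soft threshold; the paper's exact construction may differ):

```latex
\hat\phi_Y(t) = \frac{1}{n}\sum_{i=1}^{n} e^{itY_i},
\qquad
\hat\beta_{jk} = \frac{1}{2\pi}\int \hat\phi_Y(t)\,
                 \frac{\overline{\phi_{\psi_{jk}}(t)}}{\phi_\varepsilon(t)}\,dt,
\qquad
\hat f = \sum_k \hat\alpha_{j_0 k}\,\varphi_{j_0 k}
       + \sum_{j \ge j_0}\sum_k \delta_\lambda(\hat\beta_{jk})\,\psi_{jk}.
```

The division by $\phi_\varepsilon$ inflates the variance of the high-frequency coefficients, which is where the ordinary smooth versus supersmooth distinction enters the rates.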
Estimating the null and the proportion of nonnull effects in large-scale multiple comparisons
J. Amer. Statist. Assoc., 2007
"... An important issue raised by Efron [7] in the context of largescale multiple comparisons is that in many applications the usual assumption that the null distribution is known is incorrect, and seemingly negligible differences in the null may result in large differences in subsequent studies. This s ..."
Abstract

Cited by 20 (5 self)
 Add to MetaCart
An important issue raised by Efron [7] in the context of large-scale multiple comparisons is that in many applications the usual assumption that the null distribution is known is incorrect, and seemingly negligible differences in the null may result in large differences in subsequent studies. This suggests that a careful study of estimation of the null is indispensable. In this paper, we consider the problem of estimating a null normal distribution, and a closely related problem, estimation of the proportion of nonnull effects. We develop an approach based on the empirical characteristic function and Fourier analysis. The estimators are shown to be uniformly consistent over a wide class of parameters. Numerical performance of the estimators is investigated using both simulated and real data. In particular, we apply our …
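The Fourier idea can be illustrated with a toy computation. The sketch below is a simplification for illustration only: the paper's estimators choose the frequency in a data-driven way and carry uniform consistency guarantees, none of which is reproduced here.

```python
import numpy as np

def estimate_null(x, t=1.0):
    """Rough estimate of a normal null N(mu, sigma^2) from the empirical
    characteristic function phi_n(t) = mean(exp(i*t*X)): when most effects
    are null, log|phi_n(t)| ~ -sigma^2*t^2/2 and arg(phi_n(t)) ~ mu*t.
    The fixed frequency t is an assumption; the paper's choice is adaptive."""
    phi = np.mean(np.exp(1j * t * x))
    sigma2 = -2.0 * np.log(np.abs(phi)) / t ** 2
    mu = np.angle(phi) / t
    return mu, sigma2

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 9500),   # null effects
                    rng.normal(3.0, 1.0, 500)])   # a few nonnull effects
mu_hat, sigma2_hat = estimate_null(x)
# Roughly (0, 1); the nonnull effects bias sigma2_hat upward a little.
print(mu_hat, sigma2_hat)
```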
Sharp optimality for density deconvolution with dominating bias
Theor. Probab. Appl., 2005
"... bias ..."
General empirical Bayes wavelet methods and exactly adaptive minimax estimation
2005
"... In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risk ..."
Abstract

Cited by 18 (1 self)
 Add to MetaCart
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of the true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our estimators are uniformly adaptive to the minimum risk of separable estimators and to the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly superefficient in convergence rates on all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our estimators dominate threshold and James–Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both white noise with drift and nonparametric regression are considered.
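One way to see what an oracle separable rule looks like is the Gaussian sequence model (a standard sketch in assumed notation, not a summary of the paper's construction): for observations $y_i = \theta_i + \varepsilon z_i$ with $z_i \sim N(0,1)$, the separable rule that is Bayes against the empirical distribution of the $\theta_i$ is given by Tweedie's formula,

```latex
\hat\theta(y) = y + \varepsilon^2 \,\frac{d}{dy}\log f^{*}(y),
```

where $f^{*}$ is the marginal density of the observations. Empirical Bayes methods approximate this oracle by estimating $f^{*}$ from the data themselves.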
Asymptotic normality of nonparametric kernel type deconvolution density estimators: crossing the Cauchy boundary
J. Nonparametr. Stat., 2004
"... We derive asymptotic normality of kernel type deconvolution density estimators. In particular we consider deconvolution problems where the known component of the convolution has a symmetric λstable distribution with 0 < λ ≤ 2. It turns out that the limit behavior changes if the exponent parameter λ ..."
Abstract

Cited by 12 (0 self)
 Add to MetaCart
We derive asymptotic normality of kernel type deconvolution density estimators. In particular, we consider deconvolution problems where the known component of the convolution has a symmetric λ-stable distribution with 0 < λ ≤ 2. It turns out that the limit behavior changes as the exponent parameter λ passes the value one, the case of Cauchy deconvolution. AMS classification: primary 62G05; secondary 62E20.
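For reference, the symmetric λ-stable distributions in question are those with characteristic function (scale parametrization assumed)

```latex
\phi_\varepsilon(t) = e^{-|\sigma t|^{\lambda}}, \qquad 0 < \lambda \le 2,
```

so λ = 2 is the normal case and λ = 1 the Cauchy case; the "Cauchy boundary" of the title is the change in limit behavior as λ crosses 1.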
Asymptotic Normality of Kernel Type Deconvolution Estimators
2001
"... We derive asymptotic normality of kernel type deconvolution estimators of the density, the distribution function at a xed point, and of the probability of an interval. We consider the so called super smooth case where the characteristic function of the known distribution decreases exponentially. It ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
We derive asymptotic normality of kernel type deconvolution estimators of the density, of the distribution function at a fixed point, and of the probability of an interval. We consider the so-called super smooth case, where the characteristic function of the known distribution decreases exponentially. It turns out that the limit behavior of the pointwise estimators of the density and distribution function is relatively straightforward, while the asymptotics of the estimator of the probability of an interval depend in a complicated way on the sequence of bandwidths.
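For context, "super smooth" means exponential decay of the error characteristic function, which forces bandwidths that shrink only logarithmically (a standard formulation in assumed notation, not taken from the paper):

```latex
|\phi_\varepsilon(t)| \asymp C\,|t|^{\beta}\, e^{-|t|^{\lambda}/\gamma}
\quad (|t| \to \infty),
\qquad
h \asymp c\,(\log n)^{-1/\lambda},
```

which helps explain why the interval-probability estimator's asymptotics depend so delicately on the bandwidth sequence.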