Results 1-10 of 52
Nonparametric regression with errors in variables
Annals of Statistics, 1993
Abstract

Cited by 84 (1 self)
The effect of errors in variables in nonparametric regression estimation is examined. To account for errors in covariates, deconvolution is involved in the construction of a new class of kernel estimators. It is shown that optimal local and global rates of convergence of these kernel estimators can be characterized by the tail behavior of the characteristic function of the error distribution. In fact, there are two types of rates of convergence according to whether the error is ordinary smooth or super smooth. It is also shown that these results hold uniformly over a class of joint distributions of the response and the covariates, which includes ordinary smooth regression functions as well as covariates with distributions satisfying regularity conditions. Furthermore, to achieve optimality, we show that the convergence rates of all nonparametric estimators have a lower bound possessed by the kernel estimators.
Abbreviated title: Error-in-variable regression. AMS 1980 subject classifications: primary 62G20; secondary 62G05, 62J99. Key words and phrases: nonparametric regression; kernel estimator; errors in variables; optimal rates.
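The deconvoluting-kernel construction described above can be sketched briefly. A minimal illustration, assuming Gaussian measurement error with known standard deviation and the compactly supported Fourier-domain kernel φ_K(t) = (1 − t²)³ on [−1, 1] (both choices are mine for illustration, not prescribed by the paper):

```python
import numpy as np

def deconv_kernel(u, h, sigma):
    """Deconvoluting kernel K*(u) = (1/2pi) * int exp(-i t u) phi_K(t) / phi_eps(t/h) dt.

    phi_K(t) = (1 - t^2)^3 on [-1, 1]: its compact support keeps the integral
    finite even for supersmooth Gaussian error, whose characteristic function
    phi_eps(s) = exp(-sigma^2 s^2 / 2) decays very fast (sigma assumed known).
    """
    t = np.linspace(-1.0, 1.0, 2001)
    dt = t[1] - t[0]
    ratio = (1.0 - t**2) ** 3 * np.exp(0.5 * (sigma * t / h) ** 2)
    # phi_K and phi_eps are even, so the inverse transform is real (cosine part).
    return (np.cos(np.outer(u, t)) * ratio).sum(axis=-1) * dt / (2.0 * np.pi)

def deconv_density(x, W, h, sigma):
    """Kernel density estimate of f_X from contaminated data W_j = X_j + eps_j:
    f_hat(x) = (1/(n h)) * sum_j K*((x - W_j) / h)."""
    W = np.asarray(W, float)
    return np.array([deconv_kernel((xi - W) / h, h, sigma).mean() / h
                     for xi in np.asarray(x, float)])
```

The same kernel weights, applied in a Nadaraya-Watson ratio, give the regression estimator the abstract refers to; the density version above shows the essential deconvolution step.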
Wavelet Deconvolution
IEEE Transactions on Information Theory, 2002
Abstract

Cited by 65 (1 self)
This paper studies the issue of optimal deconvolution density estimation using wavelets. The approach taken here can be considered as orthogonal series estimation in the more general context of density estimation. We explore the asymptotic properties of estimators based on thresholding of estimated wavelet coefficients. Minimax rates of convergence under the integrated square loss are studied over Besov classes B^σ_{p,q} of functions for both ordinary smooth and supersmooth convolution kernels. The minimax rates of convergence depend on the smoothness of the functions to be deconvolved and the decay rate of the characteristic function of the convolution kernels. It is shown that no linear deconvolution estimator can achieve the optimal rates of convergence in the Besov spaces with p < 2, whether the convolution kernel is ordinary smooth or supersmooth. If the convolution kernel is ordinary smooth, then linear estimators can be improved by using thresholding wavelet deconvolution estimators, which are asymptotically minimax within logarithmic terms. Adaptive minimax properties of thresholding wavelet deconvolution estimators are also discussed.
Keywords: adaptive estimation, Besov spaces, Kullback-Leibler information, linear estimators, minimax estimation, thresholding, wavelet bases.
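The wavelet-coefficient thresholding at the core of such estimators is easy to sketch. A minimal illustration in the direct (no-convolution) setting, using a hand-rolled orthonormal Haar transform and the universal threshold σ√(2 log n); names are illustrative, and the paper's estimators work with general wavelet bases in the harder deconvolution model:

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar transform: (approx, detail) coefficients.
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_idwt(a, d):
    # Exact inverse of haar_dwt.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, lam):
    # Soft thresholding: shrink each coefficient toward zero by lam.
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def denoise(y, sigma):
    # Full Haar decomposition (len(y) must be a power of two), soft-threshold
    # all detail coefficients at the universal level sigma * sqrt(2 log n),
    # then reconstruct.
    n = len(y)
    lam = sigma * np.sqrt(2.0 * np.log(n))
    a, details = np.asarray(y, float), []
    while len(a) > 1:
        a, d = haar_dwt(a)
        details.append(soft(d, lam))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

In the deconvolution setting the empirical coefficients would be computed from Fourier-inverted data rather than directly, but the thresholding step is the same.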
Sharp optimality for density deconvolution with dominating bias, II
Theory of Probability and its Applications, 2007
Abstract

Cited by 56 (6 self)
We consider estimation of the common probability density f of i.i.d. random variables Xi that are observed with an additive i.i.d. noise. We assume that the unknown density f belongs to a class A of densities whose characteristic function decays as exp(−α|u|^r) as |u| → ∞, where α > 0, r > 0. The noise density is supposed to be known and such that its characteristic function decays as exp(−β|u|^s) as |u| → ∞, where β > 0, s > 0. Assuming that r < s, we suggest a kernel-type estimator whose variance turns out to be asymptotically negligible with respect to its squared bias under both the pointwise and the L2 risks. We prove in Part II that this estimator is optimal in a sharp asymptotic minimax sense on A simultaneously under the pointwise and the L2 risks. For r < s/2 we construct a sharp adaptive estimator of f.
Dirichlet Prior Sieves in Finite Normal Mixtures
Statistica Sinica, 2002
Abstract

Cited by 54 (1 self)
The use of a finite-dimensional Dirichlet prior in the finite normal mixture model has the effect of acting like a Bayesian method of sieves. Posterior consistency is directly related to the dimension of the sieve and the choice of the Dirichlet parameters in the prior. We find that naive use of the popular uniform Dirichlet prior leads to an inconsistent posterior. However, a simple adjustment to the parameters in the prior induces a random probability measure that approximates the Dirichlet process and yields a posterior that is strongly consistent for the density and weakly consistent for the unknown mixing distribution. The dimension of the resulting sieve can be selected easily in practice, and a simple and efficient Gibbs sampler can be used to sample the posterior of the mixing distribution.
Key words and phrases: Bose-Einstein distribution, Dirichlet process, identification, method of sieves, random probability measure, relative entropy, weak convergence.
Estimating the null and the proportion of non-null effects in large-scale multiple comparisons
J. Amer. Statist. Assoc., 2007
Abstract

Cited by 40 (7 self)
An important issue raised by Efron [7] in the context of large-scale multiple comparisons is that in many applications the usual assumption that the null distribution is known is incorrect, and seemingly negligible differences in the null may result in large differences in subsequent studies. This suggests that a careful study of estimation of the null is indispensable. In this paper, we consider the problem of estimating a null normal distribution, and a closely related problem, estimation of the proportion of non-null effects. We develop an approach based on the empirical characteristic function and Fourier analysis. The estimators are shown to be uniformly consistent over a wide class of parameters. Numerical performance of the estimators is investigated using both simulated and real data. In particular, we apply our ...
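The empirical-characteristic-function idea can be illustrated in the pure-null Gaussian case: for Z ~ N(μ, σ²), φ(t) = exp(itμ − σ²t²/2), so μ and σ² can be read off the ECF at a fixed frequency. A hypothetical sketch (the fixed frequency t and all names are my choices; the paper's estimator additionally handles non-null contamination and chooses frequencies adaptively):

```python
import numpy as np

def estimate_null(z, t=0.5):
    """Read the null mean and variance off the empirical characteristic function.

    For Z ~ N(mu, sigma^2), phi(t) = exp(i t mu - sigma^2 t^2 / 2), hence
    sigma^2 = -2 log|phi(t)| / t^2 and mu = arg(phi(t)) / t.
    The fixed frequency t is an illustrative choice.
    """
    phi = np.exp(1j * t * np.asarray(z, float)).mean()   # empirical cf at t
    sigma2 = -2.0 * np.log(np.abs(phi)) / t**2
    mu = np.angle(phi) / t
    return mu, sigma2
```

With non-null effects present, |φ(t)| is contaminated at every t, which is why the actual method requires a more careful Fourier analysis than this two-line inversion.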
General empirical Bayes wavelet methods and exactly adaptive minimax estimation

2005
Abstract

Cited by 25 (3 self)
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of the true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our estimators are uniformly adaptive to the minimum risk of separable estimators and the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly superefficient in convergence rates on all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our estimators dominate threshold and James-Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both white noise with drift and nonparametric regression are considered.
Deconvolution with unknown error distribution
Forthcoming in Annals of Statistics, 2009
Abstract

Cited by 25 (4 self)
We consider the problem of estimating a density fX using a sample Y1, ..., Yn from fY = fX ⋆ fǫ, where fǫ is an unknown density. We assume that an additional sample ǫ1, ..., ǫm from fǫ is observed. Estimators of fX and its derivatives are constructed by using nonparametric estimators of fY and fǫ and by applying a spectral cutoff in the Fourier domain. We derive the rates of convergence of the estimators in the cases of a known and an unknown error density fǫ, where it is assumed that fX satisfies a polynomial, logarithmic or general source condition. It is shown that the proposed estimators are asymptotically optimal in a minimax sense in the models with known or unknown error density, if the density fX belongs to a Sobolev space H and fǫ is ordinary smooth or supersmooth.
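The construction described, with both characteristic functions estimated from data and a spectral cutoff before inversion, can be sketched as follows (grid sizes and the cutoff value T are illustrative choices, not the paper's tuning):

```python
import numpy as np

def deconvolve(x, Y, eps_sample, T):
    """Estimate f_X on the grid x from a sample Y ~ f_Y = f_X * f_eps
    plus an additional sample from the error density f_eps.

    Both characteristic functions are estimated empirically, divided, and the
    quotient is kept only on [-T, T] (the spectral cutoff) before Fourier
    inversion; without the cutoff the division is unstable where phi_eps is small.
    """
    Y = np.asarray(Y, float)
    eps_sample = np.asarray(eps_sample, float)
    t = np.linspace(-T, T, 801)
    dt = t[1] - t[0]
    phi_Y = np.exp(1j * t[:, None] * Y[None, :]).mean(axis=1)    # ecf of Y
    phi_e = np.exp(1j * t[:, None] * eps_sample[None, :]).mean(axis=1)
    phi_X = phi_Y / phi_e                                        # spectral division
    f = np.exp(-1j * np.outer(np.asarray(x, float), t)) @ phi_X  # Fourier inversion
    return f.real * dt / (2.0 * np.pi)
```

When fǫ is known, phi_e would simply be evaluated analytically, which is the known-error model the rates are also derived for.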
Asymptotic normality of nonparametric kernel type deconvolution density estimators: crossing the Cauchy boundary
J. Nonparametr. Stat., 2004
Abstract

Cited by 16 (0 self)
We derive asymptotic normality of kernel type deconvolution density estimators. In particular, we consider deconvolution problems where the known component of the convolution has a symmetric λ-stable distribution with 0 < λ ≤ 2. It turns out that the limit behavior changes as the exponent parameter λ passes the value one, the case of Cauchy deconvolution.
AMS classification: primary 62G05; secondary 62E20.