Results 1–10 of 260
Deconvoluting kernel density estimators
Statistics, 1990
"... This paper considers estimation ofa continuous bounded probability density when observations from the density are contaminated by additive measurement errors having a known distribution. Properties of the estimator obtained by deconvolving a kernel estimator of the observed data are investigated. Wh ..."
Abstract

Cited by 97 (8 self)
 Add to MetaCart
This paper considers estimation of a continuous bounded probability density when observations from the density are contaminated by additive measurement errors having a known distribution. Properties of the estimator obtained by deconvolving a kernel estimator of the observed data are investigated. When the kernel used is sufficiently smooth, the deconvolved estimator is shown to be pointwise consistent and bounds on its integrated mean squared error are derived. Very weak assumptions are made on the measurement-error density, thereby permitting a comparison of the effects of different types of measurement error on the deconvolved estimator.
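The estimator described in this abstract has an explicit Fourier-inversion form: divide the empirical characteristic function of the noisy data by the known error characteristic function, damp with the kernel's Fourier transform, and invert. The following is a minimal illustrative sketch, not the authors' code; the Gaussian error and the kernel with Fourier transform (1 − s²)³ on [−1, 1] are our assumptions.

```python
import numpy as np

def deconv_kde(w, x_grid, h, sigma, m=801):
    """Deconvoluting kernel density estimate from noisy observations `w`,
    assuming additive N(0, sigma^2) measurement error (known) and a smooth
    kernel whose Fourier transform is (1 - s^2)^3 on [-1, 1]."""
    t = np.linspace(-1.0 / h, 1.0 / h, m)          # kernel FT vanishes outside |t| <= 1/h
    dt = t[1] - t[0]
    phi_K = (1.0 - (t * h) ** 2) ** 3              # Fourier transform of the kernel
    phi_eps = np.exp(-0.5 * (sigma * t) ** 2)      # cf of the Gaussian error
    phi_W = np.exp(1j * np.outer(t, np.asarray(w))).mean(axis=1)  # empirical cf of data
    ratio = phi_K * phi_W / phi_eps                # deconvolved, damped cf estimate
    # Inverse Fourier integral on the x-grid (Riemann sum); real part by symmetry.
    return (np.exp(-1j * np.outer(x_grid, t)) @ ratio).real * dt / (2.0 * np.pi)
```

With sigma = 0 the error correction disappears and the routine reduces to an ordinary Fourier-domain kernel density estimate.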
Nonparametric regression with errors in variables
Annals of Statistics, 1993
"... The effect of errors in variables in nonparametric regression estimation is examined. To account for errors in covariates, deconvolution is involved in the construction ofa new class of kernel estimators. It is shown that optima/local and global rates of convergence of these kernel estimators can be ..."
Abstract

Cited by 79 (1 self)
 Add to MetaCart
The effect of errors in variables in nonparametric regression estimation is examined. To account for errors in covariates, deconvolution is involved in the construction of a new class of kernel estimators. It is shown that optimal local and global rates of convergence of these kernel estimators can be characterized by the tail behavior of the characteristic function of the error distribution. In fact, there are two types of rates of convergence according to whether the error is ordinary smooth or super smooth. It is also shown that these results hold uniformly over a class of joint distributions of the response and the covariates, which includes ordinary smooth regression functions as well as covariates with distributions satisfying regularity conditions. Furthermore, to achieve optimality, we show that the convergence rates of all nonparametric estimators have a lower bound possessed by the kernel estimators. Abbreviated title: Error-in-variable regression. AMS 1980 subject classifications: primary 62G20; secondary 62G05, 62J99. Key words and phrases: nonparametric regression; kernel estimator; errors in variables; optimal rates.
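A hedged sketch of the construction this abstract describes: replace the ordinary kernel in a Nadaraya–Watson estimator with a deconvoluting kernel built from the ratio of the kernel's Fourier transform to the error characteristic function. The Gaussian error and the specific kernel transform are our illustrative choices, not the paper's.

```python
import numpy as np

def deconv_kernel(u, h, sigma, m=401):
    """L(u) = (1/2pi) * integral of e^{-itu} * phi_K(t) / phi_eps(t/h) dt,
    with phi_K(t) = (1 - t^2)^3 on [-1, 1] and Gaussian error cf phi_eps."""
    t = np.linspace(-1.0, 1.0, m)
    dt = t[1] - t[0]
    weight = (1.0 - t ** 2) ** 3 / np.exp(-0.5 * (sigma * t / h) ** 2)
    return (np.exp(-1j * np.outer(u, t)) @ weight).real * dt / (2.0 * np.pi)

def eiv_nw(w, y, x_grid, h, sigma):
    """Nadaraya-Watson regression with the deconvoluting kernel replacing the
    ordinary kernel, to correct for measurement error in the covariate W."""
    u = np.subtract.outer(x_grid, w) / h           # (x - W_j) / h
    L = deconv_kernel(u.ravel(), h, sigma).reshape(u.shape)
    return (L * y).sum(axis=1) / L.sum(axis=1)
```

Unlike an ordinary kernel, the deconvoluting kernel takes negative values, so the denominator should be checked in practice before dividing.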
Methodology and convergence rates for functional linear regression
2007
"... In functional linear regression, the slope “parameter ” is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an illposed problem and has points of contact with a range of methodologies, including statistical smoothi ..."
Abstract

Cited by 75 (7 self)
 Add to MetaCart
(Show Context)
In functional linear regression, the slope “parameter” is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.
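The PCA approach can be sketched on discretized curves: eigen-decompose the empirical covariance operator, then divide the cross-covariance coefficients by the leading eigenvalues and truncate. The discretization, basis, and truncation level below are illustrative choices, not the paper's.

```python
import numpy as np

def fpca_slope(X, Y, dt, n_comp):
    """Estimate the slope function b in Y_i = integral b(t) X_i(t) dt + error
    by functional PCA: spectral decomposition of the empirical covariance
    operator, truncated after `n_comp` components. X is (n, p), sampled on a
    grid with spacing dt."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()
    K = (Xc.T @ Xc) / X.shape[0]             # empirical covariance kernel K(s, t)
    g = (Xc * Yc[:, None]).mean(axis=0)      # empirical cross-covariance g(t)
    lam, V = np.linalg.eigh(K * dt)          # discretized covariance operator
    lam, V = lam[::-1], V[:, ::-1]           # eigenpairs in decreasing order
    # b_hat = sum_j <g, phi_j> / lam_j * phi_j; the dt factors cancel in
    # ell^2 coordinates, leaving the plain inner products below.
    return sum((g @ V[:, j]) / lam[j] * V[:, j] for j in range(n_comp))
```

Truncating at `n_comp` regularizes the inversion; dividing by all eigenvalues would blow up the noise in the small-eigenvalue directions, which is the ill-posedness the abstract refers to.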
Wavelet Deconvolution
IEEE Transactions on Information Theory, 2002
"... This paper studies the issue of optimal deconvolution density estimation using wavelets. The approach taken here can be considered as orthogonal series estimation in the more general context of the density estimation. We explore the asymptotic properties of estimators based on thresholding of estima ..."
Abstract

Cited by 64 (1 self)
 Add to MetaCart
(Show Context)
This paper studies the issue of optimal deconvolution density estimation using wavelets. The approach taken here can be considered as orthogonal series estimation in the more general context of density estimation. We explore the asymptotic properties of estimators based on thresholding of estimated wavelet coefficients. Minimax rates of convergence under the integrated square loss are studied over Besov classes B^σ_{p,q} of functions for both ordinary smooth and supersmooth convolution kernels. The minimax rates of convergence depend on the smoothness of the functions to be deconvolved and the decay rate of the characteristic function of the convolution kernels. It is shown that no linear deconvolution estimators can achieve the optimal rates of convergence in the Besov spaces with p < 2 when the convolution kernel is ordinary smooth or supersmooth. If the convolution kernel is ordinary smooth, then linear estimators can be improved by using thresholding wavelet deconvolution estimators, which are asymptotically minimax within logarithmic terms. Adaptive minimax properties of thresholding wavelet deconvolution estimators are also discussed. Keywords: adaptive estimation, Besov spaces, Kullback-Leibler information, linear estimators, minimax estimation, thresholding, wavelet bases.
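The thresholding step can be illustrated in isolation with a hand-rolled Haar transform: soft-threshold the detail coefficients at the universal level σ√(2 log n) and reconstruct. A full deconvolution estimator would first correct the coefficients for the error characteristic function; this sketch (our construction, not the paper's) shows only the wavelet-thresholding machinery.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition of a signal whose length is a power of two."""
    a, details = np.asarray(x, dtype=float), []
    while a.size > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a, details

def haar_idwt(approx, details):
    """Exact inverse of haar_dwt."""
    a = approx
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def wavelet_denoise(x, sigma):
    """Soft-threshold the detail coefficients at the universal level
    sigma * sqrt(2 log n), then reconstruct."""
    lam = sigma * np.sqrt(2.0 * np.log(x.size))
    a, details = haar_dwt(x)
    details = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in details]
    return haar_idwt(a, details)
```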
Schennach (2008), “Instrumental Variable Treatment of Nonclassical Measurement Error Models”
Econometrica, 2008
"... The copyright to this Article is held by the Econometric Society. It may be downloaded, printed and reproduced only for educational or research purposes, including use in course packs. No downloading or copying may be done for any commercial purpose without the explicit permission of the Econometric ..."
Abstract

Cited by 59 (18 self)
 Add to MetaCart
Convergence rates of general regularization methods for statistical inverse problems and applications
2007
"... Abstract. During the past the convergence analysis for linear statistical inverse problems has mainly focused on spectral cutoff and Tikhonov type estimators. Spectral cutoff estimators achieve minimax rates for a broad range of smoothness classes and operators, but their practical usefulness is l ..."
Abstract

Cited by 55 (12 self)
 Add to MetaCart
(Show Context)
In the past, convergence analysis for linear statistical inverse problems has mainly focused on spectral cut-off and Tikhonov-type estimators. Spectral cut-off estimators achieve minimax rates for a broad range of smoothness classes and operators, but their practical usefulness is limited by the fact that they require a complete spectral decomposition of the operator. Tikhonov estimators are simpler to compute, but still involve the inversion of an operator and achieve minimax rates only in restricted smoothness classes. In this paper we introduce a unifying technique to study the mean square error of a large class of regularization methods (spectral methods), including the aforementioned estimators as well as many iterative methods, such as ν-methods and the Landweber iteration. The latter estimators converge at the same rate as spectral cut-off, but require only matrix-vector products. Our results are applied to various problems; in particular, we obtain precise convergence rates for satellite gradiometry, L2-boosting, and errors-in-variables problems. AMS subject classifications: 62G05, 62J05, 62P35, 65J10, 35R30. Key words: statistical inverse problems, iterative regularization methods, Tikhonov regularization, nonpara
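The Landweber iteration mentioned in the abstract is easy to state: f ← f + μ Aᵀ(y − A f), which indeed needs only matrix-vector products, with the iteration count playing the role of the regularization parameter (early stopping). A minimal sketch, tested below on a discretized integration operator (our choice of test problem):

```python
import numpy as np

def landweber(A, y, n_iter, mu=None):
    """Landweber iteration f <- f + mu * A^T (y - A f) for y ~ A f + noise.
    Requires only matrix-vector products; n_iter acts as the regularization
    parameter via early stopping."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # mu < 2 / ||A||^2 ensures convergence
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        f = f + mu * (A.T @ (y - A @ f))
    return f
```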
Sharp optimality for density deconvolution with dominating bias
Theor. Probab. Appl., 2005
"... bias ..."
Identification and Estimation in Highway Procurement Auctions under Unobserved Auction Heterogeneity
2004
"... The accurate assessment of participants’ private information may critically affect policy recommendations in auction markets. In many auction environments estimation of the private information distribution may be complicated by the presence of unobserved heterogeneity. This problem arises when some ..."
Abstract

Cited by 44 (1 self)
 Add to MetaCart
The accurate assessment of participants’ private information may critically affect policy recommendations in auction markets. In many auction environments, estimation of the private information distribution may be complicated by the presence of unobserved heterogeneity. This problem arises when some of the information available to all bidders at the time of the auction is subsequently not observed by the researcher. This paper develops a semiparametric method that allows a researcher to uncover the distribution of bidders’ private information in a standard first-price procurement auction when unobserved auction heterogeneity is present. Sufficient identification conditions are derived and a two-stage estimation procedure to recover bidders’ private information is developed. The procedure is applied to data from Michigan highway procurement auctions and compared to the estimation procedures traditionally used in the context of highway procurement auctions. The estimation results suggest that ignoring unobserved auction heterogeneity is likely to result in substantially biased estimates and may lead to erroneous policy recommendations.
Nonparametric estimation for Lévy processes from low-frequency observations
Bernoulli 15, 223–248, 2009
"... We suppose that a Lévy process is observed at discrete time points. A rather general construction of minimumdistance estimators is shown to give consistent estimators of the LévyKhinchine characteristics as the number of observations tends to infinity, keeping the observation distance fixed. For a ..."
Abstract

Cited by 41 (7 self)
 Add to MetaCart
We suppose that a Lévy process is observed at discrete time points. A rather general construction of minimum-distance estimators is shown to give consistent estimators of the Lévy–Khinchine characteristics as the number of observations tends to infinity, keeping the observation distance fixed. For a specific C²-criterion this estimator is rate-optimal. The connection with deconvolution and inverse problems is explained. A key step in the proof is a uniform control on the deviations of the empirical characteristic function on the whole real line. 2000 Mathematics Subject Classification: primary 62G15; secondary 62M15. Keywords and phrases: Lévy–Khinchine characteristics, density estimation, minimum-distance estimator, deconvolution. Short title: Nonparametric estimation for Lévy processes.
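The empirical-characteristic-function idea can be illustrated in the simplest Lévy case, Brownian motion with drift, where the minimum-distance fit reduces to matching exp(Δ(ibt − σ²t²/2)) to the empirical cf of the increments. A hedged sketch, not the paper's estimator; the grid search, t-grid, and distance weighting are our illustrative choices.

```python
import numpy as np

def fit_bm_by_ecf(increments, delta, b_grid, s_grid, t_max=3.0, m=61):
    """Minimum-distance fit of drift b and volatility s for Brownian motion
    with drift, observed through increments at fixed distance `delta`:
    minimize the mean squared distance between the empirical cf of the
    increments and the model cf exp(delta * (i*b*t - s^2 * t^2 / 2))."""
    t = np.linspace(-t_max, t_max, m)
    ecf = np.exp(1j * np.outer(t, increments)).mean(axis=1)  # empirical cf
    best, best_val = None, np.inf
    for b in b_grid:
        for s in s_grid:
            model = np.exp(delta * (1j * b * t - 0.5 * s ** 2 * t ** 2))
            val = np.mean(np.abs(ecf - model) ** 2)
            if val < best_val:
                best, best_val = (b, s), val
    return best
```

For richer Lévy triplets the same distance criterion applies, with the model cf replaced by the full Lévy–Khinchine exponent and the grid search by a proper optimizer.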