Results 1 – 9 of 9
Oracle Inequalities for Inverse Problems
, 2000
"... We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in that has the smallest risk on the true f . Under general conditions, ..."
Abstract

Cited by 43 (6 self)
We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in that set that has the smallest risk on the true f. Under general conditions, we show that this can be achieved by simple minimization of an unbiased risk estimator, provided the singular values of the operator of the inverse problem decrease as a power law. The main result is a nonasymptotic oracle inequality that is shown to be asymptotically exact. This inequality can also be used to obtain sharp minimax adaptive results. In particular, we apply it to show that minimax adaptation on ellipsoids in the multivariate anisotropic case is realized by minimization of an unbiased risk estimator without any loss of efficiency with respect to optimal nonadaptive procedures. Mathematics Subject Classifications: 62G05, 62G20 Key Words: Statistical inverse problems, Oracle inequalities...
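The selection rule this abstract describes can be illustrated with a small sketch (not the authors' construction; the power-law decay, noise level, and cut-off family below are all illustrative assumptions): in a sequence-space model y_k = b_k θ_k + ε ξ_k, one compares a finite family of spectral cut-off estimators by minimizing an unbiased risk estimate, and checks the choice against the oracle that knows θ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
k = np.arange(1, n + 1)
b = k ** -1.0           # singular values decaying as a power law (assumption)
theta = k ** -1.5       # true coefficients of f (assumption)
eps = 0.01              # noise level (assumption)

# sequence-space observations: y_k = b_k * theta_k + eps * xi_k
y = b * theta + eps * rng.standard_normal(n)

# finite family of linear (spectral cut-off) estimators: lambda_k = 1{k <= m}
cutoffs = [10, 25, 50, 100, 200]
weights = {m: (k <= m).astype(float) for m in cutoffs}

def ure(lam):
    """Unbiased estimate of the risk of theta_hat_k = lam_k * y_k / b_k,
    using (y_k^2 - eps^2) / b_k^2 as an unbiased estimate of theta_k^2."""
    return float(np.sum(lam ** 2 * eps ** 2 / b ** 2
                        + (1 - lam) ** 2 * (y ** 2 - eps ** 2) / b ** 2))

def true_risk(lam):
    """Exact risk, available only to an oracle that knows theta."""
    return float(np.sum(lam ** 2 * eps ** 2 / b ** 2
                        + (1 - lam) ** 2 * theta ** 2))

m_ure = min(cutoffs, key=lambda m: ure(weights[m]))
m_oracle = min(cutoffs, key=lambda m: true_risk(weights[m]))
```

With power-law singular values the noise inflation ε²/b_k² grows quickly, so the unbiased risk estimate typically picks the same (small) cut-off as the oracle.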
Sharp Adaptation for Inverse Problems With Random Noise
, 2000
"... We consider a heteroscedastic sequence space setup with polynomially increasing variances of observations that allows to treat a number of inverse problems, in particular multivariate ones. We propose an adaptive estimator that attains simultaneously exact asymptotic minimax constants on every ellip ..."
Abstract

Cited by 41 (6 self)
We consider a heteroscedastic sequence space setup with polynomially increasing variances of observations that allows us to treat a number of inverse problems, in particular multivariate ones. We propose an adaptive estimator that attains simultaneously exact asymptotic minimax constants on every ellipsoid of functions within a wide scale (which includes ellipsoids with polynomially and exponentially decreasing axes) and, at the same time, satisfies asymptotically exact oracle inequalities within any class of linear estimates having monotone nondecreasing weights. As an application, we construct sharp adaptive estimators in the problems of deconvolution and tomography.
General empirical Bayes wavelet methods and exactly adaptive minimax estimation

, 2005
"... In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risk ..."
Abstract

Cited by 17 (1 self)
In many statistical problems, stochastic signals can be represented as a sequence of noisy wavelet coefficients. In this paper, we develop general empirical Bayes methods for the estimation of the true signal. Our estimators approximate certain oracle separable rules and achieve adaptation to ideal risks and exact minimax risks in broad collections of classes of signals. In particular, our estimators are uniformly adaptive to the minimum risk of separable estimators and to the exact minimax risks simultaneously in Besov balls of all smoothness and shape indices, and they are uniformly superefficient in convergence rates in all compact sets in Besov spaces with a finite secondary shape parameter. Furthermore, in classes nested between Besov balls of the same smoothness index, our estimators dominate threshold and James–Stein estimators within an infinitesimal fraction of the minimax risks. More general block empirical Bayes estimators are developed. Both white noise with drift and nonparametric regression are considered.
Linear and convex aggregation of density estimators
, 2004
"... We study the problem of learning the best linear and convex combination of M estimators of a density with respect to the mean squared risk. We suggest aggregation procedures and we prove sharp oracle inequalities for their risks, i.e., oracle inequalities with leading constant 1. We also obtain lowe ..."
Abstract

Cited by 16 (1 self)
We study the problem of learning the best linear and convex combination of M estimators of a density with respect to the mean squared risk. We suggest aggregation procedures and we prove sharp oracle inequalities for their risks, i.e., oracle inequalities with leading constant 1. We also obtain lower bounds showing that these procedures attain optimal rates of aggregation. As an example, we consider aggregation of multivariate kernel density estimators with different bandwidths. We show that linear and convex aggregates mimic the kernel oracles in an asymptotically exact sense. We prove that, for Pinsker’s kernel, the proposed aggregates are sharp asymptotically minimax simultaneously over a large scale of Sobolev classes of densities. Finally, we provide simulations demonstrating the performance of the convex aggregation procedure.
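A minimal sketch of convex aggregation of kernel density estimators under the L2 criterion (a simplification with sample splitting, assumed bandwidths, and a coarse simplex search, not the paper's procedure): the empirical criterion λᵀGλ − 2λᵀc estimates ∫f_λ² − 2∫f_λ f up to a constant, where G collects the pairwise inner products of the candidate estimators.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
# sample splitting: one half builds the KDEs, the other drives aggregation
x_train = rng.normal(0.0, 1.0, 200)
x_val = rng.normal(0.0, 1.0, 200)
bandwidths = [0.1, 0.4, 1.0]        # candidate bandwidths (assumption)

def kde(points, h, x):
    """Gaussian kernel density estimate built from `points`, evaluated at x."""
    x = np.atleast_1d(x)
    z = (x[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(points) * h * np.sqrt(2 * np.pi))

def gram(points, hs):
    """G[j,l] = integral f_j f_l; for Gaussian kernels the cross term is a
    Gaussian with variance h_j^2 + h_l^2 (closed form)."""
    M = len(hs)
    G = np.empty((M, M))
    d = points[:, None] - points[None, :]
    for j in range(M):
        for l in range(M):
            s2 = hs[j] ** 2 + hs[l] ** 2
            G[j, l] = np.mean(np.exp(-0.5 * d ** 2 / s2)) / np.sqrt(2 * np.pi * s2)
    return G

G = gram(x_train, bandwidths)
c = np.array([kde(x_train, h, x_val).mean() for h in bandwidths])

def l2_criterion(lam):
    # empirical version of  integral f_lam^2 - 2 * integral f_lam * f
    return float(lam @ G @ lam - 2 * lam @ c)

# coarse search over the simplex; the grid includes the vertices, so the
# convex aggregate is never worse than the best single estimator here
grid = np.linspace(0, 1, 11)
best = None
for l1, l2 in product(grid, grid):
    if l1 + l2 <= 1:
        lam = np.array([l1, l2, 1 - l1 - l2])
        if best is None or l2_criterion(lam) < l2_criterion(best):
            best = lam
```

The coarse grid stands in for an exact quadratic program over the simplex; any convex-optimization routine would do the same job for larger M.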
On minimax density estimation on R
 Bernoulli
, 2004
"... Abstract: the problem of density estimation on R from an independent sample X1,...XN with common density f is concerned. The behavior of the minimax Lprisk, 1 ≤ p ≤ ∞, is studied when f belongs to a Hölder class of regularity s on the real line. The lower bound for the minimax risk is provided. We ..."
Abstract

Cited by 13 (0 self)
Abstract: We consider the problem of density estimation on R from an independent sample X1, ..., XN with common density f. The behavior of the minimax Lp-risk, 1 ≤ p ≤ ∞, is studied when f belongs to a Hölder class of regularity s on the real line. A lower bound for the minimax risk is provided. We show that the linear estimator is not efficient in this setting and construct a wavelet adaptive estimator which attains (up to a logarithmic factor in N) the lower bounds involved. We show that the minimax risk depends on the parameter p when p < 2 + 1/s. Key words: nonparametric density estimation, minimax estimation, adaptive estimation.
ESTIMATION OF THE DENSITY OF REGRESSION ERRORS
, 2004
"... Estimation of the density of regression errors is a fundamental issue in regression analysis and it is typically explored via a parametric approach. This article uses a nonparametric approach with the mean integrated squared error (MISE) criterion. It solves a longstanding problem, formulated two d ..."
Abstract

Cited by 5 (2 self)
Estimation of the density of regression errors is a fundamental issue in regression analysis, and it is typically explored via a parametric approach. This article uses a nonparametric approach with the mean integrated squared error (MISE) criterion. It solves a longstanding problem, formulated two decades ago by Mark Pinsker, about estimation of a nonparametric error density in a nonparametric regression setting with the accuracy of an oracle that knows the underlying regression errors. The solution implies that, under a mild assumption on the differentiability of the design density and the regression function, the MISE of a data-driven error density estimator attains the minimax rates and sharp constants known for the case of directly observed regression errors. The result holds for error densities with finite and infinite supports. Some extensions of this result to more general heteroscedastic models with possibly dependent errors and predictors are also obtained; in the latter case the marginal error density is estimated. In all considered cases a blockwise-shrinking Efromovich–Pinsker density estimate, based on plugged-in residuals, is used. The obtained results give a theoretical justification of the customary practice in applied regression analysis of treating residuals as proxies for the underlying regression errors. Numerical and real examples are presented and discussed, and the S-PLUS software is available.
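The idea of blockwise shrinkage applied to plugged-in residuals can be sketched as follows (a toy cosine-series version with an assumed polynomial pilot fit and an assumed block layout, not the Efromovich–Pinsker estimator of the paper): estimate the error density from residuals by a series estimator whose coefficients are damped block by block according to an estimated signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 1, n)
m_true = np.sin(2 * np.pi * x)          # regression function (assumption)
y_obs = m_true + rng.normal(0, 1, n)    # errors ~ N(0, 1) (assumption)

# plugged-in residuals from a crude polynomial pilot fit (assumption)
fit = np.polyval(np.polyfit(x, y_obs, 5), x)
resid = y_obs - fit

# cosine-series density estimate of the error density on [a_, b_]
a_, b_ = -4.0, 4.0
J = 32
j = np.arange(1, J + 1)

def phi(jj, t):
    return np.sqrt(2.0 / (b_ - a_)) * np.cos(jj * np.pi * (t - a_) / (b_ - a_))

theta_hat = np.array([phi(jj, resid).mean() for jj in j])
var_hat = np.array([phi(jj, resid).var() / n for jj in j])

# blockwise shrinkage: weight (1 - V_B / S_B)_+ per block, clipped to [0, 1]
blocks = [np.arange(0, 4), np.arange(4, 12), np.arange(12, 32)]
mu = np.empty(J)
for B in blocks:
    S = np.sum(theta_hat[B] ** 2)   # estimated signal energy in the block
    V = np.sum(var_hat[B])          # estimated noise level in the block
    mu[B] = max(0.0, 1.0 - V / S) if S > 0 else 0.0

grid_t = np.linspace(a_, b_, 201)
f_hat = 1.0 / (b_ - a_) + sum(mu[i] * theta_hat[i] * phi(j[i], grid_t)
                              for i in range(J))
```

Blocks with little estimated signal relative to noise are shrunk toward zero, which is the mechanism behind the adaptivity claimed in the abstract.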
EXACT MINIMAX RISK FOR DENSITY ESTIMATORS IN NONINTEGER SOBOLEV CLASSES
, 2008
"... The L2minimax risk in Sobolev classes of densities with noninteger smoothness index is shown to have an analog form to that in integer Sobolev classes. To this end, the notion of Sobolev classes is generalized to fractional derivatives of order β ∈ R +. A minimax kernel density estimator for such ..."
Abstract

Cited by 1 (0 self)
The L2-minimax risk in Sobolev classes of densities with noninteger smoothness index is shown to have a form analogous to that in integer Sobolev classes. To this end, the notion of Sobolev classes is generalized to fractional derivatives of order β ∈ R+. A minimax kernel density estimator for such classes is found. Although no corresponding proof has existed in the literature so far, the result of this article has been used implicitly in numerous papers; this article fills that gap.
SPADES AND MIXTURE MODELS
"... This paper studies sparse density estimation via ℓ1 penalization (SPADES). We focus on estimation in highdimensional mixture models and nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of probabilit ..."
Abstract
This paper studies sparse density estimation via ℓ1 penalization (SPADES). We focus on estimation in high-dimensional mixture models and nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of probability densities and that it yields minimax adaptive density estimates. These results are based on a general sparsity oracle inequality that the SPADES estimates satisfy. We offer a data-driven method for the choice of the tuning parameter used in the construction of SPADES. The method uses the generalized bisection method first introduced in [10]. The suggested procedure bypasses the need for a grid search and offers substantial computational savings. We complement our theoretical results with a simulation study that employs this method for approximations of one- and two-dimensional densities with mixtures. The numerical results strongly support our theoretical findings.
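A hedged sketch of a SPADES-type program (the dictionary, penalty weight, and solver below are illustrative assumptions, not the paper's choices): minimize the empirical L2 criterion plus an ℓ1 penalty over the weights of a dictionary of candidate densities, here by proximal gradient descent (ISTA) on the convex objective.

```python
import numpy as np

rng = np.random.default_rng(3)
# data from a two-component Gaussian mixture (assumption)
n = 500
labels = rng.random(n) < 0.5
X = np.where(labels, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

# dictionary of candidate densities: N(mu, 1) for mu on a grid (assumption)
means = np.arange(-6.0, 6.5, 1.0)   # 13 candidates
M = len(means)

# G[j,l] = integral p_j p_l (Gaussian convolution, closed form);
# c_j = empirical mean of p_j over the sample
d = means[:, None] - means[None, :]
G = np.exp(-d ** 2 / 4.0) / (2.0 * np.sqrt(np.pi))
c = np.array([np.exp(-0.5 * (X - m) ** 2).mean() / np.sqrt(2 * np.pi)
              for m in means])

omega = 0.05                         # penalty weight (tuning assumption)

def objective(lam):
    # empirical L2 criterion plus an l1 penalty, as in an l1-penalized
    # density-estimation program of the SPADES type
    return float(lam @ G @ lam - 2 * lam @ c + 2 * omega * np.abs(lam).sum())

# ISTA: gradient step on the quadratic part, soft-threshold for the penalty
step = 1.0 / (2.0 * np.linalg.eigvalsh(G).max())
lam = np.zeros(M)
for _ in range(500):
    grad = 2.0 * (G @ lam - c)
    z = lam - step * grad
    lam = np.sign(z) * np.maximum(np.abs(z) - step * 2.0 * omega, 0.0)
```

Dictionary components far from the data receive small empirical correlations c_j and are thresholded to exactly zero, which is the sparsity mechanism behind the mixture-recovery result.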
SPADES AND MIXTURE MODELS
Florentina Bunea
"... Abstract. This paper studies sparse density estimation via ℓ1 penalization (SPADES). We focus on estimation in highdimensional mixture models and nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of ..."
Abstract
This paper studies sparse density estimation via ℓ1 penalization (SPADES). We focus on estimation in high-dimensional mixture models and nonparametric adaptive density estimation. We show, respectively, that SPADES can recover, with high probability, the unknown components of a mixture of probability densities and that it yields minimax adaptive density estimates. These results are based on a general sparsity oracle inequality that the SPADES estimates satisfy.