Results 1–10 of 409
ASYMPTOTIC MINIMAX ESTIMATION IN NONPARAMETRIC AUTOREGRESSION
"... We develop asymptotic theory for nonparametric estimators of the autoregression function. To deal with irregularities in the pattern of explanatory variables caused by their randomness, we propose a new estimator which is a modification of the Priestley–Chao kernel method. It is shown that this estimator has similar asymptotic properties to standard estimators of kernel type. We establish an asymptotic lower bound to the minimax risk in Sobolev classes and show that our modified Priestley–Chao estimator can get arbitrarily close to this efficiency bound. Key words: exact asymptotics, minimax risk ..."
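The paper's modified estimator is not spelled out in the snippet, but the classical (unmodified) Priestley–Chao estimator it builds on is easy to sketch. The following is a minimal illustration, assuming a Gaussian kernel and sorted design points; the function name and the test signal are my own, not the paper's.

```python
import numpy as np

def priestley_chao(t, x, y, h):
    """Classical Priestley-Chao kernel regression estimate at points t.

    x must be sorted in increasing order; h is the bandwidth.
    Uses the Gaussian kernel K(u) = exp(-u^2/2)/sqrt(2*pi) and weights each
    response by the spacing x_i - x_{i-1} to handle an irregular design.
    """
    t = np.atleast_1d(t).astype(float)
    gaps = np.diff(x)                     # spacings x_i - x_{i-1}, i = 2..n
    u = (t[:, None] - x[None, 1:]) / h    # kernel arguments at x_2..x_n
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (K * gaps * y[1:]).sum(axis=1) / h

# Noisy samples of a smooth function on a random (irregular) design.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(400)
m_hat = priestley_chao(np.array([0.25, 0.75]), x, y, h=0.05)
```

The spacing weights are exactly what makes the estimator sensitive to the random-design irregularities the abstract mentions.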
Lower Bounds for the Asymptotic Minimax Risk With . . .
"... Lower bounds for the asymptotic minimax risk are given when estimating the density function with spherical data. Both the pointwise risk and the integrated risk are studied. Results are derived by constructing a sequence of experiments that converge weakly to a Gaussian shift experiment. Lower bound ..."
Cited by 2 (0 self)
Minimax Bayes, asymptotic minimax and sparse wavelet priors, in
 Sciences Paris (A
, 1994
"... Pinsker (1980) gave a precise asymptotic evaluation of the minimax mean squared error of estimation of a signal in Gaussian noise when the signal is known a priori to lie in a compact ellipsoid in Hilbert space. This 'Minimax Bayes' method can be applied to a variety of global nonparametric es ..."
Cited by 45 (9 self)
Asymptotic minimaxity of wavelet estimators with sampled data
 Statist. Sinica
, 1999
"... Donoho and Johnstone (1997) studied a setting where data were obtained in the continuum white noise model and showed that scalar nonlinearities applied to wavelet coefficients gave estimators which were asymptotically minimax over Besov balls. They claimed that this implied similar asymptotic minima ..."
Cited by 24 (2 self)
Asymptotically minimax Bayes predictive densities
, 2002
"... Given a random sample from a distribution with density function that depends on an unknown parameter θ, we are interested in accurately estimating the true parametric density function at a future observation from the same distribution. The asymptotic risk of Bayes predictive density estimates with Kullback–Leibler loss function D(f_θ ‖ f̂) = ∫ f_θ log(f_θ / f̂) is used to examine various ways of choosing prior distributions; the principal type of choice studied is minimax. We seek asymptotically least favorable predictive densities for which the corresponding asymptotic risk is minimax. A result ..."
Cited by 5 (0 self)
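The Kullback–Leibler loss above has a closed form when both densities are Gaussian, which allows a quick numerical illustration of why predictive densities beat plug-in densities. The sketch below assumes the normal location model with known variance 1; the function name is mine, but the closed-form KL divergence and the flat-prior Bayes predictive density N(θ̂, 1 + 1/n) are standard facts.

```python
import math

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence D(N(mu1, var1) || N(mu2, var2)), closed form."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# Exact KL risks for n observations from N(theta, 1); averaging the loss over
# theta_hat ~ N(theta, 1/n) gives these closed forms:
n = 10
plugin_risk = 1.0 / (2 * n)                  # plug-in density N(theta_hat, 1)
predictive_risk = 0.5 * math.log(1 + 1 / n)  # Bayes predictive N(theta_hat, 1 + 1/n)
```

Since log(1 + x) < x, the predictive risk is strictly smaller than the plug-in risk for every n, which is the kind of comparison the paper's asymptotic analysis refines.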
Model selection and sharp asymptotic minimaxity
, 2010
"... We obtain sharp minimax results for estimation of an n-dimensional normal mean under quadratic loss. The estimators are chosen by penalized least squares with a penalty that grows like ck log(n/k), for k equal to the number of nonzero elements in the estimating vector. For a wide range of sparse par ..."
Cited by 7 (0 self)
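In the normal-means setting this penalized least squares estimator has a simple direct form: keep the k largest observations in absolute value, where k minimizes the residual sum of squares plus the penalty. A minimal sketch follows; the constant c and the Birgé–Massart-style "+1" in the penalty (which keeps it positive at k = n) are my illustrative choices, not the paper's tuned penalty.

```python
import numpy as np

def penalized_ls(y, c=2.0, sigma=1.0):
    """Sparse normal-means estimate by l0-penalized least squares.

    Keeps the k largest |y_i|, with k chosen to minimize
        RSS(k) + c * sigma^2 * k * (1 + log(n/k)),
    a positive variant of the c*k*log(n/k) penalty in the abstract.
    """
    n = len(y)
    order = np.argsort(-np.abs(y))           # indices by decreasing |y_i|
    y2 = np.abs(y[order]) ** 2
    # RSS(k) = sum of squared entries NOT kept, for k = 0..n.
    rss = y2.sum() - np.concatenate(([0.0], np.cumsum(y2)))
    ks = np.arange(1, n + 1)
    pen = np.concatenate(([0.0], c * sigma**2 * ks * (1 + np.log(n / ks))))
    k_best = int(np.argmin(rss + pen))
    mu_hat = np.zeros(n)
    mu_hat[order[:k_best]] = y[order[:k_best]]
    return mu_hat

# Sparse mean: 5 strong coordinates out of 200.
rng = np.random.default_rng(1)
mu = np.zeros(200); mu[:5] = 8.0
y = mu + rng.standard_normal(200)
mu_hat = penalized_ls(y)
```

The log(n/k) shape is what adapts the implicit threshold to the unknown sparsity level, which is the mechanism behind the sharp minimaxity result.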
Achievability of Asymptotic Minimax Regret in Online and Batch Prediction
"... The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by batch strategies (i.e., strategies that depend on n) as well as online strategies (i.e., strategies independent of n). On one hand, we conjecture that for a large class of models ..."
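For the Bernoulli model the NML normalizer is small enough to compute exactly, which makes the minimax regret concrete. The sketch below sums the maximized likelihood over all sequences, grouped by the count of ones (the function name is mine; the formula is the standard parametric complexity of the Bernoulli family).

```python
import math

def bernoulli_nml_regret(n):
    """Minimax (NML) log-loss regret for Bernoulli sequences of length n:
    log sum_k C(n,k) * (k/n)^k * (1 - k/n)^(n-k), with 0^0 = 1."""
    total = 0.0
    for k in range(n + 1):
        p = k / n
        loglik = (k * math.log(p) if k > 0 else 0.0) \
               + ((n - k) * math.log(1 - p) if k < n else 0.0)
        total += math.comb(n, k) * math.exp(loglik)
    return math.log(total)
```

The dependence on n in this normalizer is exactly why NML is a batch strategy: changing n changes every assigned probability, which is the obstacle the online strategies in the paper work around.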
Wavelet Method and Asymptotically Minimax Estimation of Regression
"... We attempt to recover a regression function from noisy data. It is assumed that the underlying function is a piecewise entire analytic function. Types and the number of singularities are assumed to be unknown. We show how to choose smoothing parameters and a wavelet basis to achieve the asymptotically minimax risk up to the constant. 1 Introduction. Efficient computational implementation has made wavelets very popular in nonparametric estimation. It is well known that in most cases wavelet-based estimators are almost rate optimal. See e.g. Donoho & Johnstone (1995), (1998), Donoho & ..."
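The standard wavelet shrinkage recipe these papers build on can be sketched in a few lines: transform, soft-threshold the detail coefficients, transform back. The version below uses the orthonormal Haar transform and the universal threshold σ√(2 log n) (VisuShrink); it illustrates the generic recipe only, not the paper's adaptive basis and smoothing-parameter choices.

```python
import numpy as np

def haar_fwd(x):
    """Orthonormal Haar transform of a length-2^J signal."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)  # coarse (scaling) coefficients
        d = (a[0::2] - a[1::2]) / np.sqrt(2)  # detail (wavelet) coefficients
        coeffs.append(d)
        a = s
    return a, coeffs

def haar_inv(a, coeffs):
    for d in reversed(coeffs):
        s, a = a, np.empty(2 * len(a))
        a[0::2] = (s + d) / np.sqrt(2)
        a[1::2] = (s - d) / np.sqrt(2)
    return a

def wavelet_denoise(y, sigma):
    """Soft-threshold Haar details at the universal level sigma*sqrt(2 log n)."""
    lam = sigma * np.sqrt(2 * np.log(len(y)))
    a, coeffs = haar_fwd(y)
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - lam, 0.0) for d in coeffs]
    return haar_inv(a, coeffs)

# A smooth piece plus a jump: the kind of singularity wavelets localize well.
rng = np.random.default_rng(2)
n = 1024
t = np.arange(n) / n
f = np.where(t < 0.5, np.sin(2 * np.pi * t), 1.5)
y = f + 0.1 * rng.standard_normal(n)
f_hat = wavelet_denoise(y, sigma=0.1)
```

The jump produces a handful of large detail coefficients that survive thresholding, while the noise coefficients fall below the universal level and are zeroed.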
Nonasymptotic minimax rates of testing in signal detection
 Bernoulli
, 2002
"... Abstract. Let Y = (Y_i)_{i∈I} be a finite or countable sequence of independent Gaussian random variables of mean f = (f_i)_{i∈I} and common variance σ². For various sets F ⊂ ℓ2(I), the aim of this paper is to describe the minimal ℓ2-distance between f and 0 for the problem of testing "f = 0" against "f ≠ ..."
Cited by 39 (2 self)
"... the cases where F is an ellipsoid and more generally an ℓp-body with p ∈ (0, 2]. Our results are not asymptotic in the sense that we do not assume that σ tends to 0. Finally, we consider the problem of adaptive testing. ..."
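The simplest instance of this testing problem is the ℓ2 (chi-square-type) test: reject "f = 0" when ‖Y‖² exceeds the (1 − α) quantile of its null distribution. The sketch below approximates that quantile by Monte Carlo; it is an illustration of the problem setup only, not the paper's multi-scale tests or their nonasymptotic separation rates.

```python
import numpy as np

def l2_test(y, sigma=1.0, alpha=0.05, n_mc=20000, seed=0):
    """Return (T, threshold) for the test of "f = 0" based on T = sum(Y_i^2),
    with the null (scaled chi-square) quantile estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    null_stats = ((sigma * rng.standard_normal((n_mc, len(y)))) ** 2).sum(axis=1)
    return float(np.sum(y**2)), float(np.quantile(null_stats, 1 - alpha))

# A clearly separated alternative in dimension 50: ||f||^2 = 50 * 1.5^2.
rng = np.random.default_rng(3)
y_alt = rng.standard_normal(50) + 1.5
T_alt, thr = l2_test(y_alt)
```

The minimal ‖f‖ at which such a test gains power, as a function of σ and the shape of F, is exactly what the paper's minimax rates quantify.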
The asymptotic minimax risk for the estimation of constrained binomial and multinomial probabilities. Sankhya
, 2004
"... In this paper we present a direct and simple approach to obtain bounds on the asymptotic minimax risk for the estimation of constrained binomial and multinomial proportions. Quadratic, normalized quadratic and entropy loss are considered and it is demonstrated that in all cases linear estimators are ..."
Cited by 6 (0 self)
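In the unconstrained binomial case the minimax linear estimator under quadratic loss is classical: p̂ = (X + √n/2)/(n + √n), whose risk n/(4(n + √n)²) is constant in p. A quick closed-form check of that constancy (illustrative background only; the paper's constrained bounds are not reproduced here):

```python
import numpy as np

def minimax_risk_curve(n, p_grid):
    """Exact quadratic risk, variance + squared bias, of the classical
    minimax estimator p_hat = (X + sqrt(n)/2) / (n + sqrt(n)),
    where X ~ Binomial(n, p)."""
    s = np.sqrt(n)
    var = n * p_grid * (1 - p_grid) / (n + s) ** 2
    bias = (n * p_grid + s / 2) / (n + s) - p_grid
    return var + bias**2

n = 25
p = np.linspace(0.0, 1.0, 11)
risk = minimax_risk_curve(n, p)
# The curve is flat: every value equals n / (4 * (n + sqrt(n))^2),
# which is below the MLE's maximum risk 1/(4n).
```

Constraining p to a subinterval changes the least favorable prior and hence the optimal linear coefficients, which is the situation the paper's bounds address.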