Results 1–10 of 47
Kernel Mean Estimation and Stein Effect
"... A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is an important part of many algorithms ranging from kernel principal component analysis to Hilbertspace embedding of distributions. Given a finite sample, an empirical average is the standard estimate for the true ke ..."

Cited by 3 (0 self)
kernel mean. We show that this estimator can be improved due to a well-known phenomenon in statistics called Stein’s phenomenon. Our theoretical analysis reveals the existence of a wide class of estimators that are better than the standard one. Focusing on a subset of this class
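The shrinkage idea described in this entry can be sketched in a few lines: the estimator below shrinks the standard empirical kernel mean toward zero by a fixed factor alpha. This is a minimal illustration of the general idea, not the paper's specific construction; the RBF kernel, the data, and the alpha value are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of the Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def shrunk_kernel_mean_weights(n, alpha):
    # Empirical kernel mean:   mu_hat   = (1/n) * sum_i k(x_i, .)
    # Shrinkage toward zero:   mu_alpha = (1 - alpha) * mu_hat
    # Both lie in the span of {k(x_i, .)}, so the estimator is fully
    # described by its expansion weights over the sample points.
    return (1.0 - alpha) * np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
K = rbf_kernel(X, X)

w_plain = np.full(50, 1.0 / 50)
w_shrunk = shrunk_kernel_mean_weights(50, alpha=0.1)

# Squared RKHS norm ||mu||^2 = w^T K w; shrinkage scales it by (1 - alpha)^2.
norm2_plain = w_plain @ K @ w_plain
norm2_shrunk = w_shrunk @ K @ w_shrunk
```

The squared RKHS norm of the shrunk embedding is exactly (1 - alpha)^2 times that of the empirical one, which is the bias-variance trade-off the shrinkage exploits.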
Supplementary Material to Kernel Mean Estimation and Stein Effect
"... Stein’s result has transformed common belief in statistical world that the maximum likelihood estimator, which is in common use for more than a century, is optimal. Charles Stein showed in 1955 that it is possible to uniformly improve the maximum likelihood estimator (MLE) for the Gaussian model in ..."
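The uniform improvement Stein described can be checked numerically. Below is a minimal sketch of the classic James-Stein estimator for a Gaussian mean in d ≥ 3 dimensions, compared against the MLE by Monte Carlo; the estimator formula is standard, but the simulation setup (dimension, true mean, trial count) is illustrative.

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    # James-Stein estimator for theta given one draw y ~ N(theta, sigma2 * I),
    # dimension d >= 3: shrink y toward the origin by a data-dependent factor.
    d = y.size
    shrink = 1.0 - (d - 2) * sigma2 / np.dot(y, y)
    return shrink * y

rng = np.random.default_rng(1)
theta = np.ones(10)          # true mean (illustrative choice)
trials = 2000
mse_mle, mse_js = 0.0, 0.0
for _ in range(trials):
    y = theta + rng.normal(size=10)      # MLE of theta is y itself
    mse_mle += np.sum((y - theta) ** 2)
    mse_js += np.sum((james_stein(y) - theta) ** 2)
# Averaged over trials, the James-Stein risk falls below the MLE risk.
```

Note that the improvement is in total squared error over all coordinates; individual coordinates are not improved separately.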
Kernel Mean Shrinkage Estimators
, 2016
"... Abstract A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis, and it also forms the core inference step of modern kernel methods that rely on embeddin ..."
on embedding probability distributions in RKHSs. Given a finite sample, an empirical average has commonly been used as the standard estimator of the true kernel mean. Despite the widespread use of this estimator, we show that it can be improved thanks to the well-known Stein phenomenon. We propose a new family
Adaptive density estimation using the blockwise Stein method
, 2006
"... We study the problem of nonparametric estimation of a probability density of unknown smoothness in L2(R). Expressing mean integrated squared error (MISE) in the Fourier domain, we show that it is close to mean squared error in the Gaussian sequence model. Then applying a modified version of Stein’s ..."

Cited by 11 (1 self)
James-Stein Shrinkage to Improve K-means Cluster Analysis
, 2009
"... We study a general algorithm to improve accuracy in cluster analysis that employs the JamesStein shrinkage effect in kmeans clustering. We shrink the centroids of clusters toward the overall mean of all data using a JamesSteintype adjustment, and then the JamesStein shrinkage estimators act as ..."

Cited by 2 (0 self)
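The centroid-shrinkage step this entry describes can be sketched as follows: after clustering, each centroid is pulled toward the grand mean of the data by a positive-part James-Stein-type factor. This is an illustrative adjustment under simple assumptions (known per-coordinate noise level sigma2, centroid noise sigma2 / n_j), not the paper's exact formula.

```python
import numpy as np

def shrink_centroids(centroids, grand_mean, sigma2, n_per_cluster):
    # Shrink each k-means centroid toward the grand mean of all data
    # using a positive-part James-Stein-type factor (illustrative sketch).
    shrunk = np.empty_like(centroids)
    for j, c in enumerate(centroids):
        dev = c - grand_mean
        d = dev.size  # needs d >= 3 for the Stein effect
        # sigma2 / n_j is the variance of centroid j as an average of n_j points
        factor = max(0.0,
                     1.0 - (d - 2) * (sigma2 / n_per_cluster[j]) / np.dot(dev, dev))
        shrunk[j] = grand_mean + factor * dev
    return shrunk

# Toy usage: two 5-dimensional centroids, grand mean at the origin.
centroids = np.array([[3.0, 0.0, 0.0, 0.0, 0.0],
                      [0.0, 3.0, 0.0, 0.0, 0.0]])
grand_mean = np.zeros(5)
shrunk = shrink_centroids(centroids, grand_mean, sigma2=1.0, n_per_cluster=[10, 10])
```

The positive-part clamp prevents over-shrinking past the grand mean when a centroid lies very close to it.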
SURE-Based Non-Local Means
"... Abstract—Nonlocal means (NLM) provides a powerful framework for denoising. However, there are a few parameters of the algorithm—most notably, the width of the smoothing kernel—that are datadependent and difficult to tune. Here, we propose to use Stein’s unbiased risk estimate (SURE) to monitor the ..."
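SURE for the full NLM estimator is involved; as a self-contained illustration of the same principle, the sketch below computes SURE for a soft-thresholding denoiser in the Gaussian sequence model (a standard textbook case, not the NLM estimator of this paper) and uses it to tune the threshold without access to the clean signal. The signal, noise level, and threshold grid are assumptions.

```python
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft_threshold(y, t, sigma2=1.0):
    # Stein's unbiased risk estimate for the soft-threshold denoiser:
    #   SURE(t) = -n*sigma^2 + sum_i min(y_i^2, t^2) + 2*sigma^2 * #{i : |y_i| > t}
    # Its expectation equals the MSE E||f(y) - theta||^2, so minimizing SURE
    # over t tunes the threshold using noisy data alone.
    n = y.size
    return (-n * sigma2
            + np.sum(np.minimum(y ** 2, t ** 2))
            + 2.0 * sigma2 * np.sum(np.abs(y) > t))

rng = np.random.default_rng(2)
theta = np.concatenate([np.full(20, 5.0), np.zeros(480)])  # sparse clean signal
y = theta + rng.normal(size=500)                           # unit-variance noise

ts = np.linspace(0.0, 5.0, 51)
sure = np.array([sure_soft_threshold(y, t) for t in ts])
true_mse = np.array([np.sum((soft_threshold(y, t) - theta) ** 2) for t in ts])
t_star = ts[np.argmin(sure)]  # SURE-tuned threshold, computed without theta
```

The SURE-selected threshold attains a true MSE far below that of the raw observations (t = 0), which is exactly the kind of data-driven parameter monitoring the entry above applies to NLM.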
Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression
Neural Computation
, 2004
"... A wellknown result by Stein (1956) shows that in particular situations, biased estimators can yield better parameter estimates than their generally preferred unbiased counterparts. This paper follows the same spirit as we will stabilize the unbiased generalization error estimates by regularizati ..."

Cited by 12 (10 self)
On Denoising and Best Signal Representation
 IEEE Trans. Inform. Theory
, 1999
"... Abstract — We propose a best basis algorithm for signal enhancement in white Gaussian noise. The best basis search is performed in families of orthonormal bases constructed with wavelet packets or local cosine bases. We base our search for the “best ” basis on a criterion of minimal reconstruction e ..."

Cited by 56 (1 self)
, which consequently contribute to effective denoising. These approaches, however, do not possess the inherent measure of performance which our algorithm provides. We first propose an estimator of the mean-square error, based on a heuristic argument, and subsequently compare the reconstruction performance
ClusteringBased Denoising With Locally Learned Dictionaries
, 2009
"... In this paper, we propose KLLD: a patchbased, locally adaptive denoising method based on clustering the given noisy image into regions of similar geometric structure. In order to effectively perform such clustering, we employ as features the local weight functions derived from our earlier work on ..."

Cited by 44 (10 self)
basis describing the patches within that cluster using principal components analysis. This learned basis (or “dictionary”) is then employed to optimally estimate the underlying pixel values using a kernel regression framework. An iterated version of the proposed algorithm is also presented which leads