Results 1 - 8 of 8
Universality, Characteristic Kernels and RKHS Embedding of Measures
"... Over the last few years, two different notions of positive definite (pd) kernels—universal and characteristic—have been developing in parallel in machine learning: universal kernels are proposed in the context of achieving the Bayes risk by kernel-based classification/regression algorithms while cha ..."
Cited by 10 (3 self)
characteristic kernels are introduced in the context of distinguishing probability measures by embedding them into a reproducing kernel Hilbert space (RKHS). However, the relation between these two notions is not well understood. The main contribution of this paper is to clarify the relation between universal
Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions
"... Embeddings of probability measures into reproducing kernel Hilbert spaces have been proposed as a straightforward and practical means of representing and comparing probabilities. In particular, the distance between embeddings (the maximum mean discrepancy, or MMD) has several key advantages over man ..."
Cited by 27 (11 self)
On the relation between universality, characteristic kernels and RKHS embedding of measures
Proc. 13th International Conference on Artificial Intelligence and Statistics, volume 9 of Workshop and Conference Proceedings. JMLR, 2010a
"... embedding of measures ..."
Hilbert Space Embeddings and Metrics on Probability Measures
"... A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseu ..."
Cited by 72 (36 self)
Injective Hilbert space embeddings of probability measures
In COLT, 2008
"... A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). The emb ..."
Cited by 54 (31 self)
, defined as kernels for which the RKHS embedding of probability measures is injective. In particular, characteristic kernels can include non-universal kernels. We restrict ourselves to translation-invariant kernels on Euclidean space, and define the associated metric on probability measures in terms
Minimum Variance Estimation of a Sparse Vector Within the Linear Gaussian Model: An
"... Abstract — We consider minimum variance estimation within the sparse linear Gaussian model (SLGM). A sparse vector is to be estimated from a linearly transformed version embedded in Gaussian noise. Our analysis is based on the theory of reproducing kernel Hilbert spaces (RKHS). After a characterizat ..."
From Data Points to Probability Measures
"... Risk Deviation Bound: Given an arbitrary distribution P with finite variance σ², a Lipschitz continuous function f: R → R with constant C_f, and an arbitrary loss function ℓ: R × R → R that is Lipschitz continuous in the second argument with constant C_ℓ, it follows, for any y ∈ R, that E_{x∼P}[ℓ(y, f(x))] − ℓ(y, E_{x ..."
learning (data squashing). Hilbert Space Embedding: the kernel mean map from a space of distributions P into a reproducing kernel Hilbert space (RKHS) H: µ: P → H, P ↦ ∫ k(x,·) dP(x). The kernel k is said to be characteristic if and only if the map µ is injective, i.e., there is no loss of information
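The kernel mean map described above is what the maximum mean discrepancy (MMD) from the entries further up is built on: the MMD between two distributions is the RKHS distance between their mean embeddings, and it vanishes only for equal distributions when the kernel is characteristic. As an illustrative sketch (not code from any of the listed papers), a plug-in MMD² estimate with the Gaussian RBF kernel, which is characteristic on R^d, can be computed from two samples as follows; the function name and bandwidth choice are this sketch's own:

```python
import numpy as np

def mmd_gaussian(X, Y, sigma=1.0):
    """Plug-in (biased, V-statistic) estimate of MMD^2 between the samples
    X (n x d) and Y (m x d), using the Gaussian RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    def k(A, B):
        # Pairwise squared Euclidean distances via the expansion
        # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, then the kernel matrix.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], estimated empirically.
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))  # sample from N(0, 1)
Y = rng.normal(3.0, 1.0, size=(200, 1))  # sample from N(3, 1)
Z = rng.normal(0.0, 1.0, size=(200, 1))  # second sample from N(0, 1)
print(mmd_gaussian(X, Y))  # clearly positive: the distributions differ
print(mmd_gaussian(X, Z))  # near zero: same underlying distribution
```

Because the Gaussian kernel is characteristic, the population MMD is a metric on probability measures, so the estimate separates distinct distributions as the sample size grows; with a non-characteristic kernel, distinct distributions could embed to the same mean element.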