Results 1–10 of 94
Hilbert Space Embeddings and Metrics on Probability Measures
"... A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseu ..."
Cited by 72 (36 self)
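As context for the snippet above: the RKHS mean embedding μ_P = E_{x∼P}[k(x, ·)] can be estimated from a sample as the empirical average of kernel feature maps. A minimal numpy sketch with a Gaussian kernel — the function names are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def empirical_mean_embedding(sample, sigma=1.0):
    """Return the empirical mean map mu_hat(.) = (1/n) sum_i k(x_i, .),
    i.e. a function evaluating the estimated embedding at a point."""
    sample = np.atleast_2d(np.asarray(sample, float))
    return lambda t: float(np.mean([gaussian_kernel(x, t, sigma) for x in sample]))

# Evaluate the embedding of a small sample at a query point.
mu = empirical_mean_embedding([[0.0], [1.0], [2.0]], sigma=1.0)
print(round(mu([1.0]), 4))  # (2*exp(-0.5) + 1) / 3 ≈ 0.7377
```

The returned closure is the empirical version of the mean element the abstract refers to: evaluating it at t gives ⟨μ̂_P, k(t, ·)⟩ in the RKHS.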
Injective Hilbert Space Embeddings of Probability Measures
In COLT, 2008
"... A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). The emb ..."
Cited by 54 (31 self)
Measured descent: A new embedding method for finite metrics
In Proc. 45th FOCS, 2004
"... We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for ..."
Cited by 92 (29 self)
Metrics on diagram groups and uniform embeddings in a Hilbert space
2006
"... We give first examples of finitely generated groups having an intermediate, with values in (0,1), Hilbert space compression (which is a numerical parameter measuring the distortion required to embed a metric space into Hilbert space). These groups include certain diagram groups. In particular, we sh ..."
Cited by 30 (4 self)
Learning probability measures with respect to optimal transport metrics
In Adv. in Neural Infor. Proc. Systems 25, 2012
"... We study the problem of estimating, in the sense of optimal transport metrics, a measure which is assumed supported on a manifold embedded in a Hilbert space. By establishing a precise connection between optimal transport metrics, optimal quantization, and learning theory, we derive new probabilist ..."
Cited by 3 (0 self)
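The snippet above concerns estimation with respect to optimal transport (Wasserstein) metrics. As a point of reference, in one dimension the W1 distance between two equal-size empirical measures reduces to matching order statistics; a small sketch of that special case (not the paper's estimator):

```python
import numpy as np

def wasserstein1_empirical(xs, ys):
    """W1 distance between two equal-size 1-D empirical measures.
    In 1-D the optimal coupling matches order statistics, so
    W1 = (1/n) * sum_i |x_(i) - y_(i)| over the sorted samples."""
    xs = np.sort(np.asarray(xs, float))
    ys = np.sort(np.asarray(ys, float))
    if xs.shape != ys.shape:
        raise ValueError("samples must have equal size")
    return float(np.mean(np.abs(xs - ys)))

# Shifting a sample by c moves its empirical measure by exactly c in W1.
a = np.array([0.0, 1.0, 2.0, 3.0])
print(wasserstein1_empirical(a, a + 0.5))  # 0.5
```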
Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions
"... Embeddings of probability measures into reproducing kernel Hilbert spaces have been proposed as a straightforward and practical means of representing and comparing probabilities. In particular, the distance between embeddings (the maximum mean discrepancy, or MMD) has several key advantages over man ..."
Cited by 27 (11 self)
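The MMD mentioned above is the RKHS distance between the mean embeddings of two distributions, MMD²(P, Q) = ||μ_P − μ_Q||². A minimal biased (V-statistic) estimator with a Gaussian kernel, offered as an illustrative sketch rather than the paper's method:

```python
import numpy as np

def rbf_gram(a, b, sigma=1.0):
    """Pairwise Gaussian kernel matrix K[i, j] = k(a_i, b_j)."""
    a = np.atleast_2d(np.asarray(a, float))
    b = np.atleast_2d(np.asarray(b, float))
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimate of MMD^2(P, Q) = ||mu_P - mu_Q||^2 in the RKHS:
    mean(Kxx) - 2 * mean(Kxy) + mean(Kyy)."""
    return float(rbf_gram(x, x, sigma).mean()
                 - 2.0 * rbf_gram(x, y, sigma).mean()
                 + rbf_gram(y, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 1))
y = rng.normal(0.0, 1.0, size=(200, 1))   # same distribution as x
z = rng.normal(3.0, 1.0, size=(200, 1))   # shifted distribution
print(mmd2_biased(x, y) < mmd2_biased(x, z))  # True
```

The kernel choice question raised by the abstract enters through sigma: with a poorly scaled bandwidth the estimator loses its ability to separate P from Q.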
Convergence in distribution of random metric measure spaces (Λ-coalescent measure trees)
2007
"... We consider the space of complete and separable metric spaces which are equipped with a probability measure. A notion of convergence is given based on the philosophy that a sequence of metric measure spaces converges if and only if all finite subspaces sampled from these spaces converge. This topology is metrized following Gromov’s idea of embedding two metric spaces isometrically into a common metric space, combined with the Prohorov metric between probability measures on a fixed metric space. We show that for this topology convergence in distribution follows provided the sequence is tight ..."
Cited by 52 (10 self)
Coarse embeddings into a Hilbert space, Haagerup property and Poincaré inequalities
In J. Topol. Anal.
"... We prove that a metric space does not coarsely embed into a Hilbert space if and only if it satisfies a sequence of Poincaré inequalities, which can be formulated in terms of (generalized) expanders. We also give quantitative statements, relative to the compression. In the equivariant cont ..."
Cited by 4 (0 self)
Metric Embedding for Kernel Classification Rules
"... In this paper, we consider a smoothing kernel based classification rule and propose an algorithm for optimizing the performance of the rule by learning the bandwidth of the smoothing kernel along with a data-dependent distance metric. The data-dependent distance metric is obtained by learning a function that embeds an arbitrary metric space into a Euclidean space while minimizing an upper bound on the resubstitution estimate of the error probability of the kernel classification rule. By restricting this embedding function to a reproducing kernel Hilbert space, we reduce the problem to solving a ..."
Cited by 4 (0 self)