Results 1–10 of 116,655
Hilbert Space Embeddings and Metrics on Probability Measures
Cited by 65 (34 self)
Abstract: A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseu ...
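The distance between two such mean embeddings is the maximum mean discrepancy (MMD), which is what the homogeneity tests mentioned in the abstract compute. A minimal sketch, assuming a Gaussian kernel and the simple biased (V-statistic) estimator; the function names are mine, not from the paper:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between samples X and Y.

    Estimates ||mu_P - mu_Q||^2 in the RKHS of a Gaussian kernel, where
    mu_P and mu_Q are the kernel mean embeddings of the two sample
    distributions (biased V-statistic estimator).
    """
    def gram(A, B):
        # Pairwise squared distances, then the Gaussian kernel.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
print(same < diff)  # samples from different distributions are farther apart
```

With identical distributions the estimate is near zero; shifting one sample's mean makes the embeddings, and hence the MMD, clearly separate.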
Hilbert Space Embedding and Characteristic Kernels / Hilbert Space Embeddings and Metrics on Probability Measures
Searching in metric spaces
2001
Cited by 432 (38 self)
Abstract: The problem of searching the elements of a set that are close to a given query element under some similarity criterion has a vast number of applications in many branches of computer science, from pattern recognition to textual and multimedia information retrieval. We are interested in the rather general case where the similarity criterion defines a metric space, instead of the more restricted case of a vector space. Many solutions have been proposed in different areas, in many cases without cross-knowledge. Because of this, the same ideas have been reconceived several times, and very different ...
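The key property that metric-space indexes exploit is the triangle inequality, which lets them discard candidates without computing their distance to the query. A minimal pivot-based range search illustrating the idea; the function names are mine, and Euclidean distance stands in for an arbitrary metric:

```python
import numpy as np

def pivot_range_search(data, pivot_dists, pivot, query, r, dist):
    """Range search using one pivot and the triangle inequality.

    Any metric d satisfies |d(q, p) - d(o, p)| <= d(q, o), so objects
    with |d(q, p) - d(o, p)| > r can be discarded without ever computing
    d(q, o).  pivot_dists[i] = d(data[i], pivot) is precomputed.
    """
    dq = dist(query, pivot)
    out = []
    for o, dp in zip(data, pivot_dists):
        if abs(dq - dp) <= r and dist(query, o) <= r:  # prune, then verify
            out.append(o)
    return out

euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
data = [(0, 0), (1, 0), (5, 5), (0.5, 0.5)]
pivot = (0, 0)
pd = [euclid(o, pivot) for o in data]
res = pivot_range_search(data, pd, pivot, (0.2, 0.2), 1.0, euclid)
print(res)
```

Here (5, 5) is pruned by the pivot filter alone, since its precomputed distance to the pivot differs from the query's by more than the search radius.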
Distance Metric Learning, with Application to Clustering with Side-Information
Advances in Neural Information Processing Systems 15, 2003
Cited by 799 (14 self)
Abstract: Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many "plausible" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the us ...
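Metrics of this kind are typically parameterized in the Mahalanobis form d_A(x, y) = sqrt((x − y)ᵀ A (x − y)) for a positive semidefinite matrix A. The sketch below shows only this parametric form with a hand-picked diagonal A, not the paper's optimization procedure; the names and weights are illustrative:

```python
import numpy as np

def mahalanobis(x, y, A):
    """d_A(x, y) = sqrt((x - y)^T A (x - y)) for positive semidefinite A.

    A = I recovers the Euclidean metric; learning A (as in metric
    learning) stretches or shrinks directions of the input space.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ A @ d))

x, y = [0.0, 0.0], [1.0, 1.0]
A_euclid = np.eye(2)
A_learned = np.diag([4.0, 0.0])  # hypothetical weights: ignore dimension 2
d1 = mahalanobis(x, y, A_euclid)   # sqrt(2)
d2 = mahalanobis(x, y, A_learned)  # 2.0
print(d1, d2)
```

Setting a diagonal entry of A to zero makes that feature irrelevant to the distance, which is how side-information ("these pairs are similar") can reshape which clusterings K-means finds.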
Measuring ISP Topologies with Rocketfuel
In Proc. ACM SIGCOMM, 2002
Cited by 838 (30 self)
Abstract: To date, realistic ISP topologies have not been accessible to the research community, leaving work that depends on topology on an uncertain footing. In this paper, we present new Internet mapping techniques that have enabled us to directly measure router-level ISP topologies. Our techniques reduce t ...
Learning the Kernel Matrix with Semi-Definite Programming
2002
Cited by 780 (22 self)
Abstract: Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information ...
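The "inner products between each pair of points" form the Gram (kernel) matrix, which is all the geometry a kernel method ever sees. A small sketch, with a polynomial kernel chosen purely for illustration; verifying symmetry and positive semidefiniteness checks that the matrix is a valid kernel matrix:

```python
import numpy as np

def gram(X, kernel):
    """Gram matrix K[i, j] = k(x_i, x_j): the implicit embedding enters
    the learning algorithm only through these pairwise inner products."""
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

poly = lambda x, y: (1.0 + np.dot(x, y)) ** 2  # implicit embedding of R^2
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K = gram(X, poly)
# A valid kernel matrix is symmetric positive semidefinite.
sym = np.allclose(K, K.T)
psd = bool(np.all(np.linalg.eigvalsh(K) >= -1e-9))
print(sym, psd)
```

The semidefinite programming in the paper optimizes over exactly this object: since any symmetric PSD matrix is a legitimate Gram matrix, the kernel itself becomes a variable subject to an SDP constraint.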
Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces
IEEE International Conference on Robotics and Automation, 1996
Cited by 1276 (124 self)
Abstract: A new motion planning method for robots in static workspaces is presented. This method proceeds in two phases: a learning phase and a query phase. In the learning phase, a probabilistic roadmap is constructed and stored as a graph whose nodes correspond to collision-free configurations and whose edges correspond to feasible paths between these configurations. These paths are computed using a simple and fast local planner. In the query phase, any given start and goal configurations of the robot are connected to two nodes of the roadmap; the roadmap is then searched for a path joining these two nodes. The method is general and easy to implement. It can be applied to virtually any type of holonomic robot. It requires selecting certain parameters (e.g., the duration of the learning phase) whose values depend on the scene, that is, the robot and its workspace, but these values turn out to be relatively easy to choose. Increased efficiency can also be achieved by tailoring some components of the method (e.g., the local planner) to the considered robots. In this paper the method is applied to planar articulated robots with many degrees of freedom. Experimental results show that path planning can be done in a fraction of a second on a contemporary workstation (≈150 MIPS), after learning for relatively short periods of time (a few dozen seconds).
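The two-phase scheme described in the abstract can be sketched in a toy 2-D workspace. This is an illustrative reimplementation, not the authors' code: the obstacle, sampler, neighbor count, and straight-line local planner are all made-up choices for the demo:

```python
import numpy as np
from collections import deque

# Minimal probabilistic roadmap (PRM) sketch: a 2-D unit square with one
# circular obstacle; the local planner is straight-line interpolation.
rng = np.random.default_rng(1)
OBST, R = np.array([0.5, 0.5]), 0.2

def free(p):
    return np.linalg.norm(p - OBST) > R

def edge_free(a, b, steps=20):
    # Local planner: check evenly spaced points along the segment.
    return all(free(a + t * (b - a)) for t in np.linspace(0, 1, steps))

def prm(n=150, k=8):
    # Learning phase: sample free configurations, link k nearest neighbors.
    nodes = [p for p in rng.random((n * 3, 2)) if free(p)][:n]
    adj = {i: set() for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        near = sorted(range(len(nodes)), key=lambda j: np.linalg.norm(nodes[j] - p))
        for j in near[1:k + 1]:
            if edge_free(p, nodes[j]):
                adj[i].add(j); adj[j].add(i)
    return nodes, adj

def query(nodes, adj, start, goal):
    # Query phase: attach start/goal to nearest connectable nodes, then BFS.
    def attach(q):
        for j in sorted(range(len(nodes)), key=lambda j: np.linalg.norm(nodes[j] - q)):
            if edge_free(q, nodes[j]):
                return j
    s, g = attach(np.array(start)), attach(np.array(goal))
    prev, frontier = {s: None}, deque([s])
    while frontier:
        u = frontier.popleft()
        if u == g:
            path = []
            while u is not None:
                path.append(nodes[u]); u = prev[u]
            return [start] + path[::-1] + [goal]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u; frontier.append(v)
    return None

nodes, adj = prm()
path = query(nodes, adj, (0.05, 0.05), (0.95, 0.95))
print(path is not None)
```

Note the division of labor the paper emphasizes: roadmap construction is amortized over many queries, and each query only needs to connect two configurations to the existing graph before running a graph search.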
On the geometry of metric measure spaces
II, Acta Math., 2004
Cited by 248 (10 self)
Abstract: We introduce and analyze lower ("Ricci") curvature bounds Curv(M, d, m) ≥ K for metric measure spaces (M, d, m). Our definition is based on convexity properties of the relative entropy Ent(· | m), regarded as a function on the L2-Wasserstein space of probability measures on the metric space (M, d). Amo ...
A metric for distributions with applications to image databases
1998
Cited by 434 (6 self)
Abstract: We introduce a new distance between two distributions that we call the Earth Mover's Distance (EMD), which reflects the minimal amount of work that must be performed to transform one distribution into the other by moving "distribution mass" around. This is a special case of the transportation problem from linear optimization, for which efficient algorithms are available. The EMD also allows for partial matching. When used to compare distributions that have the same overall mass, the EMD is a true metric, and has easy-to-compute lower bounds. In this paper we focus on applications to image ...
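In one dimension, the EMD between equal-mass distributions reduces to the integral of the absolute difference of their CDFs, so no linear program is needed. A sketch of that special case (the function name is mine; the general multi-dimensional EMD still requires solving the transportation problem):

```python
import numpy as np

def emd_1d(x, wx, y, wy):
    """Earth Mover's Distance between two 1-D weighted point sets of
    equal total mass.  In 1-D the optimal transport cost equals the
    integral of |CDF_x - CDF_y|."""
    pts = np.concatenate([x, y])
    order = np.argsort(pts)
    # Signed mass at each point: +wx for x-points, -wy for y-points.
    mass = np.concatenate([np.asarray(wx), -np.asarray(wy)])[order]
    pts = pts[order]
    cdf_diff = np.cumsum(mass)[:-1]  # CDF_x - CDF_y between sorted points
    return float(np.sum(np.abs(cdf_diff) * np.diff(pts)))

# Moving one unit of mass a distance of 2 costs exactly 2 units of work.
cost = emd_1d(np.array([0.0]), np.array([1.0]),
              np.array([2.0]), np.array([1.0]))
print(cost)  # 2.0
```

This matches the "work = mass × distance" intuition in the abstract; identical distributions give cost 0, as a true metric requires.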
The large N limit of superconformal field theories and supergravity
1998
Cited by 5673 (21 self)
Abstract: We show that the large N limit of certain conformal field theories in various dimensions include in their Hilbert space a sector describing supergravity on the product of anti-de Sitter spacetimes, spheres, and other compact manifolds. This is shown by taking some branes in the full M/string theory and ...