Results 1–10 of 174
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
Neural Computation, 2003
Cited by 1205 (16 self)
Abstract: One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed. From the introduction: in many areas of artificial intelligence, information retrieval, and data mining, one is often confronted with intrinsically low-dimensional data lying in a very high-dimensional space. Consider, for example, gray-scale images of an object taken under fixed lighting conditions with a moving camera. Each such image would typically be represented by a brightness value at each pixel. If there were n² …
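The algorithm summarized above admits a compact sketch: build a k-nearest-neighbor graph with heat-kernel weights, form the graph Laplacian L = D − W, and solve the generalized eigenproblem L y = λ D y, keeping the eigenvectors for the smallest nonzero eigenvalues. The function below is an illustrative reconstruction; the function name, parameter choices, and the dense solver are ours, not the paper's.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, k=10, t=1.0):
    """Sketch of a Laplacian-eigenmaps-style embedding (illustrative)."""
    n = X.shape[0]
    d2 = cdist(X, X, "sqeuclidean")
    # k-nearest-neighbor adjacency with heat-kernel weights, symmetrized
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    rows = np.repeat(np.arange(n), k)
    W = np.zeros((n, n))
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / t)
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W
    # generalized eigenproblem L y = lam D y; column 0 is the trivial
    # constant eigenvector (eigenvalue 0), so skip it
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]
```

A real implementation would use sparse matrices and a sparse eigensolver; this dense version is only meant to make the three steps of the algorithm concrete.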
Towards a theoretical foundation for Laplacian-based manifold methods
2005
Cited by 158 (11 self)
Abstract: In recent years manifold methods have attracted a considerable amount of attention in machine learning. However, most algorithms in that class may be termed “manifold-motivated” as they lack any explicit theoretical guarantees. In this paper we take a step towards closing the gap between theory and practice for a class of Laplacian-based manifold methods. We show that under certain conditions the graph Laplacian of a point cloud converges to the Laplace-Beltrami operator on the underlying manifold. Theorem 1 contains the first result showing convergence of a random graph Laplacian to the manifold Laplacian in the machine learning context.
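The convergence statement can be illustrated numerically. On the unit circle, cos θ is an eigenfunction of the Laplace-Beltrami operator, so for a dense sample it should be an approximate eigenvector of a heat-kernel graph Laplacian (here it is exact up to rounding, because an evenly spaced sample makes the weight matrix circulant). The sample size and bandwidth below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

n, t = 400, 0.05
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / t)
L = np.diag(W.sum(1)) - W          # unnormalized graph Laplacian
f = np.cos(theta)                  # Laplace-Beltrami eigenfunction on S^1
Lf = L @ f
lam = (f @ Lf) / (f @ f)           # Rayleigh-quotient eigenvalue estimate
residual = np.linalg.norm(Lf - lam * f) / np.linalg.norm(Lf)
# residual is tiny: f is (numerically) an eigenvector of the graph Laplacian
```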
Diffusion Kernels on Statistical Manifolds
2004
Cited by 116 (8 self)
A family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. The kernels are based on the heat equation on the Riemannian manifold defined by the Fisher information metric associated with a statistical family, and generalize the Gaussian kernel of Euclidean space. As an important special case, kernels based on the geometry of multinomial families are derived, leading to kernel-based learning algorithms that apply naturally to discrete data. Bounds on covering numbers and Rademacher averages for the kernels are proved using bounds on the eigenvalues of the Laplacian on Riemannian manifolds. Experimental results are presented for document classification, for which the use of multinomial geometry is natural and well motivated, and improvements are obtained over the standard use of Gaussian or linear kernels, which have been the standard for text classification.
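For the multinomial special case mentioned above, the Fisher metric makes the probability simplex isometric to a piece of a sphere via θ ↦ √θ, giving the geodesic distance d(p, q) = 2 arccos(Σᵢ √(pᵢ qᵢ)). A Gaussian-shaped kernel in this distance is the usual closed-form approximation to the heat kernel in this setting. The sketch below is our reading of that construction; the function name and the bandwidth t are ours, and the exact heat kernel also carries correction factors this sketch omits.

```python
import numpy as np

def multinomial_diffusion_kernel(P, Q, t=0.5):
    """Approximate heat kernel on the multinomial simplex (sketch).

    P: (m, d) and Q: (n, d) arrays whose rows are probability vectors.
    """
    bc = np.sqrt(P) @ np.sqrt(Q).T            # Bhattacharyya coefficients
    d = 2.0 * np.arccos(np.clip(bc, -1.0, 1.0))  # geodesic (Fisher) distance
    return np.exp(-d ** 2 / (4.0 * t))        # Gaussian in geodesic distance
```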
Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes
Journal of Machine Learning Research, 2006
Cited by 92 (11 self)
This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework include:
(i) a general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators;
(ii) a specific instantiation of this approach in which global basis functions, called proto-value functions (PVFs), are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP;
(iii) a three-phased procedure called representation policy iteration (RPI), comprising a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions;
(iv) a specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method;
(v) several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nyström extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs; and
(vi) a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach.
Many challenges remain in scaling the proposed framework to large MDPs, and several elaborations of the framework are briefly summarized at the end.
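Item (ii) above, proto-value functions as Laplacian eigenvectors, can be sketched in a few lines for a toy state graph. The 10-state chain below is our example, not the paper's; in the paper the graph is built from sampled MDP transitions.

```python
import numpy as np

def proto_value_functions(adjacency, n_basis):
    """Smoothest eigenvectors of the state-graph Laplacian (sketch)."""
    W = np.asarray(adjacency, dtype=float)
    L = np.diag(W.sum(1)) - W              # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    return vecs[:, :n_basis]               # low-frequency basis functions

# 10-state chain MDP state graph: state i is adjacent to i-1 and i+1
n = 10
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
phi = proto_value_functions(A, n_basis=4)  # (10, 4) feature matrix
```

Value functions would then be approximated as linear combinations of the columns of `phi`, with coefficients fit by a method such as LSPI, per item (iv).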
Local discriminant embedding and its variants
In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005
Cited by 85 (1 self)
We present a new approach, called local discriminant embedding (LDE), to manifold learning and pattern classification. In our framework, the neighbor and class relations of data are used to construct the embedding for classification problems. The proposed algorithm learns the embedding for the submanifold of each class by solving an optimization problem. After being embedded into a low-dimensional subspace, data points of the same class maintain their intrinsic neighbor relations, whereas neighboring points of different classes no longer stick to one another. Via embedding, new test data are thus more reliably classified by the nearest-neighbor rule, owing to its locally discriminating nature. We also describe two useful variants: two-dimensional LDE and kernel LDE. Comprehensive comparisons and extensive experiments on face recognition are included to demonstrate the effectiveness of our method.
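The optimization described above can be sketched as a generalized eigenproblem: build one affinity graph over same-class neighbors (W) and one over different-class neighbors (Wp), then maximize the between-class local scatter relative to the within-class one. The binary weights, neighborhood size k, dense solver, and regularization term below are simplifications of ours, not details from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def lde(X, y, k=5, n_components=2):
    """Sketch of a local-discriminant-embedding-style projection."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    W = np.zeros((n, n))   # same-class neighbor graph
    Wp = np.zeros((n, n))  # different-class neighbor graph
    for i in range(n):
        same = [j for j in order[i, 1:] if y[j] == y[i]][:k]
        diff = [j for j in order[i, 1:] if y[j] != y[i]][:k]
        W[i, same] = 1.0
        Wp[i, diff] = 1.0
    W, Wp = np.maximum(W, W.T), np.maximum(Wp, Wp.T)
    L = np.diag(W.sum(1)) - W
    Lp = np.diag(Wp.sum(1)) - Wp
    # maximize v^T X^T Lp X v subject to v^T X^T L X v = 1
    A = X.T @ Lp @ X
    B = X.T @ L @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
    vals, vecs = eigh(A, B)
    V = vecs[:, ::-1][:, :n_components]          # largest eigenvalues first
    return X @ V
```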
Anisotropic Diffusion of Surfaces and Functions on Surfaces
2003
Cited by 78 (7 self)
We present a unified anisotropic geometric diffusion PDE model for smoothing (fairing) out noise both in triangulated two-manifold surface meshes in R³ and functions defined on these surface meshes, while enhancing curve features on both by careful choice of an anisotropic diffusion tensor. We combine the C¹ limit representation of Loop’s subdivision for triangular surface meshes and vector functions on the surface mesh with the established diffusion model to arrive at a discretized version of the diffusion problem in the spatial direction. The time direction discretization then leads to a sparse linear system of equations. Iteratively solving the sparse linear system yields a sequence of faired (smoothed) meshes as well as faired functions.
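The time discretization described above yields one sparse linear system per smoothing step. The sketch below keeps only that implicit-Euler structure, (I + dt·L) x_new = x_old, and substitutes a uniform (umbrella) graph Laplacian for the paper's anisotropic, subdivision-based operator; all names and parameters are ours.

```python
import numpy as np
from scipy.sparse import coo_matrix, identity
from scipy.sparse.linalg import cg

def implicit_smooth(vertices, edges, dt=0.1, steps=5):
    """Implicit-Euler Laplacian smoothing of mesh vertices (sketch)."""
    n = len(vertices)
    i, j = np.array(edges).T
    m = len(edges)
    # umbrella Laplacian L = D - W assembled in COO form (duplicates sum)
    rows = np.concatenate([i, j, i, j])
    cols = np.concatenate([j, i, i, j])
    vals = np.concatenate([-np.ones(2 * m), np.ones(2 * m)])
    L = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    A = identity(n) + dt * L                  # SPD system matrix
    x = np.array(vertices, dtype=float)
    for _ in range(steps):
        for c in range(x.shape[1]):           # one sparse solve per coordinate
            x[:, c], _ = cg(A, x[:, c])
    return x
```

Because A is symmetric positive definite, conjugate gradients is a natural iterative solver, matching the "iteratively solving the sparse linear system" step in the abstract.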
Geometry and curvature of diffeomorphism groups with H¹ metric and mean hydrodynamics
1998
Cited by 60 (15 self)
In [HMR1], Holm, Marsden, and Ratiu derived a new model for the mean motion of an ideal fluid in Euclidean space given by the equation
$$\dot V(t) + \nabla_{U(t)} V(t) - \alpha^2 [\nabla U(t)]^t \cdot \triangle U(t) = -\operatorname{grad} p(t),$$
where $\operatorname{div} U = 0$ and $V = (1 - \alpha^2 \triangle) U$. In this model, the momentum $V$ is transported by the velocity $U$, with the effect that nonlinear interaction between modes corresponding to length scales smaller than $\alpha$ is negligible. We generalize this equation to the setting of an $n$-dimensional compact Riemannian manifold. The resulting equation is the Euler-Poincaré equation associated with the geodesic flow of the $H^1$ right-invariant metric on $D^s_\mu$, the group of volume-preserving Hilbert diffeomorphisms of class $H^s$. We prove that the geodesic spray is continuously differentiable from $TD^s_\mu(M)$ into $TTD^s_\mu(M)$, so that a standard Picard iteration argument proves existence and uniqueness on a finite time interval. Our goal in this paper is to establish the foundations for Lagrangian stability analysis following Arnold [A]. To do so, we use submanifold geometry, and prove that the weak curvature tensor of the right-invariant $H^1$ metric on $D^s_\mu$ is a bounded trilinear map in the $H^s$ topology, from which it follows that solutions to Jacobi’s equation exist. Using such solutions, we are able to study the infinitesimal stability behavior of geodesics.
Discrete Laplace operators: No free lunch
2007
Cited by 59 (1 self)
Discrete Laplace operators are ubiquitous in applications spanning geometric modeling to simulation. For robustness and efficiency, many applications require discrete operators that retain key structural properties inherent to the continuous setting. Building on the smooth setting, we present a set of natural properties for discrete Laplace operators for triangular surface meshes. We prove an important theoretical limitation: discrete Laplacians cannot satisfy all natural properties; retroactively, this explains the diversity of existing discrete Laplace operators. Finally, we present a family of operators that includes and extends well-known and widely used operators.
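One of the well-known operators such a framework has in view is the cotangent Laplacian. A dense sketch for small triangle meshes follows; real implementations use sparse storage and guard against degenerate (zero-area) triangles.

```python
import numpy as np

def cotan_laplacian(V, F):
    """Cotangent-weight Laplacian of a triangle mesh (dense sketch).

    V: (n, 3) vertex positions; F: list of (a, b, c) vertex-index triples.
    Each triangle contributes 0.5 * cot(angle at k) to the weight of the
    opposite edge (i, j).
    """
    n = len(V)
    L = np.zeros((n, n))
    for a, b, c in F:
        for i, j, k in [(a, b, c), (b, c, a), (c, a, b)]:
            u, w = V[i] - V[k], V[j] - V[k]
            cot = (u @ w) / np.linalg.norm(np.cross(u, w))
            L[i, j] -= 0.5 * cot
            L[j, i] -= 0.5 * cot
            L[i, i] += 0.5 * cot
            L[j, j] += 0.5 * cot
    return L
```

By construction the rows sum to zero (constant functions lie in the kernel) and the matrix is symmetric, two of the structural properties a discrete Laplacian is usually asked to retain.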
Adaptive numerical treatment of elliptic systems on manifolds
Advances in Computational Mathematics, 15(1):139, 2001
Cited by 57 (26 self)
Abstract: Adaptive multilevel finite element methods are developed and analyzed for certain elliptic systems arising in geometric analysis and general relativity. This class of nonlinear elliptic systems of tensor equations on manifolds is first reviewed, and then adaptive multilevel finite element methods for approximating solutions to this class of problems are considered in some detail. Two a posteriori error indicators are derived, based on local residuals and on global linearized adjoint or dual problems. The design of Manifold Code (MC) is then discussed; MC is an adaptive multilevel finite element software package for 2- and 3-manifolds developed over several years at Caltech and UC San Diego. It employs a posteriori error estimation, adaptive simplex subdivision, unstructured algebraic multilevel methods, global inexact Newton methods, and numerical continuation methods for the numerical solution of nonlinear covariant elliptic systems on 2- and 3-manifolds. Some of the more interesting features of MC are described in detail, including some new ideas for topology and geometry representation in simplex meshes, and an unusual partition-of-unity-based method for exploiting parallel computers. A short example is then given which involves the Hamiltonian and momentum constraints in the Einstein equations, a representative nonlinear 4-component covariant elliptic system on a Riemannian 3-manifold which arises in general relativity. A number of operator properties and solvability results recently established are first summarized, making possible two quasi-optimal a priori error estimates for Galerkin approximations which are then derived. These two results complete the theoretical framework for effective use of adaptive multilevel finite element methods. A sample calculation using the MC software is then presented.
Convergence of Laplacian eigenmaps
In NIPS, 2006
Cited by 46 (3 self)
Geometrically based methods for various tasks of machine learning have attracted considerable attention over the last few years. In this paper we show convergence of eigenvectors of the point cloud Laplacian to the eigenfunctions of the Laplace-Beltrami operator on the underlying manifold, thus establishing the first convergence results for a spectral dimensionality reduction algorithm in the manifold setting.