Results 1–10 of 91
Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
Neural Computation, 2003
Cited by 734 (15 self)
One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed. 1 Introduction: In many areas of artificial intelligence, information retrieval, and data mining, one is often confronted with intrinsically low-dimensional data lying in a very high-dimensional space. Consider, for example, gray-scale images of an object taken under fixed lighting conditions with a moving camera. Each such image would typically be represented by a brightness value at each pixel. If there were n² ...
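The embedding this abstract describes can be sketched in a few lines. The following is a minimal illustration of the standard Laplacian-eigenmaps recipe (k-nearest-neighbor graph, heat-kernel weights, generalized eigenproblem Lf = λDf), not the authors' reference implementation; the hyperparameters k and t are assumed.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, k=5, t=1.0):
    """Embed the rows of X into n_components dimensions.
    k (neighborhood size) and t (heat-kernel scale) are assumed
    hyperparameters."""
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    # k nearest neighbors of each point (column 0 is the point itself)
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / t)   # heat-kernel weights
    W = np.maximum(W, W.T)                           # symmetrize the graph
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                                       # graph Laplacian
    # generalized eigenproblem L f = lambda D f; drop the trivial eigenvector
    vals, vecs = eigh(L, Dg)
    return vecs[:, 1:n_components + 1]
```

At scale one would use sparse matrices and `scipy.sparse.linalg.eigsh`; the dense version above only illustrates the construction.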
Towards a theoretical foundation for Laplacian-based manifold methods
2005
Cited by 103 (10 self)
In recent years manifold methods have attracted a considerable amount of attention in machine learning. However, most algorithms in that class may be termed “manifold-motivated” as they lack any explicit theoretical guarantees. In this paper we take a step towards closing the gap between theory and practice for a class of Laplacian-based manifold methods. We show that under certain conditions the graph Laplacian of a point cloud converges to the Laplace-Beltrami operator on the underlying manifold. Theorem 1 contains the first result showing convergence of a random graph Laplacian to the manifold Laplacian in the machine learning context.
Diffusion Kernels on Statistical Manifolds
2004
Cited by 87 (6 self)
A family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. The kernels are based on the heat equation on the Riemannian manifold defined by the Fisher information metric associated with a statistical family, and generalize the Gaussian kernel of Euclidean space. As an important special case, kernels based on the geometry of multinomial families are derived, leading to kernel-based learning algorithms that apply naturally to discrete data. Bounds on covering numbers and Rademacher averages for the kernels are proved using bounds on the eigenvalues of the Laplacian on Riemannian manifolds. Experimental results are presented for document classification, for which the use of multinomial geometry is natural and well motivated; improvements are obtained over Gaussian and linear kernels, which have been the standard choices for text classification.
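As a concrete illustration of the multinomial special case, here is a sketch of the commonly used Gaussian-shaped approximation to the heat kernel on the probability simplex, where distance is the geodesic distance under the Fisher information metric. The bandwidth t and the omission of the normalizing factor are assumptions; this is an illustrative approximation, not the paper's exact construction.

```python
import numpy as np

def multinomial_diffusion_kernel(p, q, t=0.5):
    """Approximate diffusion kernel between two probability vectors
    (e.g. normalized term frequencies of documents).
    Geodesic distance under the Fisher metric:
        d(p, q) = 2 * arccos( sum_i sqrt(p_i * q_i) )
    used in a Gaussian-shaped approximation exp(-d^2 / (4t));
    t is an assumed bandwidth parameter."""
    bc = np.clip(np.sqrt(p * q).sum(), -1.0, 1.0)  # Bhattacharyya affinity
    d = 2.0 * np.arccos(bc)                         # geodesic distance
    return np.exp(-d * d / (4.0 * t))
```

Identical distributions are at geodesic distance zero, so the kernel evaluates to 1 there and decays as the distributions separate on the simplex.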
Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes
Journal of Machine Learning Research, 2006
Cited by 66 (10 self)
This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) a general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators; (ii) a specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP; (iii) a three-phased procedure called representation policy iteration (RPI), comprising a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions; (iv) a specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method; (v) several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nyström extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs; and (vi) a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaborations of the proposed framework are briefly summarized at the end.
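Component (ii) of this framework can be sketched directly: proto-value functions are the low-order eigenvectors of the graph Laplacian built over the state graph. A minimal illustration follows; the 5-state chain MDP is a hypothetical example, and a dense eigensolver stands in for the sparse solvers one would use at scale.

```python
import numpy as np

def proto_value_functions(adjacency, k):
    """Return the k smoothest proto-value functions for a state graph:
    eigenvectors of the combinatorial graph Laplacian L = D - W for the
    k smallest eigenvalues. `adjacency` is a symmetric weight matrix over
    states, e.g. built from observed state transitions."""
    W = np.asarray(adjacency, dtype=float)
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, :k]               # columns = basis functions over states

# hypothetical example: a 5-state chain MDP with bidirectional moves
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
phi = proto_value_functions(W, k=3)  # feature matrix for value approximation
```

Each column of `phi` is one basis function over the 5 states; a value function is then approximated as a linear combination of these columns, with coefficients fit by a method such as LSPI.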
Anisotropic Diffusion of Surfaces and Functions on Surfaces
2003
Cited by 64 (5 self)
We present a unified anisotropic geometric diffusion PDE model for smoothing (fairing) out noise both in triangulated two-manifold surface meshes in R³ and in functions defined on these surface meshes, while enhancing curve features on both by careful choice of an anisotropic diffusion tensor. We combine the C¹ limit representation of Loop’s subdivision for triangular surface meshes and vector functions on the surface mesh with the established diffusion model to arrive at a discretized version of the diffusion problem in the spatial direction. The time-direction discretization then leads to a sparse linear system of equations. Iteratively solving this sparse linear system yields a sequence of faired (smoothed) meshes as well as faired functions.
Adaptive numerical treatment of elliptic systems on manifolds
Advances in Computational Mathematics, 15(1):139, 2001
Cited by 41 (24 self)
Adaptive multilevel finite element methods are developed and analyzed for certain elliptic systems arising in geometric analysis and general relativity. This class of nonlinear elliptic systems of tensor equations on manifolds is first reviewed, and then adaptive multilevel finite element methods for approximating solutions to this class of problems are considered in some detail. Two a posteriori error indicators are derived, based on local residuals and on global linearized adjoint or dual problems. The design of the Manifold Code (MC) is then discussed; MC is an adaptive multilevel finite element software package for 2- and 3-manifolds developed over several years at Caltech and UC San Diego. It employs a posteriori error estimation, adaptive simplex subdivision, unstructured algebraic multilevel methods, global inexact Newton methods, and numerical continuation methods for the numerical solution of nonlinear covariant elliptic systems on 2- and 3-manifolds. Some of the more interesting features of MC are described in detail, including some new ideas for topology and geometry representation in simplex meshes, and an unusual partition-of-unity-based method for exploiting parallel computers. A short example is then given which involves the Hamiltonian and momentum constraints in the Einstein equations, a representative nonlinear 4-component covariant elliptic system on a Riemannian 3-manifold which arises in general relativity. A number of recently established operator properties and solvability results are first summarized, making possible two quasi-optimal a priori error estimates for Galerkin approximations, which are then derived. These two results complete the theoretical framework for effective use of adaptive multilevel finite element methods. A sample calculation using the MC software is then presented.
Local discriminant embedding and its variants
In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005
Cited by 39 (0 self)
We present a new approach, called local discriminant embedding (LDE), to manifold learning and pattern classification. In our framework, the neighbor and class relations of data are used to construct the embedding for classification problems. The proposed algorithm learns the embedding for the submanifold of each class by solving an optimization problem. After being embedded into a low-dimensional subspace, data points of the same class maintain their intrinsic neighbor relations, whereas neighboring points of different classes no longer stick to one another. Via the embedding, new test data are thus more reliably classified by the nearest-neighbor rule, owing to its locally discriminating nature. We also describe two useful variants: two-dimensional LDE and kernel LDE. Comprehensive comparisons and extensive experiments on face recognition are included to demonstrate the effectiveness of our method.
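In the linear case, the optimization this abstract refers to reduces to a generalized eigenproblem between two neighborhood graphs: one over same-class neighbors and one over different-class neighbors. The sketch below follows that reading; the neighborhood size k, the unit graph weights, and the small ridge term are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def local_discriminant_embedding(X, y, d=2, k=3):
    """Sketch of linear LDE: build a same-class neighbor graph W and a
    different-class neighbor graph Wp, form their Laplacians, and keep
    the top-d eigenvectors of X^T Lp X v = lam X^T L X v."""
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    W = np.zeros((n, n))
    Wp = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(D2[i])
        same = [j for j in order if j != i and y[j] == y[i]][:k]
        diff = [j for j in order if y[j] != y[i]][:k]
        W[i, same] = 1.0
        Wp[i, diff] = 1.0
    W = np.maximum(W, W.T)
    Wp = np.maximum(Wp, Wp.T)
    L = np.diag(W.sum(1)) - W      # within-class Laplacian
    Lp = np.diag(Wp.sum(1)) - Wp   # between-class Laplacian
    A = X.T @ Lp @ X
    B = X.T @ L @ X + 1e-6 * np.eye(X.shape[1])  # ridge for stability
    vals, vecs = eigh(A, B)
    return vecs[:, ::-1][:, :d]    # project new data with X_new @ V
```

Maximizing between-class separation while preserving within-class neighborhoods is what pushes differently labeled neighbors apart in the embedded space.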
Proto-value functions: Developmental reinforcement learning
In Proceedings of the International Conference on Machine Learning, 2005
Cited by 38 (8 self)
This paper presents a novel framework called proto-reinforcement learning (PRL), based on a mathematical model of a proto-value function: these are task-independent basis functions that form the building blocks of all value functions on a given state space manifold. Proto-value functions are learned not from rewards, but instead from analyzing the topology of the state space. Formally, proto-value functions are Fourier eigenfunctions of the Laplace-Beltrami diffusion operator on the state space manifold. Proto-value functions facilitate structural decomposition of large state spaces, and form geodesically smooth orthonormal basis functions for approximating any value function. The theoretical basis for proto-value functions combines insights from spectral graph theory, harmonic analysis, and Riemannian manifolds. Proto-value functions enable a novel generation of algorithms called representation policy iteration, unifying the learning of representation and behavior.
Geometry and curvature of diffeomorphism groups with H¹ metric and mean hydrodynamics
1998
Cited by 37 (13 self)
In [HMR1], Holm, Marsden, and Ratiu derived a new model for the mean motion of an ideal fluid in Euclidean space, given by the equation V̇(t) + ∇_{U(t)}V(t) − α²[∇U(t)]ᵀ · △U(t) = −grad p(t), where div U = 0 and V = (1 − α²△)U. In this model, the momentum V is transported by the velocity U, with the effect that nonlinear interaction between modes corresponding to length scales smaller than α is negligible. We generalize this equation to the setting of an n-dimensional compact Riemannian manifold. The resulting equation is the Euler-Poincaré equation associated with the geodesic flow of the H¹ right-invariant metric on D^s_μ, the group of volume-preserving Hilbert diffeomorphisms of class H^s. We prove that the geodesic spray is continuously differentiable from TD^s_μ(M) into TTD^s_μ(M), so that a standard Picard iteration argument proves existence and uniqueness on a finite time interval. Our goal in this paper is to establish the foundations for Lagrangian stability analysis following Arnold [A]. To do so, we use submanifold geometry, and prove that the weak curvature tensor of the right-invariant H¹ metric on D^s_μ is a bounded trilinear map in the H^s topology, from which it follows that solutions to Jacobi’s equation exist. Using such solutions, we are able to study the infinitesimal stability behavior of geodesics.
Convergent Discrete Laplace-Beltrami Operators over Triangular Surfaces
Institute of Computational Mathematics, Chinese Academy of Sciences, 2004
Cited by 32 (7 self)
The convergence property of discrete Laplace-Beltrami operators is the foundation of the convergence analysis for numerical simulations of geometric partial differential equations that involve the operator. In this paper we propose several simple discretization schemes of Laplace-Beltrami operators over triangulated surfaces. Convergence results for these discrete Laplace-Beltrami operators are established under various conditions. Numerical results that support the theoretical analysis are given. Application examples of the proposed discrete Laplace-Beltrami operators in surface processing and modelling are also presented.
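For reference, one widely used member of this family of simple discretizations is the cotangent scheme. The sketch below is the textbook cotangent-weight Laplacian on a triangle mesh, offered only as an illustration of the kind of operator being analyzed, not as the paper's specific schemes.

```python
import numpy as np

def cotangent_laplacian(verts, faces):
    """Assemble the cotangent-weight discrete Laplace-Beltrami matrix.
    verts: (n, 3) float array of vertex positions.
    faces: (m, 3) int array of triangle vertex indices.
    Off-diagonal entry for edge (i, j) is (cot a + cot b) / 2, where a, b
    are the angles opposite the edge; rows sum to zero."""
    n = len(verts)
    C = np.zeros((n, n))
    for tri in faces:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            u = verts[i] - verts[o]
            v = verts[j] - verts[o]
            # cotangent of the angle at vertex o, opposite edge (i, j)
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            C[i, j] += 0.5 * cot
            C[j, i] += 0.5 * cot
    np.fill_diagonal(C, 0.0)
    C -= np.diag(C.sum(axis=1))  # zero row sums (Laplacian convention)
    return C
```

The zero row sums mean constant functions are annihilated, mirroring the smooth Laplace-Beltrami operator; convergence analyses of the kind described above study how such discrete operators approach the smooth one under mesh refinement.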