Results 1-10 of 158
Visualizing Data using t-SNE, 2008
Cited by 280 (13 self)
We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other nonparametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
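The mapping the abstract describes is available in standard libraries; a minimal sketch using scikit-learn's `TSNE` implementation of this technique (the dataset choice and parameter values here are illustrative assumptions, not from the paper):

```python
# Embed 64-dimensional digit images into a 2-D map with t-SNE.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)   # 1797 samples, 64 dimensions
X, y = X[:500], y[:500]               # subsample to keep the run fast

# Each datapoint receives a location in a two-dimensional map.
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X)
print(emb.shape)  # (500, 2)
```

The `perplexity` parameter plays the role of the effective neighborhood size; values between 5 and 50 are the commonly suggested range.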
Topology and Data, 2008
Cited by 119 (4 self)
An important feature of modern science and engineering is that data of various kinds is being produced at an unprecedented rate. This is so in part because of new experimental methods, and in part because of the increase in the availability of high-powered computing technology. It is also clear that the nature of the data ...
Data fusion and multi-cue data matching by diffusion maps
 IEEE Transactions on Pattern Analysis and Machine Intelligence
Cited by 57 (5 self)
Data fusion and multi-cue data matching are fundamental tasks of high-dimensional data analysis. In this paper, we apply the recently introduced diffusion framework to address these tasks. Our contribution is threefold: First, we present the Laplace-Beltrami approach for computing density-invariant embeddings, which are essential for integrating different sources of data. Second, we describe a refinement of the Nyström extension algorithm called “geometric harmonics.” We also explain how to use this tool for data assimilation. Finally, we introduce a multi-cue data matching scheme based on nonlinear spectral graph alignment. The effectiveness of the presented schemes is validated by applying them to the problems of lip-reading and image sequence alignment. Index Terms: Pattern matching, graph theory, graph algorithms, Markov processes, machine learning, data mining, image databases.
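The density-invariant (Laplace-Beltrami, alpha = 1) normalization mentioned above can be sketched in a few lines of NumPy; the toy data, variable names, and kernel bandwidth below are illustrative assumptions, not the paper's implementation:

```python
# Minimal diffusion-map embedding with Laplace-Beltrami normalization.
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    # Gaussian kernel on pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Laplace-Beltrami (alpha = 1) normalization: divide out the density
    # estimate so the embedding does not depend on the sampling density.
    q = K.sum(1)
    K = K / np.outer(q, q)
    # Row-normalize to a Markov matrix and take its leading nontrivial
    # eigenvectors as coordinates (via the symmetric conjugate for eigh).
    d = K.sum(1)
    A = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]   # right eigenvectors of the Markov matrix
    return psi[:, 1:n_components + 1] * vals[1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
emb = diffusion_map(X)
print(emb.shape)  # (100, 2)
```

The first eigenvector (eigenvalue 1) is constant and carries no information, which is why the embedding starts from index 1.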
Clustering and Embedding using Commute Times
 IEEE Transactions on Pattern Analysis and Machine Intelligence
Cited by 56 (5 self)
This paper exploits the properties of the commute time between nodes of a graph for the purposes of clustering and embedding, and explores its applications to image segmentation and multibody motion tracking. Our starting point is the lazy random walk on the graph, which is determined by the heat kernel of the graph and can be computed from the spectrum of the graph Laplacian. We characterize the random walk using the commute time (i.e. the expected time taken for a random walk to travel between two nodes and return) and show how this quantity may be computed from the Laplacian spectrum using the discrete Green’s function. Our motivation is that the commute time can be anticipated to be a more robust measure of the proximity of data than the raw proximity matrix. In this paper, we explore two applications of the commute time. The first is to develop a method for image segmentation using the eigenvector corresponding to the smallest eigenvalue of the commute time matrix. We show that our commute time segmentation method has the property of enhancing the intra-group coherence while weakening inter-group coherence and is superior to the normalized cut. The second application is to develop a robust multibody motion tracking method using an embedding based on the commute time. Our embedding procedure preserves commute time, and is closely akin to kernel PCA, the Laplacian eigenmap and the diffusion map. We illustrate the results both on synthetic image sequences and real-world video sequences, and compare our results with several alternative methods.
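The route from the Laplacian spectrum to commute times via the discrete Green's function can be sketched directly: the Green's function is the Moore-Penrose pseudoinverse of the Laplacian, and the commute time between nodes u and v is vol(G) times its associated effective resistance. The 4-node example graph below is illustrative, not from the paper:

```python
# Commute times from the graph Laplacian's pseudoinverse (Green's function).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # small undirected example graph
D = np.diag(A.sum(1))
L = D - A
vol = A.sum()                  # graph volume (sum of degrees)

# Discrete Green's function = Moore-Penrose pseudoinverse of L.
G = np.linalg.pinv(L)

# Commute time CT(u, v) = vol * (G[u, u] + G[v, v] - 2 * G[u, v]).
CT = vol * (np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G)
print(np.allclose(CT, CT.T))   # True: commute time is symmetric
```

Note that node 3, which hangs off the rest of the graph by a single edge, has a larger commute time to node 0 than the well-connected node 1 does, matching the intuition that commute time is a robust proximity measure.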
Dimensionality Reduction: A Comparative Review, 2008
Cited by 42 (0 self)
In recent years, a variety of nonlinear dimensionality reduction techniques have been proposed, many of which rely on the evaluation of local properties of the data. The paper presents a review and systematic comparison of these techniques. The performances of the techniques are investigated on artificial and natural tasks. The results of the experiments reveal that nonlinear techniques perform well on selected artificial tasks, but do not outperform the traditional PCA on real-world tasks. The paper explains these results by identifying weaknesses of current nonlinear techniques, and suggests how the performance of nonlinear dimensionality reduction techniques may be improved.
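A comparison in the spirit of this review can be reproduced in a few lines with scikit-learn, pitting PCA against a nonlinear technique (Isomap) on a synthetic manifold; the dataset, neighborhood size, and quality metric here are illustrative choices, not the review's protocol:

```python
# Compare a linear and a nonlinear reducer on a synthetic manifold.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, trustworthiness

X, _ = make_swiss_roll(n_samples=600, random_state=0)

emb_pca = PCA(n_components=2).fit_transform(X)
emb_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Trustworthiness in [0, 1] scores how well local neighborhoods survive
# the projection; higher is better.
print(round(trustworthiness(X, emb_pca), 3))
print(round(trustworthiness(X, emb_iso), 3))
```

Swapping the swiss roll for a natural dataset is the quickest way to see the review's point that nonlinear methods' advantage on artificial manifolds often does not carry over.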
Riemannian manifold learning
 IEEE Trans. Pattern Anal. Mach. Intell., 2008
Cited by 42 (0 self)
Recently, manifold learning has been widely exploited in pattern recognition, data analysis, and machine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensional Riemannian manifold. The main idea is to formulate the dimensionality reduction problem as a classical problem in Riemannian geometry, that is, how to construct coordinate charts for a given Riemannian manifold. We implement the Riemannian normal coordinate chart, which has been the most widely used in Riemannian geometry, for a set of unorganized data points. First, two input parameters (the neighborhood size k and the intrinsic dimension d) are estimated based on an efficient simplicial reconstruction of the underlying manifold. Then, the normal coordinates are computed to map the input high-dimensional data into a low-dimensional space. Experiments on synthetic data, as well as real-world images, demonstrate that our algorithm can learn intrinsic geometric structures of the data, preserve radial geodesic distances, and yield regular embeddings.
Power Iteration Clustering
Cited by 38 (5 self)
We show that the power iteration, typically used to approximate the dominant eigenvector of a matrix, can be applied to a normalized affinity matrix to create a one-dimensional embedding of the underlying data. This embedding is then used, as in spectral clustering, to cluster the data via k-means. We demonstrate this method’s effectiveness and scalability on several synthetic and real datasets, and conclude that to find a meaningful low-dimensional embedding for clustering, it is not necessary to find any eigenvectors: we just need a linear combination of the top eigenvectors.
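The idea is compact enough to sketch in NumPy: run power iteration on the row-normalized affinity matrix and stop early, while the cluster structure is still visible in the vector, then cluster the 1-D embedding. The toy data, fixed iteration count, and midpoint thresholding (standing in for k-means with k = 2) are all illustrative assumptions:

```python
# Truncated power iteration on a normalized affinity matrix as a 1-D
# clustering embedding.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs in 2-D.
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(4, 0.3, (50, 2))])

# Gaussian affinities, row-normalized to W = D^-1 A.
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
A = np.exp(-d2)
W = A / A.sum(1, keepdims=True)

# Stop well before v converges to the (uninformative, near-constant)
# dominant eigenvector: the intermediate vector is approximately a
# linear combination of the top eigenvectors.
v = rng.random(len(X))
for _ in range(30):
    v = W @ v
    v /= np.abs(v).sum()

# Cluster the 1-D embedding; thresholding at the midpoint of its range
# stands in for running k-means with k = 2.
labels = (v > (v.min() + v.max()) / 2).astype(int)
print(np.bincount(labels))
```

The vector converges to a near-constant within each cluster long before it converges globally, which is exactly the window the method exploits.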
Shape priors using Manifold Learning Techniques
 in 11th IEEE International Conference on Computer Vision, Rio de Janeiro, 2007
Cited by 38 (2 self)
We introduce a nonlinear shape prior for the deformable model framework that we learn from a set of shape samples using recent manifold learning techniques. We model a category of shapes as a finite-dimensional manifold, which we approximate using diffusion maps, and which we call the shape prior manifold. Our method computes a Delaunay triangulation of the reduced space, considered as Euclidean, and uses the resulting space partition to identify the closest neighbors of any given shape based on its Nyström extension. Our contribution lies in three aspects. First, we propose a solution to the pre-image problem and define the projection of a shape onto the manifold. Based on closest neighbors for the diffusion distance, we then describe a variational framework for manifold denoising. Finally, we introduce a shape prior term for the deformable framework through a nonlinear energy term designed to attract a shape towards the manifold at a given constant embedding. Results on shapes of cars and ventricle nuclei are presented and demonstrate the potential of our method.
Learning Semantic Visual Vocabularies Using Diffusion Distance
Cited by 38 (7 self)
In this paper, we propose a novel approach for learning a generic visual vocabulary. We use diffusion maps to automatically learn a semantic visual vocabulary from abundant quantized mid-level features. Each mid-level feature is represented by the vector of pointwise mutual information (PMI). In this mid-level feature space, we believe the features produced by similar sources must lie on a certain manifold. To capture the intrinsic geometric relations between features, we measure their dissimilarity using diffusion distance. The underlying idea is to embed the mid-level features into a semantic lower-dimensional space. Our goal is to construct a compact yet discriminative semantic visual vocabulary. Although the conventional approach using k-means is good for vocabulary construction, its performance is sensitive to the size of the visual vocabulary. In addition, the learnt visual words are not semantically meaningful, since the clustering criterion is based on appearance similarity only. Our proposed approach can effectively overcome these problems by capturing the semantic and geometric relations of the feature space using diffusion maps. Unlike some of the supervised vocabulary construction approaches, and the unsupervised methods such as pLSA and LDA, diffusion maps can capture the local intrinsic geometric relations between the mid-level feature points on the manifold. We have tested our approach on the KTH action dataset, our own YouTube action dataset, and the fifteen scene dataset, and have obtained very promising results.