Results 1–10 of 112
Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds
Journal of Machine Learning Research, 2003
"... The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. ..."
Abstract

Cited by 276 (10 self)
 Add to MetaCart
The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation.
Face Recognition: A Convolutional Neural Network Approach
IEEE Transactions on Neural Networks, 1997
"... Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult [43]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a selforganizing map n ..."
Abstract

Cited by 176 (0 self)
 Add to MetaCart
(Show Context)
Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult [43]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loève transform in place of the self-organizing map, and a multilayer perceptron in place of the convolutional network ...
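The self-organizing map stage of such a pipeline can be illustrated with a toy NumPy sketch. This is not the authors' implementation; the grid size, decay schedules, and the function names `train_som` and `quantize` are illustrative choices.

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal 2-D self-organizing map on the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    # One weight vector per node on a grid_h x grid_w grid.
    weights = rng.normal(size=(grid_h, grid_w, d))
    # Grid coordinates, used by the neighborhood function.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)   # shrinking neighborhood
        for x in data[rng.permutation(n)]:
            # Best-matching unit: the node whose weight is closest to x.
            dist = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dist), dist.shape)
            # Gaussian neighborhood around the BMU on the grid.
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, :, None] * (x - weights)
    return weights

def quantize(weights, x):
    """Map a sample to the 2-D grid coordinates of its best-matching unit."""
    dist = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dist), dist.shape)
```

Because updates pull whole grid neighborhoods toward each sample, inputs that are close in the original space end up mapping to nearby grid cells, which is the topology-preserving quantization the abstract describes.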
Charting a Manifold
Advances in Neural Information Processing Systems 15, 2003
"... this paper we use m i ( j ) N ( j ; i , s ), with the scale parameter s specifying the expected size of a neighborhood on the manifold in sample space. A reasonable choice is s = r/2, so that 2erf(2) > 99.5% of the density of m i ( j ) is contained in the area around y i where the manifold i ..."
Abstract

Cited by 170 (7 self)
 Add to MetaCart
(Show Context)
this paper we use m_i(y_j) ∝ N(y_j; μ_i, σ), with the scale parameter σ specifying the expected size of a neighborhood on the manifold in sample space. A reasonable choice is σ = r/2, so that erf(2) > 99.5% of the density of m_i(y_j) is contained in the area around y_i where the manifold is expected to be locally linear. With uniform p_i and μ_i, and with m_i(y_j) and Σ_j fixed, the MAP estimates of the GMM covariances are

    Σ_i = [ Σ_j m_i(y_j) ( (y_j − μ_i)(y_j − μ_i)^T + (μ_j − μ_i)(μ_j − μ_i)^T + Σ_j ) ] / Σ_j m_i(y_j).   (3)

Note that each covariance Σ_i is dependent on all other Σ_j. The MAP estimators for all covariances can be arranged into a set of fully constrained linear equations and solved exactly for their mutually optimal values. This key step brings nonlocal information about the manifold's shape into the local description of each neighborhood, ensuring that adjoining neighborhoods have similar covariances and small angles between their respective subspaces. Even if a local subset of data points is dense in a direction perpendicular to the manifold, the prior encourages the local chart to orient parallel to the manifold as part of a globally optimal solution, protecting against a pathology noted in [8].

Equation (3) is easily adapted to give a reduced number of charts and/or charts centered on local centroids.

4 Connecting the charts

We now build a connection for a set of charts specified as an arbitrary non-degenerate GMM. A GMM gives a soft partitioning of the dataset into neighborhoods of mean μ_k and covariance Σ_k. The optimal variance-preserving low-dimensional coordinate system for each neighborhood derives from its weighted principal component analysis, which is exactly specified by the eigenvectors of its covariance matrix: eigendecompose Σ_k = V_k Λ_k V_k^T with...
Dimension Reduction by Local Principal Component Analysis
1997
"... Reducing or eliminating statistical redundancy between the components of highdimensional vector data enables a lowerdimensional representation without significant loss of information. Recognizing the limitations of principal component analysis (PCA), researchers in the statistics and neural networ ..."
Abstract

Cited by 108 (0 self)
 Add to MetaCart
Reducing or eliminating statistical redundancy between the components of high-dimensional vector data enables a lower-dimensional representation without significant loss of information. Recognizing the limitations of principal component analysis (PCA), researchers in the statistics and neural network communities have developed nonlinear extensions of PCA. This article develops a local linear approach to dimension reduction that provides accurate representations and is fast to compute. We exercise the algorithms on speech and image data, and compare performance with PCA and with neural network implementations of nonlinear PCA. We find that both nonlinear techniques can provide more accurate representations than PCA and show that the local linear techniques outperform neural network implementations.
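A local linear approach of this flavor can be sketched as follows. This is a minimal NumPy sketch, not the article's algorithm: it pairs plain k-means with a per-cluster PCA, and the names `local_pca_fit` and `local_pca_reconstruct` are hypothetical.

```python
import numpy as np

def local_pca_fit(X, k=4, q=1, iters=50, seed=0):
    """Partition X with k-means, then fit a q-dimensional PCA in each cell."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to the nearest center (Euclidean distortion).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    bases = []
    for j in range(k):
        if not np.any(labels == j):          # sketch-level guard for empty cells
            bases.append(np.zeros((q, X.shape[1])))
            continue
        Xj = X[labels == j] - centers[j]
        # Top-q principal directions of this cluster.
        _, _, Vt = np.linalg.svd(Xj, full_matrices=False)
        bases.append(Vt[:q])
    return centers, bases

def local_pca_reconstruct(x, centers, bases):
    """Encode/decode x through the local model of its nearest cluster."""
    j = int(np.argmin(((centers - x) ** 2).sum(-1)))
    z = bases[j] @ (x - centers[j])       # local low-dimensional code
    return centers[j] + bases[j].T @ z    # reconstruction in input space
```

On curved data (e.g. points on a circle), the union of local linear patches reconstructs far more accurately than a single global PCA of the same dimension, which is the basic point of the comparison in the abstract.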
Global Coordination of Local Linear Models
Advances in Neural Information Processing Systems 14, 2002
"... High dimensional data that lies on or near a low dimensional manifold can be described by a collection of local linear models. Such a description, however, does not provide a global parameterization of the manifoldarguably an important goal of unsupervised learning. In this paper, we show how ..."
Abstract

Cited by 78 (2 self)
 Add to MetaCart
High dimensional data that lies on or near a low dimensional manifold can be described by a collection of local linear models. Such a description, however, does not provide a global parameterization of the manifold, arguably an important goal of unsupervised learning. In this paper, we show how to learn a collection of local linear models that solves this more difficult problem. Our local linear models are represented by a mixture of factor analyzers, and the "global coordination" of these models is achieved by adding a regularizing term to the standard maximum likelihood objective function. The regularizer breaks a degeneracy in the mixture model's parameter space, favoring models whose internal coordinate systems are aligned in a consistent way. As a result, the internal coordinates change smoothly and continuously as one traverses a connected path on the manifold, even when the path crosses the domains of many different local models. The regularizer takes the form of a Kullback-Leibler divergence and illustrates an unexpected application of variational methods: not to perform approximate inference in intractable probabilistic models, but to learn more useful internal representations in tractable ones.
Mapping a manifold of perceptual observations
Advances in Neural Information Processing Systems 10, 1998
"... Nonlinear dimensionality reduction is formulated here as the problem of trying to find a Euclidean featurespace embedding of a set of observations that preserves as closely as possible their intrinsic metric structure – the distances between points on the observation manifold as measured along geod ..."
Abstract

Cited by 77 (2 self)
 Add to MetaCart
(Show Context)
Nonlinear dimensionality reduction is formulated here as the problem of trying to find a Euclidean feature-space embedding of a set of observations that preserves as closely as possible their intrinsic metric structure – the distances between points on the observation manifold as measured along geodesic paths. Our isometric feature mapping procedure, or isomap, is able to reliably recover low-dimensional nonlinear structure in realistic perceptual data sets, such as a manifold of face images, where conventional global mapping methods find only local minima. The recovered map provides a canonical set of globally meaningful features, which allows perceptual transformations such as interpolation, extrapolation, and analogy – highly nonlinear transformations in the original observation space – to be computed with simple linear operations in feature space.
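The procedure outlined above (neighborhood graph, geodesic distances, metric embedding) can be sketched in a few lines of NumPy. This toy version uses Floyd-Warshall shortest paths and classical MDS; it is one common reading of the method, not the authors' code, and the function name and parameters are illustrative.

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """A minimal Isomap sketch: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    # Pairwise Euclidean distances.
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    # Keep only edges to the k nearest neighbors (symmetrized); others start at infinity.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    nbrs = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        G[i, nbrs[i]] = D[i, nbrs[i]]
        G[nbrs[i], i] = D[i, nbrs[i]]
    # Geodesic distances by Floyd-Warshall (O(n^3); fine for a toy example).
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the geodesic distance matrix.
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * H @ (G ** 2) @ H                  # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:n_components]  # top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

On a one-dimensional arc embedded in the plane, the single recovered coordinate tracks arc length, which is exactly the "globally meaningful feature" the abstract refers to. A practical implementation would use a sparse-graph shortest-path routine instead of dense Floyd-Warshall.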
Fast nonlinear dimension reduction
IEEE International Conference on Neural Networks, 1993
"... We present a fast algorithm for nonlinear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distor ..."
Abstract

Cited by 48 (5 self)
 Add to MetaCart
(Show Context)
We present a fast algorithm for nonlinear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five-layer autoassociative networks. The local linear algorithm is also more than an order of magnitude faster to train.
Principal Manifolds and Bayesian Subspaces for Visual Recognition
International Conference on Computer Vision, 1999
"... We investigate the use of linear and nonlinear principal manifolds for learning lowdimensional representations for visual recognition. Three techniques: Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Nonlinear PCA (NLPCA) are examined and tested in a visual recognition ..."
Abstract

Cited by 45 (1 self)
 Add to MetaCart
(Show Context)
We investigate the use of linear and nonlinear principal manifolds for learning low-dimensional representations for visual recognition. Three techniques: Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Nonlinear PCA (NLPCA) are examined and tested in a visual recognition experiment using a large gallery of facial images from the "FERET" database. We compare the recognition performance of a nearest-neighbour matching rule with each principal manifold representation to that of a maximum a posteriori (MAP) matching rule using a Bayesian similarity measure derived from probabilistic subspaces and demonstrate the superiority of the latter.
Neural network approaches to image compression
Proc. IEEE, 1995
"... Abstract — This paper presents a tutorial overview of neural networks as signal processing tools for image compression. They are well suited to the problem of image compression due to their massively parallel and distributed architecture. Their characteristics are analogous to some of the features o ..."
Abstract

Cited by 40 (1 self)
 Add to MetaCart
(Show Context)
This paper presents a tutorial overview of neural networks as signal processing tools for image compression. They are well suited to the problem of image compression due to their massively parallel and distributed architecture. Their characteristics are analogous to some of the features of our own visual system, which allow us to process visual information with much ease. For example, multilayer perceptrons can be used as nonlinear predictors in differential pulse-code modulation (DPCM). Such predictors have been shown to increase the predictive gain relative to a linear predictor. Another active area of research is in the application of Hebbian learning to the extraction of principal components, which are the basis vectors for the optimal linear Karhunen-Loève transform (KLT). These learning algorithms are iterative, have some computational advantages over standard eigendecomposition techniques, and can be made to adapt to changes in the input signal. Yet another model, the self-organizing feature map (SOFM), has been used with a great deal of success in the design of codebooks for vector quantization (VQ). The resulting codebooks are less sensitive to initial conditions than the standard LBG algorithm, and the topological ordering of the entries can be exploited to further increase coding efficiency and reduce computational complexity.
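The Hebbian extraction of principal components mentioned above can be illustrated with Oja's rule, a standard iterative algorithm of this family. This is a minimal sketch, not the survey's material; the learning rate, epoch count, and function name are arbitrary choices.

```python
import numpy as np

def oja_first_component(X, epochs=50, lr=0.01, seed=0):
    """Estimate the first principal component of X with Oja's Hebbian rule."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)             # the rule assumes zero-mean inputs
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in Xc[rng.permutation(len(Xc))]:
            y = w @ x                   # linear neuron output
            w += lr * y * (x - y * w)   # Hebbian term with built-in weight decay
    return w / np.linalg.norm(w)
```

The `- y * w` decay term keeps the weight vector bounded, so the update converges toward the leading eigenvector of the input covariance without any explicit eigendecomposition, which is the computational advantage the abstract alludes to.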
Manifold reconstruction in arbitrary dimensions using witness complexes
Proc. 23rd ACM Sympos. on Comput. Geom., 2007
"... It is a wellestablished fact that the witness complex is closely related to the restricted Delaunay triangulation in low dimensions. Specifically, it has been proved that the witness complex coincides with the restricted Delaunay triangulation on curves, and is still a subset of it on surfaces, und ..."
Abstract

Cited by 32 (7 self)
 Add to MetaCart
(Show Context)
It is a well-established fact that the witness complex is closely related to the restricted Delaunay triangulation in low dimensions. Specifically, it has been proved that the witness complex coincides with the restricted Delaunay triangulation on curves, and is still a subset of it on surfaces, under mild sampling assumptions. Unfortunately, these results do not extend to higher-dimensional manifolds, even under stronger sampling conditions. In this paper, we show how the sets of witnesses and landmarks can be enriched, so that the nice relations that exist between both complexes still hold on higher-dimensional manifolds. We also use our structural results to devise an algorithm that reconstructs manifolds of any arbitrary dimension or codimension at different scales. The algorithm combines a farthest-point refinement scheme with a vertex pumping strategy. It is very simple conceptually, and it does not require the input point sample W to be sparse. Its time complexity is bounded by c(d)·|W|², where c(d) is a constant depending solely on the dimension d of the ambient space.
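The witness condition these complexes build on can be illustrated for the 1-skeleton in a few lines. This is a toy sketch of one common (strict) variant, not the paper's enriched construction or its farthest-point/pumping algorithm, and the function name is hypothetical.

```python
import numpy as np

def witness_edges(witnesses, landmarks):
    """1-skeleton of a strict witness complex: include edge {a, b} when some
    witness has landmarks a and b as its two nearest landmarks."""
    # Distance from every witness (row) to every landmark (column).
    D = np.linalg.norm(witnesses[:, None] - landmarks[None], axis=2)
    edges = set()
    for row in D:
        a, b = np.argsort(row)[:2]          # two nearest landmarks of this witness
        edges.add((min(a, b), max(a, b)))
    return sorted(edges)
```

With landmarks sampled sparsely and witnesses sampled densely along a curve, only edges between consecutive landmarks get witnessed, which is why the complex recovers the curve's connectivity under the sampling assumptions discussed above.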