Results 1-10 of 10
Charting a Manifold
 Advances in Neural Information Processing Systems 15, 2003
Cited by 161 (7 self)
Abstract
... In this paper we use m_i(j) ∝ N(y_j; y_i, σ), with the scale parameter σ specifying the expected size of a neighborhood on the manifold in sample space. A reasonable choice is σ = r/2, so that 2·erf(2) > 99.5% of the density of m_i(j) is contained in the area around y_i where the manifold is expected to be locally linear. With uniform p_i and μ_i, and m_i(j) fixed, the MAP estimates of the GMM covariances are

Σ_i = [ ∑_j m_i(j) ( (y_j − μ_i)(y_j − μ_i)^T + (μ_j − μ_i)(μ_j − μ_i)^T + Σ_j ) ] / ∑_j m_i(j).   (3)

Note that each covariance Σ_i depends on all the other Σ_j. The MAP estimators for all covariances can be arranged into a set of fully constrained linear equations and solved exactly for their mutually optimal values. This key step brings nonlocal information about the manifold's shape into the local description of each neighborhood, ensuring that adjoining neighborhoods have similar covariances and small angles between their respective subspaces. Even if a local subset of data points is dense in a direction perpendicular to the manifold, the prior encourages the local chart to orient parallel to the manifold as part of a globally optimal solution, protecting against a pathology noted in [8]. Equation (3) is easily adapted to give a reduced number of charts and/or charts centered on local centroids.

4 Connecting the charts

We now build a connection for a set of charts specified as an arbitrary nondegenerate GMM. A GMM gives a soft partitioning of the dataset into neighborhoods of mean μ_k and covariance Σ_k. The optimal variance-preserving low-dimensional coordinate system for each neighborhood derives from its weighted principal component analysis, which is exactly specified by the eigenvectors of its covariance matrix: eigendecompose V_k Λ_k V_k^T ← Σ_k with ...
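The per-neighborhood step at the end of this excerpt (a weighted PCA obtained by eigendecomposing a neighborhood covariance) can be sketched as follows. This is a toy illustration only: the data, weights, and function name are invented here, and the coupled MAP solve of Eq. (3) that ties the covariances together is omitted.

```python
import numpy as np

def local_chart(points, center, weights, d):
    """Weighted PCA of one neighborhood: eigendecompose the weighted
    covariance and keep the top-d eigenvectors as the chart basis.
    (Illustrative sketch; the paper couples all covariances via Eq. (3).)"""
    X = points - center
    w = weights / weights.sum()
    cov = (X * w[:, None]).T @ X          # weighted covariance Sigma_k
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    V = evecs[:, ::-1][:, :d]             # top-d principal directions
    return V, X @ V                       # chart basis and local coordinates

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
pts[:, 2] *= 0.01                          # near-planar cloud in 3-D
V, coords = local_chart(pts, pts.mean(0), np.ones(50), d=2)
```

For this near-planar cloud the recovered 2-D chart lies almost entirely in the xy-plane, i.e. the third component of each basis vector is close to zero.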
A Data-Driven Reflectance Model
 ACM Transactions on Graphics, 2003
Cited by 143 (6 self)
Abstract
We present a generative model for isotropic bidirectional reflectance distribution functions (BRDFs) based on acquired reflectance data. Instead of using analytical reflectance models, we represent each BRDF as a dense set of measurements. This allows us to interpolate and extrapolate in the space of acquired BRDFs to create new BRDFs. We treat each acquired BRDF as a single high-dimensional vector taken from the space of all possible BRDFs. We apply both linear (subspace) and nonlinear (manifold) dimensionality reduction tools in an effort to discover a lower-dimensional representation that characterizes our measurements. We let users define perceptually meaningful parametrization directions to navigate in the reduced-dimension BRDF space. On the low-dimensional manifold, movement along these directions produces novel but valid BRDFs.
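The linear (subspace) half of this pipeline can be illustrated with a toy sketch: treat each flattened measurement vector as a row, reduce with PCA via the SVD, and interpolate in the reduced coordinates to synthesize a new sample. The synthetic data and variable names are stand-ins, not acquired BRDFs.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in data: each row plays the role of one densely sampled BRDF,
# flattened to a vector (real measurements have millions of entries).
basis = rng.normal(size=(3, 200))
weights = rng.normal(size=(40, 3))
brdfs = weights @ basis                      # samples on a 3-dim subspace

mean = brdfs.mean(0)
U, S, Vt = np.linalg.svd(brdfs - mean, full_matrices=False)
coords = U[:, :3] * S[:3]                    # linear (subspace) coordinates

# Interpolating halfway between two samples in the reduced space
# yields a new, plausible "BRDF" vector.
new_brdf = mean + 0.5 * (coords[0] + coords[1]) @ Vt[:3]
```

Because this toy data lies exactly on a 3-dimensional subspace, the interpolated vector coincides with the average of the two original rows.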
Principal manifolds and nonlinear dimensionality reduction via tangent space alignment, Zhenyue Zhang, Hongyuan Zha
 SIAM Journal on Scientific Computing, 2004
Cited by 136 (8 self)
Abstract
Abstract. Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher-dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.
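A minimal numpy sketch of the algorithm as described above: fit a d-dimensional tangent space in each k-neighborhood via an SVD, accumulate the local alignment penalties into one matrix, and read the global coordinates off its bottom eigenvectors. The neighborhood size and the toy curve are arbitrary choices, not from the paper.

```python
import numpy as np

def ltsa(X, k=10, d=1):
    """Minimal LTSA sketch: per-neighborhood tangent coordinates from an
    SVD, then a partial eigendecomposition of the alignment matrix."""
    n = X.shape[0]
    # pairwise distances -> k nearest neighbors (the point itself included)
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(D, axis=1)[:, :k]
    B = np.zeros((n, n))
    for i in range(n):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(0)            # center the neighborhood
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        G = np.hstack([np.ones((k, 1)) / np.sqrt(k), Xi @ Vt[:d].T])
        # projector onto the complement of span{1, local tangent coords}
        W = np.eye(k) - G @ np.linalg.pinv(G)
        B[np.ix_(idx, idx)] += W
    evals, evecs = np.linalg.eigh(B)
    return evecs[:, 1:d + 1]                    # skip the constant eigenvector

t = np.linspace(0, 3, 60)
curve = np.c_[np.cos(t), np.sin(t)]             # circular arc, intrinsic dim 1
Y = ltsa(curve, k=8, d=1)
```

On this arc the recovered 1-D coordinate tracks the arc-length parameter t almost perfectly (up to sign and scale).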
Graph embedding and extension: A general framework for dimensionality reduction
 IEEE Trans. Pattern Anal. Mach. Intell., 2007
Cited by 87 (12 self)
Abstract
Abstract—Over the past few decades, a large family of algorithms—supervised or unsupervised, stemming from statistics or from geometry—has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions. Index Terms—Dimensionality reduction, manifold learning, subspace learning, graph embedding framework.
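A rough sketch of the "direct graph embedding" this framework describes: minimize the weighted sum of squared embedding distances over an intrinsic graph W, subject to a scale constraint from a second graph B, which reduces to the generalized eigenproblem L y = λ B y with L = D − W. The chain-graph example and the default B = D are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def graph_embedding(W, B=None, d=2):
    """Direct graph embedding sketch: solve L y = lambda * B y for the
    smallest nontrivial eigenvectors, where L = D - W is the Laplacian
    of the intrinsic graph. B defaults to the degree matrix (a
    Laplacian-eigenmaps-style normalization, assumed here)."""
    D = np.diag(W.sum(1))
    L = D - W
    if B is None:
        B = D
    R = np.linalg.cholesky(B)                  # B = R R^T
    Ri = np.linalg.inv(R)
    evals, U = np.linalg.eigh(Ri @ L @ Ri.T)   # whitened symmetric problem
    return Ri.T @ U[:, 1:d + 1]                # skip the trivial constant

# Intrinsic graph: a chain of 6 nodes.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
Y = graph_embedding(W, d=2)
```

For a chain graph the first embedding coordinate orders the nodes along the chain, which is the expected behavior of the smallest nontrivial generalized eigenvector.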
Graph Embedding: A General Framework for Dimensionality Reduction
 In Proc. International Conference on Computer Vision and Pattern Recognition, 2005
Cited by 27 (2 self)
Abstract
In the last few decades, a large family of algorithms—supervised or unsupervised, stemming from statistics or from geometry—have been proposed to provide different solutions to the problem of dimensionality reduction. In this paper, beyond the different motivations of these algorithms, we propose a general framework, graph embedding along with its linearization and kernelization, which in theory reveals the underlying objective shared by most previous algorithms. It presents a unified perspective for understanding these algorithms; that is, each algorithm can be considered as the direct graph embedding or its linear/kernel extension of some specific graph characterizing certain statistical or geometric properties of a data set. Furthermore, this framework is a general platform for developing new algorithms for dimensionality reduction. To this end, we propose a new supervised algorithm, Marginal Fisher Analysis (MFA), for dimensionality reduction by designing two graphs that characterize the intraclass compactness and interclass separability, respectively. MFA measures the intraclass compactness with the distance between each data point and its neighboring points of the same class, and measures the interclass separability with the class margins; thus it overcomes the limitations of the traditional Linear Discriminant Analysis algorithm in terms of data distribution assumptions and available projection directions. The toy problem on artificial data and the real face recognition experiments both show the superiority of our proposed MFA in comparison to LDA.
Differential structure in nonlinear image embedding functions
 In Articulated and Nonrigid Motion, 2004
Cited by 5 (2 self)
Abstract
Many natural image sets are samples of a low-dimensional manifold in the space of all possible images. When the image data set is not a linear combination of a small number of basis images, linear dimensionality reduction techniques such as PCA and ICA fail, and nonlinear dimensionality reduction techniques are required to automatically determine the intrinsic structure of the image set. Recent techniques such as Isomap and LLE provide a mapping between the images and a low-dimensional parameterization of the images. In this paper we consider how choosing different image distance metrics affects the low-dimensional parameterization. For image sets that arise in nonrigid and human motion analysis and in MRI applications, differential motions in some directions of the low-dimensional space correspond to common transformations in the image domain. Defining distance measures that are invariant to these transformations makes Isomap a powerful tool for automatic registration of large image or video data sets.
Analysis of an alignment algorithm for nonlinear manifold learning, BIT
Cited by 1 (1 self)
Abstract
Abstract. The goal of dimensionality reduction or manifold learning for a given set of high-dimensional data points is to find a low-dimensional parametrization for them. Usually it is easy to carry out this parametrization process within a small region to produce a collection of local coordinate systems. Alignment is the process of stitching those local systems together to produce a global coordinate system, and is done through the computation of a partial eigendecomposition of a so-called alignment matrix. In this paper, we present an analysis of the alignment process giving conditions under which the null space of the alignment matrix recovers the global coordinate system up to an affine transformation. We also propose a postprocessing step that can determine the global coordinate system up to a rigid motion. This in turn shows that the Local Tangent Space Alignment (LTSA) method can recover a locally isometric embedding up to a rigid motion.
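The null-space claim can be checked numerically on a toy instance: build an alignment matrix from overlapping local parametrizations and verify that it annihilates both the constant vector and the global coordinates. Collinear data is used here so that each local chart is exactly affine in the global parameter; the window layout is an arbitrary illustrative choice.

```python
import numpy as np

n, k = 30, 8
t = np.linspace(0, 1, n)                    # global coordinate to recover
line = np.c_[t, 2 * t + 1]                  # collinear data: charts are exact
B = np.zeros((n, n))
for i in range(0, n - k + 1, 4):            # overlapping local windows
    idx = np.arange(i, i + k)
    Xi = line[idx] - line[idx].mean(0)
    _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
    # local coordinate system: constant vector plus 1-D tangent coordinates
    G = np.hstack([np.ones((k, 1)) / np.sqrt(k), Xi @ Vt[:1].T])
    B[np.ix_(idx, idx)] += np.eye(k) - G @ np.linalg.pinv(G)

# span{1, t} should lie in the null space of the alignment matrix,
# i.e. the global coordinates are recovered up to an affine transform.
resid_coord = np.linalg.norm(B @ (t - t.mean()))
resid_const = np.linalg.norm(B @ np.ones(n))
```

Both residuals are zero up to machine precision, which is the finite-sample counterpart of the affine-recovery condition the paper analyzes.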
Unsupervised Dimensionality Reduction: Overview and Recent Advances, 2010
Cited by 1 (0 self)
Abstract
Unsupervised dimensionality reduction aims at representing high-dimensional data in lower-dimensional spaces in a faithful way. Dimensionality reduction can be used for compression or denoising purposes, but data visualization remains one of its most prominent applications. This paper attempts to give a broad overview of the domain. Past developments are briefly introduced and placed on a timeline of the last eleven decades. Next, the principles and techniques involved in the major methods are described. A taxonomy of the methods is suggested, taking into account various properties. Finally, the issue of quality assessment is briefly dealt with.
Maximal Linear Embedding (MLE)
Abstract
Abstract—Over the past few decades, dimensionality reduction has been widely exploited in computer vision and pattern analysis. This paper proposes a simple but effective nonlinear dimensionality reduction algorithm, named Maximal Linear Embedding (MLE). MLE learns a parametric mapping to recover a single global low-dimensional coordinate space and yields an isometric embedding for the manifold. Inspired by geometric intuition, we introduce a reasonable definition of a locally linear patch, the Maximal Linear Patch (MLP), which seeks to maximize the local neighborhood in which linearity holds. The input data are first decomposed into a collection of local linear models, each depicting an MLP. These local models are then aligned into a global coordinate space, which is achieved by applying MDS to some randomly selected landmarks. The proposed alignment method, called Landmarks-based Global Alignment (LGA), can efficiently produce a closed-form solution with no risk of local optima. It involves only some small-scale eigenvalue problems, while most previous alignment techniques employ time-consuming iterative optimization. Compared with traditional methods such as Isomap and LLE, our MLE yields an explicit modeling of the intrinsic variation modes of the observation data. Extensive experiments on both synthetic and real data indicate the effectiveness and efficiency of the proposed algorithm. Index Terms—Dimensionality reduction, manifold learning, maximal linear patch, landmarks-based global alignment.
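One way to read the "maximal linear patch" idea: greedily grow a neighborhood for as long as a rank-1 PCA fit of the patch stays nearly exact. The growth rule, threshold, and toy data below are guesses made for illustration; they are not the paper's actual criterion.

```python
import numpy as np

def grow_mlp(X, seed, tol=0.1):
    """Hypothetical sketch of growing a Maximal Linear Patch: add the
    nearest remaining point while a 1-D PCA fit keeps the relative
    residual below tol (greedy order and threshold are assumptions)."""
    order = np.argsort(((X - X[seed]) ** 2).sum(1))
    patch = list(order[:3])                  # need a few points to start
    for j in order[3:]:
        trial = X[patch + [j]] - X[patch + [j]].mean(0)
        s = np.linalg.svd(trial, compute_uv=False)
        if s[1:].sum() / s.sum() > tol:      # patch no longer nearly linear
            break
        patch.append(j)
    return patch

t = np.linspace(0, np.pi, 100)
arc = np.c_[np.cos(t), np.sin(t)]            # curved data: growth must stop
patch = grow_mlp(arc, seed=0, tol=0.1)
```

On this half-circle the patch stops well short of the whole data set, since the curvature eventually violates the linearity threshold.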