Results 11–20 of 306
Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis
Journal of Machine Learning Research, 2007
"... Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in highdimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a ..."
Abstract

Cited by 56 (10 self)
Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called locality-preserving projection (LPP) can work well with multimodal data due to its locality-preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation, and the solution can be computed simply by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of LFDA in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to nonlinear dimensionality reduction scenarios by applying the kernel trick.
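The generalized eigenvalue step mentioned in the abstract can be sketched in a few lines. The scatter matrices `S_lb` and `S_lw` below are toy stand-ins (real LFDA builds local between-/within-class scatter from affinity-weighted pairwise sample differences), and the function name is hypothetical:

```python
import numpy as np

def lfda_like_embedding(S_lb, S_lw, dim):
    """Solve the generalized eigenproblem S_lb v = lambda * S_lw v by
    Cholesky whitening and keep the top-`dim` eigenvectors."""
    L = np.linalg.cholesky(S_lw)                  # S_lw = L @ L.T
    Li = np.linalg.inv(L)
    vals, U = np.linalg.eigh(Li @ S_lb @ Li.T)    # ascending eigenvalues
    V = Li.T @ U                                  # map back: v = L^{-T} u
    return V[:, np.argsort(vals)[::-1][:dim]]

# toy positive-definite scatter matrices (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); S_lb = A @ A.T + np.eye(5)
B = rng.standard_normal((5, 5)); S_lw = B @ B.T + np.eye(5)
T = lfda_like_embedding(S_lb, S_lw, 2)            # a 5x2 embedding transform
```

By construction the columns of `T` are orthonormal with respect to `S_lw`, which is the usual normalization for discriminant embeddings.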
Nonlinear dimensionality reduction by semidefinite programming and kernel matrix factorization
In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, 2005
"... We describe an algorithm for nonlinear dimensionality reduction based on semidefinite programming and kernel matrix factorization. The algorithm learns a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. In earlier work, the kernel matrix was learned by maximiz ..."
Abstract

Cited by 50 (5 self)
We describe an algorithm for nonlinear dimensionality reduction based on semidefinite programming and kernel matrix factorization. The algorithm learns a kernel matrix for high-dimensional data that lies on or near a low-dimensional manifold. In earlier work, the kernel matrix was learned by maximizing the variance in feature space while preserving the distances and angles between nearest neighbors. In this paper, adapting recent ideas from semi-supervised learning on graphs, we show that the full kernel matrix can be very well approximated by a product of smaller matrices. Representing the kernel matrix in this way, we can reformulate the semidefinite program in terms of a much smaller submatrix of inner products between randomly chosen landmarks. The new framework leads to order-of-magnitude reductions in computation time and makes it possible to study much larger problems in manifold learning.
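The landmark factorization can be illustrated with a Nyström-style approximation, K ≈ C W⁺ Cᵀ, where C holds the landmark columns of K and W the landmark-landmark block. The paper's construction differs in detail; the data and kernel bandwidth here are made up for the sketch:

```python
import numpy as np

def nystrom_approx(K, landmarks):
    """Approximate a full kernel matrix from a subset of landmark columns:
    K ~ C @ pinv(W) @ C.T with C = K[:, landmarks], W = K[landmarks][:, landmarks]."""
    C = K[:, landmarks]
    W = C[landmarks, :]
    return C @ np.linalg.pinv(W) @ C.T

# toy RBF kernel on 1-D points (hypothetical data and bandwidth)
x = np.linspace(0.0, 1.0, 50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
K_hat = nystrom_approx(K, np.arange(0, 50, 5))    # 10 evenly spread landmarks
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

For a smooth kernel like this, a handful of well-spread landmarks already gives a very accurate reconstruction, which is the effect the abstract exploits to shrink the semidefinite program.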
A Bayesian method for probable surface reconstruction and decimation
ACM Transactions on Graphics, 2006
"... We present a Bayesian technique for the reconstruction and subsequent decimation of 3D surface models from noisy sensor data. The method uses oriented probabilistic models of the measurement noise, and combines them with featureenhancing prior probabilities over 3D surfaces. When applied to surface ..."
Abstract

Cited by 40 (5 self)
We present a Bayesian technique for the reconstruction and subsequent decimation of 3D surface models from noisy sensor data. The method uses oriented probabilistic models of the measurement noise and combines them with feature-enhancing prior probabilities over 3D surfaces. When applied to surface reconstruction, the method simultaneously smooths noisy regions while enhancing features such as corners. When applied to surface decimation, it finds models that closely approximate the original mesh when rendered. The method is applied in the context of computer animation, where it finds decimations that minimize the visual error even under non-rigid deformations.
Analysis and extension of spectral methods for nonlinear dimensionality reduction
In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML-05), 2005
"... Many unsupervised algorithms for nonlinear dimensionality reduction, such as locally linear embedding (LLE) and Laplacian eigenmaps, are derived from the spectral decompositions of sparse matrices. While these algorithms aim to preserve certain proximity relations on average, their embeddings are no ..."
Abstract

Cited by 34 (6 self)
Many unsupervised algorithms for nonlinear dimensionality reduction, such as locally linear embedding (LLE) and Laplacian eigenmaps, are derived from the spectral decompositions of sparse matrices. While these algorithms aim to preserve certain proximity relations on average, their embeddings are not explicitly designed to preserve local features such as distances or angles. In this paper, we show how to construct a low-dimensional embedding that maximally preserves angles between nearby data points. The embedding is derived from the bottom eigenvectors of LLE and/or Laplacian eigenmaps by solving an additional (but small) problem in semidefinite programming, whose size is independent of the number of data points. The solution obtained by semidefinite programming also yields an estimate of the data’s intrinsic dimensionality. Experimental results on several data sets demonstrate the merits of our approach.
An efficient algorithm for local distance metric learning
In Proceedings of AAAI, 2006
"... Learning applicationspecific distance metrics from labeled data is critical for both statistical classification and information retrieval. Most of the earlier work in this area has focused on finding metrics that simultaneously optimize compactness and separability in a global sense. Specifically, ..."
Abstract

Cited by 33 (9 self)
Learning application-specific distance metrics from labeled data is critical for both statistical classification and information retrieval. Most of the earlier work in this area has focused on finding metrics that simultaneously optimize compactness and separability in a global sense. Specifically, such distance metrics attempt to keep all of the data points in each class close together while ensuring that data points from different classes are separated. However, particularly when classes exhibit multimodal data distributions, these goals conflict and thus cannot be simultaneously satisfied. This paper proposes a Local Distance Metric (LDM) that aims to optimize local compactness and local separability. We present an efficient algorithm that employs eigenvector analysis and bound optimization to learn the LDM from training data in a probabilistic framework. We demonstrate that LDM achieves significant improvements in both classification and retrieval accuracy compared to global distance learning and kernel-based KNN.
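The end product of metric learning of this kind is a Mahalanobis-style distance. A minimal sketch of applying a learned matrix M = LᵀL follows; the learning step itself (the eigenvector analysis and bound optimization the paper contributes) is omitted, and the diagonal `L` below is purely illustrative:

```python
import numpy as np

def metric_distance(x, y, L):
    """Distance under a learned metric M = L.T @ L: d(x, y) = ||L (x - y)||_2."""
    return float(np.linalg.norm(L @ (x - y)))

# a hypothetical learned transform that stretches the first coordinate;
# with L = identity this reduces to the ordinary Euclidean distance
L = np.diag([3.0, 1.0])
d = metric_distance(np.array([0.0, 0.0]), np.array([1.0, 1.0]), L)
```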
Action respecting embedding
In Proceedings of the Twenty-Second International Conference on Machine Learning, 2005
"... ARE is a nonlinear dimensionality reduction technique for embedding observation trajectories, which captures state dynamics that traditional methods do not. The core of ARE is a semidefinite optimization with constraints requiring actions to be distancepreserving in the resulting embedding. Unfort ..."
Abstract

Cited by 32 (4 self)
ARE is a nonlinear dimensionality reduction technique for embedding observation trajectories, which captures state dynamics that traditional methods do not. The core of ARE is a semidefinite optimization with constraints requiring actions to be distance-preserving in the resulting embedding. Unfortunately, these constraints are quadratic in number and non-local (making recent scaling tricks inapplicable). Consequently, the original formulation was limited to relatively small datasets. This paper describes two techniques to mitigate these issues. We first introduce an action-guided variant of Isomap. Although it alone does not produce action-respecting manifolds, it can be used to seed conjugate gradient to implicitly solve the primal-variable formulation of the ARE optimization. The optimization is not convex, but Action-Guided Isomap provides an excellent seed, often very close to the global minimum. The resulting Scalable ARE procedure gives results similar to the original ARE but can be applied to datasets an order of magnitude larger.
Segmenting motions of different types by unsupervised manifold clustering
In Proceedings of CVPR, 2007
"... We propose a novel algorithm for segmenting multiple motions of different types from point correspondences in multiple affine or perspective views. Since point trajectories associated with different motions live in different manifolds, traditional approaches deal with only one manifold type: linear ..."
Abstract

Cited by 27 (3 self)
We propose a novel algorithm for segmenting multiple motions of different types from point correspondences in multiple affine or perspective views. Since point trajectories associated with different motions live in different manifolds, traditional approaches deal with only one manifold type: linear subspaces for affine views, and homographic, bilinear, and trilinear varieties for two and three perspective views. As real motion sequences contain motions of different types, we cast motion segmentation as a problem of clustering manifolds of different types. Rather than explicitly modeling each manifold as a linear, bilinear, or multilinear variety, we use nonlinear dimensionality reduction to learn a low-dimensional representation of the union of all manifolds. We show that for a union of separated manifolds, the LLE algorithm computes a matrix whose null space contains vectors giving the segmentation of the data. An analysis of the variance of these vectors allows us to distinguish them from other vectors in the null space. This leads to a new algorithm for clustering both linear and nonlinear manifolds. Although the algorithm is theoretically designed for separated manifolds, our experiments demonstrate its performance on real data where this assumption does not hold. We test our algorithm on the Hopkins 155 motion segmentation database and achieve an average classification error of 4.8%, which compares favorably against state-of-the-art multi-frame motion segmentation methods.
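The null-space claim is easy to check numerically: when clusters are well separated and every point's neighborhood stays inside its own cluster, the reconstruction weights W satisfy (I − W) v = 0 for a cluster indicator vector v. The toy 1-D data and equal nearest-neighbor weights below stand in for real LLE weights:

```python
import numpy as np

# two well-separated 1-D clusters; each point is "reconstructed" from its
# 2 nearest neighbours with equal weights (a crude stand-in for LLE weights)
pts = np.concatenate([np.linspace(0, 1, 10), np.linspace(10, 11, 10)])
n = len(pts)
W = np.zeros((n, n))
for i in range(n):
    d = np.abs(pts - pts[i])
    d[i] = np.inf                      # exclude the point itself
    W[i, np.argsort(d)[:2]] = 0.5      # rows sum to 1

v = np.zeros(n)
v[:10] = 1.0                           # indicator of the first cluster
residual = np.linalg.norm((np.eye(n) - W) @ v)
```

Because no neighborhood crosses the gap between clusters, `residual` is zero, i.e. the indicator vector lies exactly in the null space of I − W, which is what the clustering step of the abstract builds on.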
Ask the locals: multiway local pooling for image recognition
"... Invariant representations in object recognition systems are generally obtained by pooling feature vectors over spatially local neighborhoods. But pooling is not local in the feature vector space, so that widely dissimilar features may be pooled together if they are in nearby locations. Recent approa ..."
Abstract

Cited by 26 (2 self)
Invariant representations in object recognition systems are generally obtained by pooling feature vectors over spatially local neighborhoods. But pooling is not local in the feature vector space, so widely dissimilar features may be pooled together if they occur in nearby locations. Recent approaches rely on sophisticated encoding methods and more specialized codebooks (or dictionaries), e.g., learned on subsets of descriptors which are close in feature space, to circumvent this problem. In this work, we argue that a common trait found in much recent work in image recognition and retrieval is that it leverages locality in feature space on top of purely spatial locality. We propose to apply this idea in its simplest form to an object recognition system based on the spatial pyramid framework, to increase the performance of small dictionaries with very little added engineering. State-of-the-art results on several object recognition benchmarks show the promise of this approach.
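Pooling only features that are close in descriptor space can be sketched as max-pooling per nearest codeword. The random descriptors and codebook below are placeholders (a real system would pool encoded SIFT-like descriptors against a learned dictionary):

```python
import numpy as np

def local_pool(descriptors, codebook):
    """Max-pool descriptors separately per nearest codeword, so only
    features that are nearby in feature space get pooled together."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)            # nearest codeword per descriptor
    pooled = np.zeros_like(codebook)
    for k in range(len(codebook)):
        members = descriptors[assign == k]
        if len(members):
            pooled[k] = members.max(axis=0)
    return pooled.ravel()                 # one concatenated pooled vector

rng = np.random.default_rng(0)
feats = local_pool(rng.standard_normal((200, 8)),   # 200 descriptors
                   rng.standard_normal((16, 8)))    # 16 codewords
```

The output dimension is (number of codewords × descriptor dimension); in a spatial pyramid this pooling would additionally be repeated per spatial cell.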
Supervised Locally Linear Embedding
2003
"... Locally linear embedding (LLE) is a recently proposed method for unsupervised nonlinear dimensionality reduction. It has a number of attractive features: it does not require an iterative algorithm, and just a few parameters need to be set. Two extensions of LLE to supervised feature extraction w ..."
Abstract

Cited by 26 (1 self)
Locally linear embedding (LLE) is a recently proposed method for unsupervised nonlinear dimensionality reduction. It has a number of attractive features: it does not require an iterative algorithm, and only a few parameters need to be set. Two extensions of LLE to supervised feature extraction were independently proposed by the authors of this paper. Here, both methods are unified in a common framework and applied to a number of benchmark data sets. Results show that they perform very well on high-dimensional data that exhibits a manifold structure.
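One common way such supervised LLE variants inject label information (shown here as an illustration, not necessarily the papers' exact formulation) is to inflate the pairwise distances between differently labelled points before the neighborhood search; the `alpha` blending factor is an assumption of this sketch:

```python
import numpy as np

def supervised_distances(X, y, alpha=0.5):
    """Inflate pairwise distances between differently-labelled points before
    the k-NN search: alpha = 0 recovers plain (unsupervised) distances,
    larger alpha pushes the classes further apart."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return D + alpha * D.max() * (y[:, None] != y[None, :])

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0, 0, 1])
D_sup = supervised_distances(X, y)        # 3x3 adjusted distance matrix
```

Same-label distances are untouched, so the local manifold geometry within each class is preserved while neighborhoods are biased toward same-class points.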