Results 1–10 of 32
Learning Euclidean-to-Riemannian Metric for Point-to-Set Classification
Abstract

Cited by 8 (6 self)
In this paper, we focus on the problem of point-to-set classification, where single points are matched against sets of correlated points. Since the points commonly lie in Euclidean space while the sets are typically modeled as elements on a Riemannian manifold, they can be treated as Euclidean points and Riemannian points respectively. To learn a metric between the heterogeneous points, we propose a novel Euclidean-to-Riemannian metric learning framework. Specifically, by exploiting typical Riemannian metrics, the Riemannian manifold is first embedded into a high-dimensional Hilbert space to reduce the gap between the heterogeneous spaces and meanwhile respect the Riemannian geometry of the manifold. The final distance metric is then learned by pursuing multiple transformations from the Hilbert space and the original Euclidean space (or its corresponding Hilbert space) to a common Euclidean subspace, where classical Euclidean distances of transformed heterogeneous points can be measured. Extensive experiments clearly demonstrate the superiority of our proposed approach over the state-of-the-art methods.
From Manifold to Manifold: Geometry-Aware Dimensionality Reduction for SPD Matrices
Abstract

Cited by 8 (1 self)
of any given curve under the geodesic distance $\delta_g$ and the Stein metric $\delta_S$ up to a scale of $2\sqrt{2}$. The proof of this theorem follows several steps. We start with the definition of curve length and intrinsic metric. Without any assumption on differentiability, let $(M, d)$ be a metric space. A curve in $M$ is a continuous function $\gamma: [0, 1] \to M$ and joins the starting point $\gamma(0) = x$ to the end point $\gamma(1) = y$.
Definition 1. The length of a curve $\gamma$ is the supremum of $l(\gamma; \{t_i\})$ over all possible partitions $\{t_i\}$, where $0 = t_0 < t_1 < \cdots < t_{n-1} < t_n = 1$ and $l(\gamma; \{t_i\}) = \sum_i d(\gamma(t_i), \gamma(t_{i-1}))$.
Definition 2. The intrinsic metric $\hat{\delta}(x, y)$ on $M$ is defined as the infimum of the lengths of all paths from $x$ to $y$.
Theorem 1 ([2]). If the intrinsic metrics induced by two metrics $d_1$ and $d_2$ are identical up to a scale $\xi$, then the length of any given curve is the same under both metrics up to $\xi$.
Theorem 2 ([2]). If $d_1(x, y)$ and $d_2(x, y)$ are two metrics defined on a space $M$ such that
$$\lim_{d_1(x,y) \to 0} \frac{d_2(x, y)}{d_1(x, y)} = 1 \quad (1)$$
uniformly (with respect to $x$ and $y$), then their intrinsic metrics are identical. Therefore, here, we need to study the behavior of $\lim_{\delta_S^2(X,Y) \to 0} \frac{\delta_g^2(X, Y)}{\delta_S^2(X, Y)}$ ...
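The partition-based length definition above is straightforward to approximate numerically. The sketch below is a toy illustration, not code from the paper: the quarter-circle curve and the Euclidean metric are my own choices. It computes the chordal sum $l(\gamma; \{t_i\})$ on uniform partitions and shows it converging to the true length from below:

```python
import numpy as np

def curve_length(gamma, d, n):
    """Chordal-sum approximation l(gamma; {t_i}) on a uniform partition of size n."""
    ts = np.linspace(0.0, 1.0, n + 1)
    pts = [gamma(t) for t in ts]
    return sum(d(pts[i], pts[i - 1]) for i in range(1, len(pts)))

# Example: quarter unit circle under the Euclidean metric; its true length is pi/2.
gamma = lambda t: np.array([np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)])
d = lambda x, y: np.linalg.norm(x - y)

for n in (4, 64, 1024):
    print(n, curve_length(gamma, d, n))  # increases toward pi/2 ≈ 1.5708 from below
```

By the triangle inequality, refining the partition can only increase the chordal sum, which is why the supremum in Definition 1 is well defined.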
Covariance Descriptors for 3D Shape Matching and Retrieval
Abstract

Cited by 4 (0 self)
Several descriptors have been proposed in the past for 3D shape analysis, yet none of them achieves best performance on all shape classes. In this paper we propose a novel method for 3D shape analysis using the covariance matrices of the descriptors rather than the descriptors themselves. Covariance matrices enable efficient fusion of different types of features and modalities. They capture, using the same representation, not only the geometric and the spatial properties of a shape region but also the correlation of these properties within the region. Covariance matrices, however, lie on the manifold of Symmetric Positive Definite (SPD) tensors, a special type of Riemannian manifold, which makes comparison and clustering of such matrices challenging. In this paper we study covariance matrices in their native space and make use of geodesic distances on the manifold as a dissimilarity measure. We demonstrate the performance of this metric on 3D face matching and recognition tasks. We then generalize the Bag of Features paradigm, originally designed in Euclidean spaces, to the Riemannian manifold of SPD matrices. We propose a new clustering procedure that takes into account the geometry of the Riemannian manifold. We evaluate the performance of the proposed Bag of Covariance Matrices framework on 3D shape matching and retrieval applications and demonstrate its superiority compared to descriptor-based techniques.
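For context, the two ingredients this abstract combines can be sketched in a few lines: a covariance descriptor built from per-point features, and a geodesic distance on the SPD manifold as the dissimilarity measure. The affine-invariant distance $\delta_g(X, Y) = \|\log(X^{-1/2} Y X^{-1/2})\|_F$ used below is one standard choice (an assumption on my part, not necessarily the paper's exact metric), and the random feature matrices stand in for real shape features:

```python
import numpy as np

def covariance_descriptor(F, eps=1e-6):
    """F: (n_points, n_features). Returns a regularized SPD covariance matrix."""
    C = np.cov(F, rowvar=False)
    return C + eps * np.eye(C.shape[0])  # regularize to keep it positive definite

def inv_sqrt(X):
    """X^{-1/2} for an SPD matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def spd_log(X):
    """Matrix logarithm of an SPD matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.log(w)) @ V.T

def geodesic_distance(X, Y):
    """Affine-invariant Riemannian distance ||log(X^{-1/2} Y X^{-1/2})||_F."""
    S = inv_sqrt(X)
    M = S @ Y @ S
    M = (M + M.T) / 2  # symmetrize against round-off
    return np.linalg.norm(spd_log(M), "fro")

rng = np.random.default_rng(0)
X = covariance_descriptor(rng.normal(size=(200, 5)))
Y = covariance_descriptor(rng.normal(size=(200, 5)))
print(geodesic_distance(X, X))  # ~0: a descriptor is at distance 0 from itself
print(geodesic_distance(X, Y))  # > 0 for distinct descriptors
```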
Riemannian Sparse Coding for Positive Definite Matrices
, 2014
Abstract

Cited by 4 (1 self)
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
A Framework for Shape Analysis via Hilbert Space Embedding
 In ICCV
, 2013
Abstract

Cited by 4 (2 self)
We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall’s shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall’s shape manifold to a high-dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernel-based algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the state-of-the-art methods on shape classification, clustering and retrieval.
Log-Euclidean kernels for sparse representation and dictionary learning
 In Proc. Int. Conference on Computer Vision (ICCV)
, 2013
Abstract

Cited by 3 (0 self)
Symmetric positive definite (SPD) matrices have been widely used in image and vision problems. Recently there has been growing interest in studying sparse representation (SR) of SPD matrices, motivated by the great success of SR for vector data. Though the space of SPD matrices is well known to form a Lie group that is a Riemannian manifold, existing work fails to take full advantage of its geometric structure. This paper attempts to tackle this problem by proposing a kernel-based method for SR and dictionary learning (DL) of SPD matrices. We disclose that the space of SPD matrices, with the operations of logarithmic multiplication and scalar logarithmic multiplication defined in the Log-Euclidean framework, is a complete inner product space. We can thus develop a broad family of kernels that satisfy Mercer’s condition. These kernels characterize the geodesic distance and can be computed efficiently. We also consider the geometric structure in the DL process by updating atom matrices in the Riemannian space instead of in the Euclidean space. The proposed method is evaluated on various vision problems and shows notable performance gains over state-of-the-art methods.
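As a rough illustration of the kernel family described here (the Gaussian form and the bandwidth below are my assumptions, not necessarily the paper's exact kernels): represent each SPD matrix by its matrix logarithm and build a Gaussian kernel on the Frobenius distance between logarithms. Because the log map sends SPD matrices into a Euclidean space of symmetric matrices, the resulting Gram matrix is positive semidefinite, as Mercer's condition requires:

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_kernel(X, Y, sigma=1.0):
    """Gaussian kernel on the Log-Euclidean distance ||log X - log Y||_F."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y), "fro")
    return np.exp(-d**2 / (2 * sigma**2))

def random_spd(d, rng):
    """Random well-conditioned SPD matrix for testing."""
    A = rng.normal(size=(d, d))
    return A @ A.T + d * np.eye(d)

rng = np.random.default_rng(1)
mats = [random_spd(4, rng) for _ in range(6)]
K = np.array([[log_euclidean_kernel(X, Y) for Y in mats] for X in mats])

# A Mercer kernel must yield a positive semidefinite Gram matrix:
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # True
```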
Bregman Divergences for Infinite Dimensional Covariance Matrices
Abstract

Cited by 3 (1 self)
We introduce an approach to computing and comparing Covariance Descriptors (CovDs) in infinite-dimensional spaces. CovDs have become increasingly popular to address classification problems in computer vision. While CovDs offer some robustness to measurement variations, they also throw away part of the information contained in the original data by only retaining the second-order statistics over the measurements. Here, we propose to overcome this limitation by first mapping the original data to a high-dimensional Hilbert space, and only then computing the CovDs. We show that several Bregman divergences can be computed between the resulting CovDs in Hilbert space via the use of kernels. We then exploit these divergences for classification purposes. Our experiments demonstrate the benefits of our approach on several tasks, such as material and texture recognition, person re-identification, and action recognition from motion capture data.
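One concrete finite-dimensional member of the Bregman family alluded to above is the symmetric Stein (Jensen-Bregman LogDet) divergence. The abstract's contribution is evaluating such divergences on infinite-dimensional CovDs via kernels; the sketch below only shows the finite-dimensional formula on toy matrices of my own construction:

```python
import numpy as np

def logdet(X):
    """log det of a positive definite matrix, computed stably."""
    sign, val = np.linalg.slogdet(X)
    assert sign > 0, "input must be positive definite"
    return val

def stein_divergence(X, Y):
    """J(X, Y) = log det((X + Y)/2) - (1/2) log det(X Y)."""
    return logdet((X + Y) / 2) - 0.5 * (logdet(X) + logdet(Y))

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)); X = A @ A.T + np.eye(4)
B = rng.normal(size=(4, 4)); Y = B @ B.T + np.eye(4)

print(stein_divergence(X, X))       # 0 for identical inputs
print(stein_divergence(X, Y) >= 0)  # True: the divergence is nonnegative
print(abs(stein_divergence(X, Y) - stein_divergence(Y, X)) < 1e-12)  # symmetric
```

Nonnegativity follows from the concavity of log det; symmetry is what distinguishes this divergence from the one-sided Bregman forms.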
Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences
Abstract

Cited by 2 (2 self)
This paper introduces sparse coding and dictionary learning for Symmetric Positive Definite (SPD) matrices, which are often used in machine learning, computer vision and related areas. Unlike traditional sparse coding schemes that work in vector spaces, in this paper we discuss how SPD matrices can be described by a sparse combination of dictionary atoms, where the atoms are also SPD matrices. We propose to seek sparse coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing sparse coding, but also to an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification and texture categorization.
Index Terms—Riemannian geometry, Bregman divergences, kernel methods, sparse coding, dictionary learning.
Hybrid Euclidean-and-Riemannian Metric Learning for Image Set Classification
Abstract

Cited by 2 (1 self)
We propose a novel hybrid metric learning approach to combine multiple heterogeneous statistics for robust image set classification. Specifically, we represent each set with multiple statistics – mean, covariance matrix and Gaussian distribution – which generally complement each other for set modeling. However, it is not trivial to fuse them, since the d-dimensional mean vector often lies in the Euclidean space R^d, whereas the covariance matrix typically resides on the Riemannian manifold Sym^+_d. Besides, according to information geometry, the space of Gaussian distributions can be embedded into another Riemannian manifold, Sym^+_{d+1}. To fuse these statistics from heterogeneous spaces, we propose a Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to exploit both Euclidean and Riemannian metrics for embedding their original spaces into high-dimensional Hilbert spaces and then jointly learn hybrid metrics with a discriminant constraint. The proposed method is evaluated on two tasks: set-based object categorization and video-based face recognition. Extensive experimental results demonstrate that our method has a clear superiority over the state-of-the-art methods.
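The embedding of Gaussians into Sym^+_{d+1} mentioned above can be sketched concretely. The specific matrix form below is one common information-geometric construction and is an assumption on my part, not a quote from the paper:

```python
import numpy as np

# One common embedding of N(mu, Sigma) on R^d into Sym^+_{d+1} (assumed form):
#
#     P = [[Sigma + mu mu^T, mu],
#          [mu^T,            1 ]]
#
# The Schur complement of the bottom-right entry is Sigma itself, so P is SPD
# exactly when Sigma is.

def gaussian_to_spd(mu, Sigma):
    d = mu.shape[0]
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = Sigma + np.outer(mu, mu)
    P[:d, d] = mu
    P[d, :d] = mu
    P[d, d] = 1.0
    return P

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
P = gaussian_to_spd(mu, Sigma)

# P is symmetric positive definite whenever Sigma is:
print(np.allclose(P, P.T), np.linalg.eigvalsh(P).min() > 0)
```

Once every statistic (mean, covariance, embedded Gaussian) lives on a Euclidean space or an SPD manifold, a single metric-learning objective can act on all of them, which is the fusion idea the abstract describes.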
Domain Adaptation on the Statistical Manifold
Abstract

Cited by 2 (0 self)
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
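For background on the baseline this abstract compares against, a minimal (biased) empirical estimate of MMD^2 with an RBF kernel can be sketched as follows; the bandwidth and the toy data are illustrative choices, not the paper's setup:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of MMD^2 = E k(x,x') + E k(y,y') - 2 E k(x,y)."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y_same = rng.normal(0.0, 1.0, size=(500, 2))
Y_shift = rng.normal(2.0, 1.0, size=(500, 2))

print(mmd2(X, Y_same))   # small: same underlying distribution
print(mmd2(X, Y_shift))  # clearly larger: shifted distribution
```

The paper's point is that MMD treats distributions through kernel mean embeddings alone, whereas a distance defined on the statistical manifold respects the geometry of the space of distributions.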