Results 1–10 of 17
Kernel discriminant analysis for positive definite and indefinite kernels
, 2008
Cited by 24 (1 self)
Abstract—Kernel methods are a class of well-established and successful algorithms for pattern analysis, owing to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel tool that is still missing, namely the kernel quadratic discriminant (KQD). We discuss different formulations of KQD based on the regularized kernel Mahalanobis distance in both complete and class-related subspaces. Second, we propose suitable extensions of kernel linear and quadratic discriminants to indefinite kernels. We provide classifiers that are applicable to kernels defined by any symmetric similarity measure. This is important in practice because problem-suited proximity measures often violate the requirement of positive definiteness. As in the traditional case, KQD can be advantageous for data with unequal class spreads in the kernel-induced spaces, which cannot be well separated by a linear discriminant. We illustrate this on artificial and real data for both positive definite and indefinite kernels.
Index Terms—Machine learning, pattern recognition, kernel methods, indefinite kernels, discriminant analysis.
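As a rough illustration of the regularized kernel Mahalanobis distance underlying KQD, the sketch below computes it spectrally from the centered class Gram matrix. The RBF kernel, the single regularizer `reg`, and the split into in-span and orthogonal components are assumptions of this sketch, not the paper's exact complete- and class-related-subspace formulations.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Gaussian RBF kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_mahalanobis(X_class, x, gamma=0.5, reg=1e-2, tol=1e-6):
    """Regularized squared kernel Mahalanobis distance of point x to one
    class, via the eigendecomposition of the centered class Gram matrix."""
    n = len(X_class)
    K = rbf(X_class, X_class, gamma)
    kx = rbf(X_class, x[None, :], gamma)[:, 0]
    J = np.eye(n) - 1.0 / n                       # centering matrix
    Kc = J @ K @ J                                # centered Gram matrix
    kc = J @ (kx - K.mean(axis=1))                # centered test column
    lam, V = np.linalg.eigh(Kc)
    keep = lam > tol
    lam, V = lam[keep], V[:, keep]
    proj = (V.T @ kc) / np.sqrt(lam)              # coords along kernel PCs
    selfc = rbf(x[None, :], x[None, :])[0, 0] - 2 * kx.mean() + K.mean()
    orth = max(float(selfc - (proj ** 2).sum()), 0.0)
    return float((proj ** 2 / (lam / n + reg)).sum() + orth / reg)
```

A KQD-style decision then assigns x to the class with the smallest distance; the paper additionally discusses log-determinant-like terms and the indefinite-kernel case.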
Rotation Invariant Kernels and Their Application to Shape Analysis
Cited by 16 (4 self)
Shape analysis requires invariance under translation, scale, and rotation. Translation and scale invariance can be realized by normalizing shape vectors with respect to their mean and norm, which maps the shape feature vectors onto the surface of a hypersphere. After normalization, the shape vectors can be made rotation invariant by modelling the resulting data with rotation-invariant distributions defined on the complex hypersphere, e.g., the complex Bingham distribution. However, the use of these distributions is hampered by the difficulty of estimating their parameters and by the nonlinear nature of their formulation. In the present paper, we show how a set of kernel functions, which we refer to as rotation invariant kernels, can be used to convert the original nonlinear problem into a linear one. As their name implies, these kernels are defined to provide the much-needed rotation invariance property, allowing one to bypass the difficulty of working with complex spherical distributions. The resulting approach provides an easy, fast mechanism for 2D and 3D shape analysis. Extensive validation on a variety of shape modelling and classification problems demonstrates the accuracy of the proposed approach.
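The normalization step and one classic rotation-invariant kernel on the resulting complex pre-shapes can be sketched as follows; k(z, w) = |⟨z, w⟩|² is a standard construction of this type, whereas the paper develops a whole family of such kernels.

```python
import numpy as np

def preshape(z):
    """Normalize a complex landmark vector: remove translation (subtract
    the mean) and scale (divide by the norm)."""
    z = z - z.mean()
    return z / np.linalg.norm(z)

def rotation_invariant_kernel(z, w):
    """k(z, w) = |<z, w>|^2: multiplying either pre-shape by exp(i*theta),
    i.e. rotating the planar shape, leaves the value unchanged."""
    return np.abs(np.vdot(z, w)) ** 2
```

Since a planar rotation acts on the complex shape vector as multiplication by a unit complex number, the modulus of the inner product is unaffected, which is exactly the invariance the distributions above are otherwise needed for.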
Indefinite Kernel Fisher Discriminant
Cited by 8 (1 self)
Indefinite kernels arise in practice, e.g., from problem-specific kernel construction. It is therefore necessary to understand the behavior and suitability of classifiers in the corresponding indefinite inner product spaces. In this paper we address the Indefinite Kernel Fisher Discriminant (IKFD). First, we give a geometric interpretation of the Fisher Discriminant in indefinite inner product spaces. We show that IKFD is closely related to the well-known formulation of the traditional Kernel Fisher Discriminant derived for positive definite kernels. A practical implication is that IKFD can be applied directly to indefinite kernels without manipulating the kernel matrix. Experiments demonstrate the geometrically intuitive classification and enable comparisons with other indefinite kernel classifiers.
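The "apply directly" point can be illustrated with the standard kernelized Fisher solution, which is purely algebraic in the kernel matrix. This is a sketch: the solve alpha = (N + reg·I)⁻¹(m₁ − m₀) is the familiar positive-definite formulation, written so that it accepts any symmetric K; the paper supplies the indefinite-inner-product geometry behind this.

```python
import numpy as np

def kfd_train(K, y, reg=1e-3):
    """Kernel Fisher Discriminant computed directly from a symmetric
    kernel matrix K and labels y in {0, 1}."""
    n = len(y)
    m, N = [], np.zeros((n, n))
    for c in (0, 1):
        idx = np.flatnonzero(y == c)
        Kc = K[:, idx]
        m.append(Kc.mean(axis=1))                # class mean kernel column
        Jc = np.eye(len(idx)) - 1.0 / len(idx)   # per-class centering
        N += Kc @ Jc @ Kc.T                      # within-class scatter
    alpha = np.linalg.solve(N + reg * np.eye(n), m[1] - m[0])
    b = -0.5 * float(alpha @ (m[0] + m[1]))      # midpoint threshold
    return alpha, b

def kfd_predict(K_test, alpha, b):
    # K_test: kernel values between test points (rows) and training points
    return (K_test @ alpha + b > 0).astype(int)
```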
On refining dissimilarity matrices for an improved NN learning
 In 19th International Conference on Pattern Recognition (ICPR 2008)
, 2008
Cited by 6 (3 self)
Application-specific dissimilarity functions can be used for learning from a set of objects represented by pairwise dissimilarity matrices. These dissimilarities may, however, suffer from various defects, e.g., when derived from a suboptimal optimization or from the use of non-metric or noisy measures. In this paper, we study procedures for refining such dissimilarities. These methods work in a representation space: either a dissimilarity space or a pseudo-Euclidean embedded space. In a series of experiments we show that such refinement may significantly improve nearest neighbor classification of dissimilarity measurements.
A Framework for Shape Analysis via Hilbert Space Embedding
 In ICCV
, 2013
Cited by 4 (2 self)
We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall’s shape manifold. Different representations of 2D shapes are known to generate different nonlinear spaces. Due to the nonlinearity of these spaces, most existing shape classification algorithms resort to nearest neighbor methods and to learning distances on shape spaces. Here, we propose to map shapes on Kendall’s shape manifold to a high dimensional Hilbert space where Euclidean geometry applies. To this end, we introduce a kernel on this manifold that permits such a mapping, and prove its positive definiteness. This kernel lets us extend kernelbased algorithms developed for Euclidean spaces, such as SVM, MKL and kernel PCA, to the shape manifold. We demonstrate the benefits of our approach over the stateoftheart methods on shape classification, clustering and retrieval. 1.
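A kernel of the general form the abstract describes is a Gaussian of the full Procrustes distance between pre-shapes, d² = 1 − |⟨z, w⟩|². The sketch below uses that form; the bandwidth `sigma` and the exact distance variant are assumptions of this sketch rather than a restatement of the paper's proof.

```python
import numpy as np

def preshape(z):
    # translation- and scale-normalized complex landmark vector
    z = z - z.mean()
    return z / np.linalg.norm(z)

def shape_manifold_kernel(z, w, sigma=1.0):
    """Gaussian kernel built from the full Procrustes distance between
    pre-shapes, d^2 = 1 - |<z, w>|^2; rotation invariance follows because
    |<z, w>| is unchanged when either shape is rotated."""
    d2 = 1.0 - np.abs(np.vdot(z, w)) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

With such a kernel in hand, Euclidean kernel machines (SVM, kernel PCA, MKL) can be run on shapes exactly as on vectorial data.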
Learning Equivariant Structured Output SVM Regressors
Cited by 3 (1 self)
Equivariance and invariance are often desired properties of a computer vision system. However, currently available strategies generally rely on virtual sampling, which leaves open the question of how many samples are necessary; on invariant feature representations, which can mistakenly discard information relevant to the vision task; or on latent variable models, which result in non-convex training and expensive inference at test time. We propose a generalization of structured output SVM regressors that can incorporate equivariance and invariance into a convex training procedure, enabling the use of large families of transformations while maintaining optimality and tractability. Importantly, test-time inference does not require the estimation of latent variables, resulting in highly efficient objective functions. This yields a natural formulation for treating equivariance and invariance that is easily implemented as an adaptation of off-the-shelf optimization software, obviating the need for ad hoc sampling strategies. Theoretical results relating to vicinal risk, and experiments on challenging aerial car and pedestrian detection tasks, show the effectiveness of the proposed solution.
The dissimilarity representation for structural pattern recognition
 In CIARP, LNCS volume 7042
, 2011
Cited by 2 (1 self)
Abstract. The patterns in collections of real-world objects are often not based on a limited set of isolated properties such as features. Instead, the totality of their appearance constitutes the basis of human pattern recognition. Structural pattern recognition aims to find explicit procedures that mimic the learning and classification performed by human experts in well-defined and restricted areas of application. This is often done by defining dissimilarity measures between objects and measuring them between training examples and new objects to be recognized. The dissimilarity representation offers the possibility to apply the tools developed in machine learning and statistical pattern recognition to learn from structural object representations such as graphs and strings. These procedures are also applicable to the recognition of histograms, spectra, images, and time sequences, taking into account the connectivity of samples (bins, wavelengths, pixels, or time samples). The topic of dissimilarity representations is related to the field of non-Mercer kernels in machine learning, but it covers a wider set of classifiers and applications. Recently much progress has been made in this area, and many interesting applications have been studied in medical diagnosis, seismic and hyperspectral imaging, chemometrics, and computer vision. This review paper offers an introduction to the field and presents a number of real-world applications.
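The standard pseudo-Euclidean embedding used with dissimilarity representations can be sketched as classical scaling with both signs of the spectrum retained; the tolerance `tol` is an assumption of this sketch.

```python
import numpy as np

def pseudo_euclidean_embedding(D, tol=1e-9):
    """Embed a symmetric dissimilarity matrix D into a pseudo-Euclidean
    space: double-center the squared dissimilarities, eigendecompose, and
    keep coordinates for eigenvalues of both signs (the negative ones
    carry the non-Euclidean part). Returns coordinates X and the
    signature (p, q)."""
    n = len(D)
    J = np.eye(n) - 1.0 / n                      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # classical-scaling Gram matrix
    lam, V = np.linalg.eigh(B)
    keep = np.abs(lam) > tol
    lam, V = lam[keep], V[:, keep]
    X = V * np.sqrt(np.abs(lam))                 # pseudo-Euclidean coordinates
    return X, (int((lam > 0).sum()), int((lam < 0).sum()))
```

For a Euclidean D the signature is (p, 0) and the embedding reproduces the original distances; non-metric measures show up as a nonzero q.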
The path kernel
 In Proc. International Conference on Pattern Recognition Applications and Methods
, 2013
Cited by 2 (0 self)
Abstract: Kernel methods have been used very successfully to classify data in various application domains. Traditionally, kernels have been constructed mainly for vectorial data defined on a specific vector space. Much less work has addressed the development of kernel functions for non-vectorial data. In this paper, we present a new kernel for encoding sequential data. We present results comparing the proposed kernel to the state of the art, showing a significant improvement in classification as well as much improved robustness and interpretability.
Dimension and Margin Bounds for Reflection-invariant Kernels
Cited by 1 (1 self)
A kernel over the Boolean domain is said to be reflection-invariant if its value does not change when we flip the same bit in both arguments. (Many popular kernels have this property.) We study the geometric margins that can be achieved when we represent a specific Boolean function f by a classifier that employs a reflection-invariant kernel. It turns out that ‖f̂‖∞ is an upper bound on the average margin. Furthermore, ‖f̂‖∞^-1 is a lower bound on the smallest dimension of a feature space associated with a reflection-invariant kernel that allows a correct representation of f. This is, to the best of our knowledge, the first paper that exhibits margin and dimension bounds for specific functions (as opposed to function families). Several generalizations are considered as well. The main mathematical results are presented in a setting with arbitrary finite domains and a quite general notion of invariance.
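The two ingredients above are easy to make concrete: an empirical check of the reflection-invariance condition, and the Fourier ∞-norm ‖f̂‖∞ that bounds the average margin. Both functions below are illustrative helpers of this sketch (brute force over the Boolean cube, so only small n), not constructions from the paper.

```python
import numpy as np
from itertools import product

def flip(x, i):
    # flip bit i of a +/-1 vector
    y = x.copy()
    y[i] = -y[i]
    return y

def is_reflection_invariant(k, n, trials=200, seed=0):
    """Empirically check the invariance condition: flipping the same bit
    in both arguments leaves the kernel value unchanged."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.choice([-1, 1], n), rng.choice([-1, 1], n)
        i = rng.integers(n)
        if not np.isclose(k(x, y), k(flip(x, i), flip(y, i))):
            return False
    return True

def fourier_inf_norm(f, n):
    """max_S |f_hat(S)| with f_hat(S) = E_x[f(x) * prod_{i in S} x_i],
    the quantity that upper-bounds the achievable average margin."""
    cube = [np.array(x) for x in product([-1, 1], repeat=n)]
    best = 0.0
    for S in product([0, 1], repeat=n):
        mask = np.array(S, bool)
        coef = np.mean([f(x) * np.prod(x[mask]) for x in cube])
        best = max(best, abs(coef))
    return best
```

For example, any kernel depending only on the componentwise products x_i·y_i, such as (x·y)², is reflection-invariant, and for the parity function ‖f̂‖∞ = 1.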
SVM in Kreĭn spaces
, 2013
Support vector machines (SVM) and kernel methods have been highly successful in many application areas. However, the requirement that the kernel be symmetric positive semidefinite (Mercer’s condition) is not always satisfied in practice. When it is not, the kernel is called indefinite. Various heuristics and specialized methods have been proposed to address indefinite kernels, from simple tricks such as removing negative eigenvalues to advanced methods that denoise the kernel by treating its negative part as noise. Most approaches aim at correcting an indefinite kernel in order to obtain a positive definite one. We propose a new SVM approach that deals directly with indefinite kernels. In contrast to previous approaches, we embrace the underlying idea that the negative part of an indefinite kernel may contain valuable information. To define such a method, the SVM formulation has to be adapted to an unusual form: stabilization. The hypothesis space, usually a Hilbert space, becomes a Kreĭn space. This work explores this new formulation and proposes two practical algorithms (ESVM and KSVM) that outperform approaches that modify the kernel. Moreover, the solution depends on the original kernel and can thus be used on any new point without loss of accuracy.
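For contrast, the "simple tricks" the abstract argues against are easy to state: modify the spectrum of the kernel matrix so it becomes positive semidefinite. The sketch below shows the two classic variants; both discard or distort the negative part that the Kreĭn-space approach instead retains.

```python
import numpy as np

def make_psd(K, mode="clip"):
    """Spectrum-modification heuristics for an indefinite kernel matrix:
    'clip' zeroes negative eigenvalues, 'flip' takes their absolute
    value. Returns a positive semidefinite matrix."""
    lam, V = np.linalg.eigh((K + K.T) / 2)       # symmetrize, eigendecompose
    if mode == "clip":
        lam = np.maximum(lam, 0)
    elif mode == "flip":
        lam = np.abs(lam)
    return (V * lam) @ V.T
```

Either output can be fed to a standard SVM, at the cost of training on a kernel that no longer matches the similarity measure actually used at test time, which is precisely the drawback ESVM/KSVM avoid.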