Results 11–20 of 487
Quick shift and kernel methods for mode seeking
In European Conference on Computer Vision, volume IV, 2008
Cited by 96 (6 self)
We show that the complexity of the recently introduced medoid shift algorithm in clustering N points is O(N^2), with a small constant, if the underlying distance is Euclidean. This makes medoid shift considerably faster than mean shift, contrary to what was previously believed. We then exploit kernel methods to extend both mean shift and the improved medoid shift to a large family of distances, with complexity bounded by the effective rank of the resulting kernel matrix, and with explicit regularization constraints. Finally, we show that, under certain conditions, medoid shift fails to cluster data points belonging to the same mode, resulting in over-fragmentation. We propose remedies for this problem by introducing a novel, simple and extremely efficient clustering algorithm, called quick shift, that explicitly trades off under- and over-fragmentation. Like medoid shift, quick shift operates in non-Euclidean spaces in a straightforward manner. We also show that the accelerated medoid shift can be used to initialize mean shift for increased efficiency. We apply our algorithms to clustering data on manifolds, image segmentation, and the automatic discovery of visual categories.
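The quick shift idea described in this abstract is simple enough to sketch directly: each point is linked to its nearest neighbor of higher estimated density, and links longer than a threshold are cut, which is the explicit under- versus over-fragmentation trade-off. A minimal NumPy sketch of this scheme (our own illustration, not the authors' code; the Gaussian bandwidth `sigma` and link threshold `tau` are illustrative parameters):

```python
import numpy as np

def quick_shift(X, sigma=1.0, tau=3.0):
    """Quick shift sketch: link each point to its nearest higher-density neighbor."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    density = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)  # Parzen density estimate
    parent = np.arange(n)                                 # each point starts as its own root
    for i in range(n):
        higher = np.where(density > density[i])[0]        # candidates with higher density
        if higher.size:
            j = higher[np.argmin(d2[i, higher])]          # nearest such candidate
            if d2[i, j] <= tau ** 2:                      # tau controls fragmentation
                parent[i] = j
    return parent
```

Points whose parent is themselves are mode roots; following parent links partitions the data into clusters, one per root.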
Kernel dependency estimation
In Advances in NIPS 15, 2003
Cited by 85 (13 self)
We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be, for example, vectors, images, strings, trees or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.
Learning distance metrics with contextual constraints for image retrieval
In Proc. CVPR, 2006
Cited by 79 (21 self)
Relevant Component Analysis (RCA) has been proposed for learning distance metrics with contextual constraints for image retrieval. However, RCA has two important disadvantages. One is that it fails to exploit negative constraints, which can also be informative; the other is its inability to capture complex nonlinear relationships between data instances given the contextual information. In this paper, we propose two algorithms to overcome these disadvantages: Discriminative Component Analysis (DCA) and Kernel DCA. Compared with other, more complicated methods for distance metric learning, our algorithms are simple to understand and easy to solve. We evaluate the performance of our algorithms on image retrieval; experimental results show that they are effective and promising in learning good-quality distance metrics for image retrieval.
Locally Linear Discriminant Analysis for Multimodally Distributed Classes for Face Recognition with a Single Model Image
IEEE Trans. Pattern Analysis and Machine Intelligence, 2005
Cited by 76 (5 self)
We present a novel method of nonlinear discriminant analysis involving a set of locally linear transformations called Locally Linear Discriminant Analysis (LLDA). The underlying idea is that global nonlinear data structures are locally linear, and local structures can be linearly aligned. Input vectors are projected into each local feature space by linear transformations found to yield locally linearly transformed classes that maximize the between-class covariance while minimizing the within-class covariance. In face recognition, linear discriminant analysis (LDA) has been widely adopted owing to its efficiency, but it does not capture nonlinear manifolds of faces which exhibit pose variations. Conventional nonlinear classification methods based on kernels, such as generalized discriminant analysis (GDA) and the support vector machine (SVM), have been developed to overcome the shortcomings of the linear method, but they suffer from the high computational cost of classification and from overfitting. Our method performs multiclass nonlinear discrimination and is computationally highly efficient compared to GDA. The method does not suffer from overfitting by virtue of the linear base structure of the solution. A novel gradient-based learning algorithm is proposed for finding the optimal set of local linear bases. The optimization does not exhibit a local-maxima problem. The transformation functions facilitate robust face recognition in a low-dimensional subspace, under pose variations, using a single model image. Classification results are given for both synthetic and real face data. Index Terms: linear discriminant analysis, generalized discriminant analysis, support vector machine, dimensionality reduction, face recognition, feature extraction, pose invariance, subspace representation.
Constructing Descriptive and Discriminative Nonlinear Features: Rayleigh Coefficients in Kernel Feature Spaces
2003
Cited by 74 (5 self)
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
A Kernel Method For Canonical Correlation Analysis
In Proceedings of the International Meeting of the Psychometric Society (IMPS 2001), 2001
Cited by 71 (0 self)
... introduce a quadratic regularization term λ(||a||² + ||b||²)/2 into the cost function of CCA. With this quadratic regularization, it follows that a can be written as a weighted sum of φ_x(x_i), where x_i is the i-th sample, and b as a weighted sum of φ_y(y_i). Therefore, aᵀφ_x(x) = Σ_i α_i φ_x(x_i)ᵀφ_x(x). This fact enables us to use the "kernel trick": let k(z, w) be a symmetric, positive-definite kernel function; then there exists a map φ_z such that k(z, w) = φ_z(z)ᵀφ_z(w). Using a kernel, we can calculate φ_x(x_i)ᵀφ_x(x) directly without knowing φ. This also solves the computational complexity problem, because we no longer need to compute φ. Consequently, we obtain KCCA: 1. Calculate the kernel matrices K_x = (k_x(x_i, x_j)) and K_y = (k_y(y_i, y_j)), where k_x and k_y are kernels. 2. Solve the generalized eigenproblems Mβ = λLα and Mᵀα = λNβ, where M = (1/n)K_x J K_y, L = (1/n)K_x J K_x + (κ/n)K_x, N = (1/n)K_y ...
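The two-step KCCA recipe in this abstract can be sketched in a few lines of NumPy. This is our own illustrative reduction of the coupled generalized eigenproblem to a single one (eliminate the second coefficient vector, then solve for the first); the linear kernels, the regularization weight `kappa`, and the small `jitter` ridge are assumptions made for the sketch, not the paper's exact formulation:

```python
import numpy as np

def kcca_top_correlation(X, Y, kappa=0.01, jitter=1e-8):
    """Largest canonical correlation found by a regularized kernel CCA sketch."""
    n = X.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kx, Ky = X @ X.T, Y @ Y.T                    # linear kernels for illustration
    M = Kx @ J @ Ky / n
    L = Kx @ J @ Kx / n + kappa * Kx + jitter * np.eye(n)
    N = Ky @ J @ Ky / n + kappa * Ky + jitter * np.eye(n)
    # Eliminating beta from M beta = rho L alpha and M^T alpha = rho N beta
    # gives M N^{-1} M^T alpha = rho^2 L alpha.
    A = np.linalg.solve(L, M @ np.linalg.solve(N, M.T))
    rho2 = np.max(np.linalg.eigvals(A).real)     # largest squared correlation
    return np.sqrt(max(rho2, 0.0))
```

With identical inputs on both sides and small regularization, the top canonical correlation should come out close to 1, which is a quick sanity check for the reduction.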
A Mathematical Programming Approach to the Kernel Fisher Algorithm
2001
Cited by 70 (14 self)
We investigate a new kernel-based classifier: the Kernel Fisher Discriminant (KFD). A mathematical programming formulation based on the observation that KFD maximizes the average margin permits an interesting modification of the original KFD algorithm, yielding the sparse KFD. We find that both KFD and the proposed sparse KFD can be understood in a unifying probabilistic context. Furthermore, we show connections to Support Vector Machines and Relevance Vector Machines. From this understanding, we are able to outline an interesting kernel-regression technique based upon the KFD algorithm. Simulations support the usefulness of our approach.
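For intuition, the plain (non-sparse) kernel Fisher discriminant behind this work has a closed form: the expansion coefficients are alpha = (N + eps*I)^(-1)(m1 - m0), where m_c are the class kernel means and N is the within-class kernel scatter. A minimal two-class sketch; the RBF kernel, its width `gamma`, and the ridge `eps` are choices we make for illustration:

```python
import numpy as np

def kfd_direction(X, y, gamma=0.5, eps=1e-3):
    """Expansion coefficients alpha of a two-class kernel Fisher discriminant."""
    n = X.shape[0]
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                          # RBF kernel matrix
    m, N = [], np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]                            # n x n_c block for class c
        nc = Kc.shape[1]
        m.append(Kc.mean(axis=1))                    # class kernel mean
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T  # within-class scatter
    alpha = np.linalg.solve(N + eps * np.eye(n), m[1] - m[0])
    return alpha, K
```

Projecting with t = K @ alpha places the class-1 mean above the class-0 mean by construction, since the mean gap equals a positive-definite quadratic form.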
Face Recognition Using Kernel Eigenfaces
2000
Cited by 60 (0 self)
Eigenface or Principal Component Analysis (PCA) methods have demonstrated their success in face recognition, detection, and tracking. The representation in PCA is based on the second-order statistics of the image set and does not address higher-order statistical dependencies, such as the relationships among three or more pixels. Recently, Higher Order Statistics (HOS) have been used as a more informative low-dimensional representation than PCA for face and vehicle detection. In this paper we investigate a generalization of PCA, Kernel Principal Component Analysis (Kernel PCA), for learning low-dimensional representations in the context of face recognition. In contrast to HOS, Kernel PCA computes the higher-order statistics without the combinatorial explosion of time and memory complexity. While PCA aims to find a second-order correlation of patterns, Kernel PCA provides a replacement which takes into account higher-order correlations. We compare the recognition results using kernel met...
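As a reference point for what Kernel PCA computes, here is a compact sketch: center the kernel matrix in feature space, take its leading eigenvectors, and normalize them to get projection coefficients. The RBF kernel and its width `gamma` are our illustrative choices, not this paper's experimental setup:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Project training points onto their top kernel principal components."""
    n = X.shape[0]
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                              # RBF kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                       # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]          # largest eigenvalues first
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))  # normalize
    return Kc @ alphas                                   # training-set projections
```

By construction the first projected coordinate carries at least as much variance as the second, mirroring ordinary PCA.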
Iterative kernel principal component analysis for image modeling
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Cited by 57 (3 self)
In recent years, Kernel Principal Component Analysis (KPCA) has been suggested for various image processing tasks requiring an image model, such as denoising or compression. The original form of KPCA, however, can only be applied to strongly restricted image classes due to the limited number of training examples that can be processed. We therefore propose a new iterative method for performing KPCA, the Kernel Hebbian Algorithm, which iteratively estimates the kernel principal components with only linear-order memory complexity. In our experiments, we compute models for complex image classes such as faces and natural images which require a large number of training examples. The resulting image models are tested in single-frame super-resolution and denoising applications. The KPCA model is not specifically tailored to these tasks; in fact, the same model can be used in super-resolution with variable input resolution, or denoising with unknown noise characteristics. In spite of this, both super-resolution and denoising performance are comparable to existing methods.
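The Hebbian building block behind the Kernel Hebbian Algorithm is easiest to see in input space: Oja's rule extracts the leading principal component from a stream of samples with memory linear in the dimension, and the KHA applies the generalized, multi-component version of this update in kernel feature space. Below is a sketch of plain Oja's rule only, not the kernelized algorithm; the learning rate `eta` is an arbitrary choice:

```python
import numpy as np

def oja_first_pc(samples, eta=0.01):
    """Estimate the leading principal component online via Oja's rule."""
    w = None
    for x in samples:
        if w is None:
            w = x / np.linalg.norm(x)      # initialize from the first sample
        y = w @ x                           # Hebbian response
        w += eta * y * (x - y * w)          # Hebbian growth plus decay toward unit norm
    return w / np.linalg.norm(w)
```

On data with a dominant variance direction, the returned unit vector aligns with that direction, which is the property the KHA exploits component by component in feature space.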
Invariant Feature Extraction and Classification in Kernel Spaces
Cited by 55 (7 self)
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinear variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using Support Vector kernel functions.