Results 11–20 of 200,549
Model-based transductive learning of the kernel matrix
Machine Learning, 2006
"... This paper addresses the problem of transductive learning of the kernel matrix from a probabilistic perspective. We define the kernel matrix as a Wishart process prior and construct a hierarchical generative model for kernel matrix learning. Specifically, we consider the target kernel matrix as a r ..."
Cited by 7 (1 self)
Block-quantized kernel matrix for fast spectral embedding
In Proceedings of the International Conference on Machine Learning (ICML’06), 2006
"... Eigendecomposition of the kernel matrix is an indispensable procedure in many learning and vision tasks. However, the cubic complexity O(N^3) is impractical for large problems, where N is the data size. In this paper, we propose an efficient approach to solve the eigendecomposition of the kernel matrix W ..."
Cited by 7 (4 self)
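The O(N^3) bottleneck this abstract refers to is easy to see concretely. The sketch below uses a generic landmark-based (Nyström-style) low-rank approximation — one standard workaround for large kernel eigenproblems, not the block-quantization scheme of this particular paper; the RBF kernel, data, and landmark count m are arbitrary illustrative choices:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Exact route: build the full N x N kernel matrix and decompose it, O(N^3)
K = rbf_kernel(X, X)
exact_top = np.linalg.eigvalsh(K)[-1]          # largest eigenvalue

# Nystrom route: m << N landmark columns, roughly O(N m^2)
m = 50
idx = rng.choice(len(X), size=m, replace=False)
C = rbf_kernel(X, X[idx])                      # N x m cross-kernel
W = C[idx]                                     # m x m landmark block
# The nonzero eigenvalues of the N x N approximation C W^+ C^T equal the
# eigenvalues of the small m x m matrix below
M = C.T @ C @ np.linalg.pinv(W)
approx_top = np.max(np.real(np.linalg.eigvals(M)))
```

Because K − C W⁺ Cᵀ is a (generalized) Schur complement of a PSD matrix, the Nyström estimate never exceeds the exact top eigenvalue, and with enough landmarks it is typically close to it.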
Accurate Error Bounds for the Eigenvalues of the Kernel Matrix
Journal of Machine Learning Research 7 (2006) 2303–2328
"... The eigenvalues of the kernel matrix play an important role in a number of kernel methods, in particular, in kernel principal component analysis. It is well known that the eigenvalues of the kernel matrix converge as the number of samples tends to infinity. We derive probabilistic finite sample s ..."
Learning the kernel matrix for XML document clustering
In Proceedings of the IEEE International Conference on e-Technology, e-Commerce and e-Service (EEE), 2005
"... The rapid growth of XML adoption has urged the need for a proper representation for semi-structured documents, where the document structural information has to be taken into account so as to support more precise document analysis. In this paper, an XML document representation named “structured link vector model” is adopted, with a kernel matrix included for modeling the similarity between XML elements. Our formulation allows individual XML elements to have their own weighted contribution to the overall document similarity while at the same time allows the between-element similarity ..."
Cited by 4 (0 self)
Nonlinear dimensionality reduction by semidefinite programming and kernel matrix factorization
In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics, 2005
"... We describe an algorithm for nonlinear dimensionality reduction based on semidefinite programming and kernel matrix factorization. The algorithm learns a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. In earlier work, the kernel matrix was learned by maximiz ..."
Cited by 66 (5 self)
Sharp analysis of low-rank kernel matrix approximations
JMLR: Workshop and Conference Proceedings Vol. 30 (2013) 1–25
"... We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n^2). Low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to O(p^2 n), where p ..."
Cited by 11 (0 self)
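The O(p^2 n) running time quoted here comes from never forming the full n × n matrix: if K ≈ LLᵀ for an n × p factor L (obtained e.g. by incomplete Cholesky or Nyström sampling), a ridge-regression solve collapses to a p × p system via the Woodbury identity. A minimal sketch with a synthetic factor — the sizes and names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 400, 20, 0.1

# Pretend K = L @ L.T came out of some low-rank approximation step
L = rng.normal(size=(n, p))
K = L @ L.T
y = rng.normal(size=n)

# Direct ridge solve on the full n x n matrix: O(n^3)
alpha_direct = np.linalg.solve(K + lam * np.eye(n), y)

# Woodbury identity on the rank-p factor: only a p x p solve, O(p^2 n) overall
# (lam*I + L L^T)^{-1} y = (y - L (lam*I_p + L^T L)^{-1} L^T y) / lam
inner = np.linalg.solve(lam * np.eye(p) + L.T @ L, L.T @ y)
alpha_fast = (y - L @ inner) / lam
```

The two solutions agree up to floating-point error; the fast path never materializes an n × n matrix (the explicit K above exists only to check the answer).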
A nonparametric Bayesian model for kernel matrix completion
In ICASSP, 2010
"... We present a nonparametric Bayesian model for completing low-rank, positive semidefinite matrices. Given an N × N matrix with underlying rank r, and noisy measured values and missing values with a symmetric pattern, the proposed Bayesian hierarchical model nonparametrically uncovers the underlying r ..."
... selection procedure. We present results on a toy problem, and a music recommendation problem, where we complete the kernel matrix of 2,250 pieces of music. Index Terms — kernel matrix completion, Bayesian nonparametrics, music recommendation
Cited by 5 (0 self)
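For contrast with the nonparametric Bayesian model summarized above, the same low-rank PSD completion task can be sketched with a much simpler deterministic scheme — alternate between imputing the missing entries and projecting onto rank-r PSD matrices. Everything here (sizes, rank, observation rate) is synthetic and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 3

# Synthetic rank-r PSD "kernel" matrix with a symmetric pattern of
# observed entries (diagonal always observed)
F = rng.normal(size=(n, r))
K_true = F @ F.T
obs = np.triu(rng.random((n, n)) < 0.7)
obs = obs | obs.T
np.fill_diagonal(obs, True)

# Alternate: project onto rank-r PSD matrices, then clamp the
# observed entries back to their measured values
K = np.where(obs, K_true, 0.0)
for _ in range(200):
    w, V = np.linalg.eigh(K)            # eigenvalues in ascending order
    w = np.clip(w, 0.0, None)           # PSD projection
    w[:-r] = 0.0                        # keep only the top r eigenvalues
    K = np.where(obs, K_true, (V * w) @ V.T)

rel_err = np.linalg.norm(K - K_true) / np.linalg.norm(K_true)
```

With a dense enough symmetric observation pattern this projection loop typically drives the reconstruction error down quickly; the Bayesian model above additionally handles measurement noise and infers the rank rather than fixing r in advance.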
Probabilistic Kernel Matrix Learning with a Mixture Model of Kernels
"... This paper addresses the kernel matrix learning problem in kernel methods. We model the kernel matrix as a random positive definite matrix following the Wishart distribution, with the parameter matrix of the Wishart distribution represented as a linear combination of mutually independent matrices ..."
Cited by 1 (0 self)
SEMI-SUPERVISED METRIC LEARNING BY KERNEL MATRIX ADAPTATION
"... Many supervised and unsupervised learning algorithms depend on the choice of an appropriate distance metric. While metric learning for supervised learning tasks has a long history, extending it to learning tasks with weaker supervisory information has only been studied very recently. In particular, ..."
... learning problem as a kernel learning problem and solve it efficiently by kernel matrix adaptation. Experimental results based on synthetic and real-world data sets show that our approach is promising for semi-supervised metric learning. Keywords: metric learning, kernel learning, semi-supervised learning
Combining Multiple Kernels by Augmenting the Kernel Matrix
In International Workshop on MCS, 2010
"... Abstract. In this paper we present a novel approach to combining multiple kernels where the kernels are computed from different information channels. In contrast to traditional methods that learn a linear combination of n kernels of size m × m, resulting in m coefficients in the trained classifier, ..."
Cited by 5 (2 self)