## Positive Definite Kernels in Machine Learning (2009)

### BibTeX

```bibtex
@MISC{Cuturi09positivedefinite,
  author = {Marco Cuturi},
  title  = {Positive Definite Kernels in Machine Learning},
  year   = {2009}
}
```

### Abstract

This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hilbert spaces, the natural extension of the set of functions {k(x, ·), x ∈ X} associated with a kernel k defined on a space X. We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces and discuss the idea of combining several kernels to improve the performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain datatypes such as images, graphs or speech segments.

Remark: This report is a draft. Comments and suggestions will be highly appreciated.

### Summary

We provide in this survey a short introduction to positive definite kernels and the set of methods they have inspired in machine learning, also known as kernel methods. The main idea behind kernel methods is the following. Most data-inference tasks aim at defining an appropriate decision function f on a set of objects of interest X. When X is a vector space of dimension d, say R^d, linear functions f_a(x) = aᵀx are one of the easiest and best-understood choices, notably for regression, classification or dimensionality reduction. Given a positive definite kernel k on X, that is, a real-valued function on X × X which effectively quantifies how similar two points x and y are through the value k(x, y), kernel methods are algorithms which estimate functions f of the form

f : x ∈ X ↦ f(x) = ∑_{i∈I} α_i k(x_i, x),   (1)
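To make the expansion f(x) = ∑_i α_i k(x_i, x) concrete, here is a minimal sketch in Python, not taken from the survey itself: it fits the coefficients α by kernel ridge regression (one of the kernel methods the survey covers) with a Gaussian kernel. The function names, the toy data, and the regularization value are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel, a standard positive definite kernel on R^d.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def fit_kernel_ridge(X, y, k, lam=1e-2):
    # Solve (K + lam * I) alpha = y, where K[i, j] = k(x_i, x_j).
    # The resulting estimate has exactly the form f(x) = sum_i alpha_i k(x_i, x).
    n = len(X)
    K = np.array([[k(X[i], X[j]) for j in range(n)] for i in range(n)])
    return np.linalg.solve(K + lam * np.eye(n), y)

def predict(x, X, alpha, k):
    # Evaluate the kernel expansion f(x) = sum_i alpha_i k(x_i, x).
    return sum(a * k(xi, x) for a, xi in zip(alpha, X))

# Toy problem: regress y = sin(t) from a few points in R^1.
X = [np.array([t]) for t in np.linspace(0.0, 3.0, 10)]
y = np.array([np.sin(xi[0]) for xi in X])
alpha = fit_kernel_ridge(X, y, gaussian_kernel)
print(predict(np.array([1.5]), X, alpha, gaussian_kernel))
```

Note that the estimated function is determined entirely by the kernel values k(x_i, x): the algorithm never needs coordinates of x beyond its similarities to the training points, which is what lets the same procedure run on non-vectorial data (images, graphs, speech segments) once a suitable kernel is defined.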