Results 1–3 of 3
Marginalized kernels between labeled graphs
 Proceedings of the Twentieth International Conference on Machine Learning
, 2003
Abstract
Cited by 146 (14 self)
A new kernel function between two labeled graphs is presented. Feature vectors are defined as the counts of label paths produced by random walks on graphs. The kernel computation finally boils down to obtaining the stationary state of a discrete-time linear system, and is thus efficiently performed by solving simultaneous linear equations. Our kernel is based on an infinite-dimensional feature space, so it is fundamentally different from other string or tree kernels based on dynamic programming. We present promising empirical results in the classification of chemical compounds.
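The "stationary state of a discrete-time linear system" in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes unlabeled graphs given as numpy adjacency matrices, uniform start/stop distributions, and a hypothetical decay parameter `lam` chosen small enough that the system is stable.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Random-walk graph kernel via the direct (Kronecker) product graph."""
    Wx = np.kron(A1, A2)   # walks on the product graph = simultaneous walks on both graphs
    n = Wx.shape[0]
    p = np.ones(n) / n     # uniform starting distribution
    q = np.ones(n) / n     # uniform stopping distribution
    # Stationary state of the discrete-time linear system x = p + lam * Wx @ x,
    # i.e. the simultaneous linear equations (I - lam * Wx) x = p.
    x = np.linalg.solve(np.eye(n) - lam * Wx, p)
    return q @ x
```

With uniform start/stop weights the value is symmetric in its two arguments, as a kernel must be.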
The Skew Spectrum of Graphs
, 2008
Abstract
Cited by 15 (6 self)
The central issue in representing graph-structured data instances in learning algorithms is designing features which are invariant to permuting the numbering of the vertices. We present a new system of invariant graph features which we call the skew spectrum of graphs. The skew spectrum is based on mapping the adjacency matrix of any (weighted, directed, unlabeled) graph to a function on the symmetric group and computing bispectral invariants. The reduced form of the skew spectrum is computable in O(n³) time, and experiments show that on several benchmark datasets it can outperform state-of-the-art graph kernels.
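The skew spectrum itself requires a Fourier transform on the symmetric group, which is beyond a short sketch. As a much simpler stand-in that illustrates the invariance requirement this abstract describes, the ordinary eigenvalue spectrum of a symmetric adjacency matrix (not the authors' feature system) is likewise unchanged by any renumbering of the vertices:

```python
import numpy as np

def spectrum_features(A):
    # Sorted eigenvalues of a symmetric adjacency matrix: a simple
    # permutation-invariant feature vector (NOT the skew spectrum).
    return np.sort(np.linalg.eigvalsh(A))

rng = np.random.default_rng(0)
A = np.triu((rng.random((6, 6)) < 0.5).astype(float), 1)
A = A + A.T                   # random undirected graph on 6 vertices
perm = rng.permutation(6)
P = np.eye(6)[perm]           # permutation matrix renumbering the vertices
# Identical features regardless of vertex numbering.
assert np.allclose(spectrum_features(A), spectrum_features(P @ A @ P.T))
```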
Graph Kernels (Submitted 05/08; Published xx/08)
, 2008
Abstract
We present a unified framework to study graph kernels, special cases of which include the random walk graph kernel (Gärtner et al., 2003; Borgwardt et al., 2005), marginalized graph kernel (Kashima et al., 2003, 2004; Mahé et al., 2004), and geometric kernel on graphs (Gärtner, 2002). Through extensions of linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) and reduction to a Sylvester equation, we construct an algorithm that improves the time complexity of kernel computation from O(n⁶) to O(n³). When the graphs are sparse, conjugate gradient solvers or fixed-point iterations bring our algorithm into the subcubic domain. Experiments on graphs from bioinformatics and other application domains show that it is often more than a thousand times faster than previous approaches. We then explore connections between diffusion kernels (Kondor and Lafferty, 2002), regularization on graphs (Smola and Kondor, 2003), and graph kernels, and use these connections to propose new graph kernels. Finally, we show that rational kernels (Cortes et al., 2002, 2003, 2004) when specialized to graphs reduce to the random walk graph kernel.
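The fixed-point iteration mentioned in this abstract can be sketched in a few lines. This is a simplified illustration under assumed conditions (unlabeled graphs, uniform start/stop weights, a hypothetical decay `lam` small enough for convergence), not the authors' code: instead of solving the n₁n₂ × n₁n₂ Kronecker system directly, iterate on an n₂ × n₁ matrix, since kron(A1, A2) · vec(X) = vec(A2 X A1ᵀ) for column-stacked vec.

```python
import numpy as np

def rw_kernel_fixed_point(A1, A2, lam=0.05, tol=1e-12, max_iter=1000):
    """Random-walk kernel by fixed-point iteration on X <- P + lam * A2 X A1^T.

    Each step costs only a few small matrix products, avoiding the
    inversion of the large Kronecker-product matrix I - lam * kron(A1, A2).
    """
    n1, n2 = A1.shape[0], A2.shape[0]
    P = np.ones((n2, n1)) / (n1 * n2)   # uniform starting weights, vec(P) = p
    X = P.copy()
    for _ in range(max_iter):
        X_new = P + lam * A2 @ X @ A1.T
        if np.abs(X_new - X).max() < tol:
            X = X_new
            break
        X = X_new
    return X.sum() / (n1 * n2)          # q^T vec(X) with uniform stopping weights
```

The iterate converges to the solution of X − λ·A2 X A1ᵀ = P whenever λ·ρ(A1)·ρ(A2) < 1, and that solution's vectorization is exactly the stationary state of the Kronecker-product linear system.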