Results 1–10 of 12
A Survey of Kernels for Structured Data
Abstract

Cited by 114 (3 self)
Kernel methods in general, and support vector machines in particular, have been successful in various learning tasks on data represented in a single table. Much 'real-world' data, however, is structured: it has no natural representation in a single table. Usually, to apply kernel methods to 'real-world' data, extensive preprocessing is performed to embed the data into a real vector space and thus into a single table. This survey describes several approaches to defining positive definite kernels on structured instances directly.
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
 Advances in Neural Information Processing Systems 14, 2001
Abstract

Cited by 34 (10 self)
We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features.
Maximum margin algorithms with Boolean kernels
 In Proceedings of the Sixteenth Annual Conference on Computational Learning Theory, 2003
Abstract

Cited by 9 (2 self)
Recent work has introduced Boolean kernels with which one can learn over a feature space containing all conjunctions of length up to k (for any 1 ≤ k ≤ n) over the original n Boolean features in the input space. This motivates the question of whether maximum margin algorithms such as support vector machines can learn Disjunctive Normal Form (DNF) expressions in the PAC learning model using this kernel. We study this question, as well as a variant in which structural risk minimization (SRM) is performed with the class hierarchy taken over the length of conjunctions. We show that such maximum margin algorithms do not PAC learn t(n)-term DNF for any t(n) = ω(1), even when used with such an SRM scheme. We also consider PAC learning under the uniform distribution and show that if the kernel uses conjunctions of length ω̃(√n) then the maximum margin hypothesis will fail on the uniform distribution as well. Our results concretely illustrate that margin-based algorithms may overfit when learning simple target functions with natural kernels.
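The length-bounded conjunction feature space described in this abstract admits a closed-form kernel: a conjunction of l literals is true on both x and y exactly when all l of its positions are positions where x and y agree, so the number of shared conjunctions of length at most k is a sum of binomial coefficients. A minimal sketch (the code and the name `conj_kernel` are illustrative, not taken from the paper):

```python
from math import comb

def conj_kernel(x, y, k):
    """Count conjunctions of length <= k (over possibly negated
    Boolean literals, including the empty conjunction) that are
    true on both x and y.  A conjunction holds on both inputs iff
    every literal it uses sits at a position where x and y agree,
    so the count is sum_{l=0}^{k} C(same, l)."""
    same = sum(1 for xi, yi in zip(x, y) if xi == yi)
    return sum(comb(same, l) for l in range(k + 1))

# x and y agree on positions 0 and 2, so same = 2 and the kernel
# with k = 2 counts C(2,0) + C(2,1) + C(2,2) = 4
print(conj_kernel([1, 0, 1, 1], [1, 1, 1, 0], 2))  # 4
```

With k = n this collapses to 2^same, the all-conjunctions kernel studied in the related papers in this listing.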
Using kernel perceptrons to learn action effects for planning
 In International Conference on Cognitive Systems (CogSys), 2008
Abstract

Cited by 3 (1 self)
We investigate the problem of learning action effects in STRIPS and ADL planning domains. Our approach is based on a kernel perceptron learning model, where action and state information is encoded in a compact vector representation as input to the learning mechanism, and resulting state changes are produced as output. Empirical results of our approach indicate efficient training and prediction times, with low average error rates (< 3%) when tested on STRIPS and ADL versions of an object manipulation scenario. This work is part of a project to integrate machine learning techniques with a planning system, as part of a larger cognitive architecture linking a high-level reasoning component with a low-level robot/vision system.
Learning STRIPS Operators from Noisy and Incomplete Observations
Abstract

Cited by 2 (0 self)
Agents learning to act autonomously in real-world domains must acquire a model of the dynamics of the domain in which they operate. Learning domain dynamics can be challenging, especially where an agent only has partial access to the world state and/or noisy external sensors. Even in standard STRIPS domains, existing approaches cannot learn from the noisy, incomplete observations typical of real-world domains. We propose a method which learns STRIPS action models in such domains by decomposing the problem into first learning a transition function between states, in the form of a set of classifiers, and then deriving explicit STRIPS rules from the classifiers' parameters. We evaluate our approach on simulated standard planning domains from the International Planning Competition, and show that it learns useful domain descriptions from noisy, incomplete observations.
Dimension and Margin Bounds for Reflection-invariant Kernels
Abstract

Cited by 1 (1 self)
A kernel over the Boolean domain is said to be reflection-invariant if its value does not change when we flip the same bit in both arguments. (Many popular kernels have this property.) We study the geometric margins that can be achieved when we represent a specific Boolean function f by a classifier that employs a reflection-invariant kernel. It turns out that ‖f̂‖∞ is an upper bound on the average margin. Furthermore, ‖f̂‖∞^(-1) is a lower bound on the smallest dimension of a feature space associated with a reflection-invariant kernel that allows for a correct representation of f. This is, to the best of our knowledge, the first paper that exhibits margin and dimension bounds for specific functions (as opposed to function families). Several generalizations are considered as well. The main mathematical results are presented in a setting with arbitrary finite domains and a quite general notion of invariance.
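As a concrete illustration of the definition (the code and names below are mine, not from the paper): the all-conjunctions kernel K(x, y) = 2^same depends only on where x and y agree, and flipping the same bit in both arguments preserves the agreement pattern, so this kernel is reflection-invariant.

```python
def all_conj_kernel(x, y):
    # 2^(number of positions where x and y agree)
    same = sum(1 for xi, yi in zip(x, y) if xi == yi)
    return 2 ** same

def flip(v, i):
    # flip bit i of a 0/1 list
    w = list(v)
    w[i] ^= 1
    return w

x, y = [1, 0, 1, 0], [1, 1, 1, 0]
for i in range(len(x)):
    # flipping bit i in BOTH arguments leaves the agreement
    # pattern, and hence the kernel value, unchanged
    assert all_conj_kernel(flip(x, i), flip(y, i)) == all_conj_kernel(x, y)
```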
Online Rule Learning via Weighted Model Counting
Abstract

Cited by 1 (1 self)
Online multiplicative weight-update learning algorithms, such as Winnow, have proven to perform remarkably well for learning simple disjunctions with few relevant attributes. The aim of this paper is to extend the Winnow algorithm to more expressive concepts characterized by DNF formulas with few relevant rules. For such problems, the convergence of Winnow is still fast, since the number of mistakes increases only linearly with the number of attributes. Yet the learner is confronted with an important computational barrier: during any prediction, it must evaluate the weighted sum of an exponential number of rules. To circumvent this issue, we convert the prediction problem into a Weighted Model Counting problem. The resulting algorithm, SharpNow, is an exact simulation of Winnow equipped with backtracking, caching, and decomposition techniques. Experiments on static and drifting problems demonstrate the performance of the algorithm in terms of accuracy and speed.
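For reference, the base algorithm being extended can be sketched in a few lines (a textbook Winnow with promotion/demotion, not the SharpNow implementation; the parameter choices here are illustrative):

```python
def winnow(stream, n, theta=None):
    """Classic Winnow for monotone disjunctions over n Boolean
    attributes: predict 1 iff the weighted sum reaches theta; on a
    false negative, double the weights of the active attributes,
    on a false positive, halve them."""
    theta = n if theta is None else theta
    w = [1.0] * n
    mistakes = 0
    for x, label in stream:          # x is a 0/1 list, label in {0, 1}
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != label:
            mistakes += 1
            factor = 2.0 if label == 1 else 0.5
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# demo: target disjunction x0 OR x2 over n = 5 attributes
examples = [([1, 0, 0, 0, 0], 1), ([0, 0, 1, 0, 0], 1),
            ([0, 1, 0, 1, 1], 0), ([0, 0, 0, 0, 0], 0)]
w, m = winnow(examples * 5, 5)
```

Winnow's mistake bound depends only logarithmically on n for sparse disjunctions; the computational barrier the abstract describes arises when each weight corresponds to one of exponentially many candidate rules, which is what the conversion to weighted model counting addresses.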
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
Abstract
 Add to MetaCart
We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features. We demonstrate a tradeoff between the computational efficiency with which these kernels can be computed and the generalization ability of the resulting classifier. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over an exponential number of conjunctions; however, we also prove that using such kernels the Perceptron algorithm can make an exponential number of mistakes even when learning simple functions. We also consider an analogous use of kernel functions to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. While known upper bounds imply that Winnow can learn DNF formulae with a polynomial mistake bound in this setting, we prove that it is computationally hard to simulate Winnow's behavior for learning DNF over such a feature set, and thus that such kernel functions for Winnow are not efficiently computable.
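The "all conjunctions" construction mentioned here has a simple closed form: the number of conjunctions (over possibly negated literals) true on both x and y is 2 raised to the number of positions where they agree, so the Perceptron can be run in dual form over this exponential feature space. A minimal sketch with a toy single-literal target (my own illustrative code, not the paper's):

```python
import itertools

def K(x, y):
    # conjunctions true on both x and y: one binary choice per
    # agreeing position (use its literal or not), so 2^(#agreements)
    same = sum(1 for a, b in zip(x, y) if a == b)
    return 2 ** same

def kernel_perceptron(data, epochs=100):
    """Dual-form Perceptron: store each mistake example with its
    label; the implicit weight vector lives in the exponential
    space of all conjunctions, but predictions only need K."""
    sv = []                               # (example, +/-1 label)
    for _ in range(epochs):
        mistakes = 0
        for x, label in data:             # label in {-1, +1}
            score = sum(s * K(xi, x) for xi, s in sv)
            pred = 1 if score > 0 else -1
            if pred != label:
                sv.append((x, label))
                mistakes += 1
        if mistakes == 0:                 # a clean pass: converged
            break
    return sv

# toy target: f(x) = +1 iff x[0] = 1, over all of {0,1}^3
data = [(list(x), 1 if x[0] == 1 else -1)
        for x in itertools.product([0, 1], repeat=3)]
sv = kernel_perceptron(data)
```

Separable targets like this one converge quickly; the paper's negative result is that for other simple target functions the same algorithm can be forced into exponentially many mistakes.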
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
 In Proceedings of NIPS'01
Abstract
We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features.
Finding the Best Panoramas
 2011
Abstract
Google Maps publishes street-level panoramic photographs from around the world in the Street View service. When users request street-level imagery in a given area, we would like to show the best or most representative imagery from the region. In order to select the best panorama for a region of any size, I developed a panorama ranking algorithm. An enhancement to this technique is also described here, leveraging the Alternating Direction Method of Multipliers to create a high-throughput distributed online learning algorithm that should allow for instant classification updating based on real-time user traffic. The ranking algorithm was deployed on maps.google.com on Monday, December 12, 2011. For more in-depth information on the particular difficulties posed by our work on Google Street View, please refer to [1] and [2].