Results 1–10 of 200
Imitation Learning in Relational Domains: A Functional-Gradient Boosting Approach
 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
"... Imitation learning refers to the problem of learning how to behave by observing a teacher in action. We consider imitation learning in relational domains, in which there is a varying number of objects and relations among them. In prior work, simple relational policies are learned by viewing imitatio ..."
Cited by 16 (11 self)
, and better represent our prior beliefs about the form of the function. Building on recent generalizations of functional gradient boosting to relational representations, we implement a functional gradient boosting approach to imitation learning in relational domains. In particular, given a set of traces from
Imitation Learning in Relational Domains Using Functional Gradient Boosting
"... It is common knowledge that both humans and animals learn new skills by observing others. This problem, which is called imitation learning, can be formulated as learning a representation of a policy – a mapping from states to actions – from examples of that policy. Our focus is on relational domains ..."
deterministic policy to imitate the expert, we learn a stochastic policy where the probability of an action given a state is represented by a sum of potential functions. Second, we leverage the recently developed functional-gradient boosting approach to learn a set of regression trees, each of which represents
Learning Markov Logic Networks via Functional Gradient Boosting
"... Abstract—Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent example is Markov Logic Networks (MLNs). While MLNs are indeed highly expressive, this expressiveness comes at a cost. Learning MLNs is a hard prob ..."
Cited by 29 (9 self)
both the weights and the structure of the MLN simultaneously. Our approach is based on functional gradient boosting where the problem of learning MLNs is turned into a series of relational functional approximation problems. We use two kinds of representations for the gradients: clause-based and tree
Greedy Function Approximation: A Gradient Boosting Machine
 Annals of Statistics
, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent "boosting" paradigm is developed for additi ..."
Cited by 1000 (13 self)
Soft Margins for AdaBoost
, 1998
"... Recently ensemble methods like AdaBoost were successfully applied to character recognition tasks, seemingly defying the problems of overfitting. This paper shows that although AdaBoost rarely overfits in the low noise regime it clearly does so for higher noise levels. Central for understanding this ..."
Cited by 333 (24 self)
this fact is the margin distribution, and we find that AdaBoost achieves, by doing gradient descent in an error function with respect to the margin, asymptotically a hard margin distribution, i.e. the algorithm concentrates its resources on a few hard-to-learn patterns (here an interesting overlap emerge
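The reweighting behavior the snippet describes, AdaBoost concentrating its resources on a few hard-to-learn patterns, can be illustrated with a minimal sketch. This is not the paper's algorithm, just standard AdaBoost on one-dimensional data with decision stumps; all function names are illustrative:

```python
# Minimal AdaBoost sketch: weighted decision stumps on 1-D data.
# Misclassified points get upweighted each round, so later stumps
# focus on the hard-to-learn patterns.
import numpy as np

def weighted_stump(x, y, w):
    """Best threshold/polarity stump under sample weights w (y in {-1, +1})."""
    best = (np.inf, 0.0, 1)
    for t in np.unique(x):
        for s in (1, -1):
            pred = s * np.sign(x - t + 1e-12)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, t, s)
    err, t, s = best
    return err, (lambda q: s * np.sign(q - t + 1e-12))

def adaboost(x, y, n_rounds=20):
    w = np.ones(len(y)) / len(y)          # uniform initial weights
    ensemble = []
    for _ in range(n_rounds):
        err, h = weighted_stump(x, y, w)
        err = max(err, 1e-10)             # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * h(x))    # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, h))
    return lambda q: np.sign(sum(a * h(q) for a, h in ensemble))
```

The weight update `w *= exp(-alpha * y * h(x))` is exactly where the margin view enters: each round is a gradient step on an exponential function of the margins.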
A General Boosting Method and its Application to Learning Ranking Functions for Web Search
 Neur. Inf. Proc. Sys. Conf.
, 2008
"... We present a general boosting method extending functional gradient boosting to optimize complex loss functions that are encountered in many machine learning problems. Our approach is based on optimization of quadratic upper bounds of the loss functions which allows us to present a rigorous convergen ..."
Cited by 87 (16 self)
Stochastic Gradient Boosting
 Computational Statistics and Data Analysis
, 1999
"... Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current "pseudo"-residuals by least-squares at each iteration. The pseudo-residuals are the gradient of the loss functional being minimized, with respect to ..."
Cited by 285 (1 self)
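The procedure this abstract describes, sequentially fitting a base learner to pseudo-residuals by least squares, can be sketched as follows. This is a generic squared-error gradient boosting sketch with one-dimensional regression stumps, not the paper's implementation; names like `fit_stump` are illustrative:

```python
# Gradient boosting sketch: each round fits a least-squares stump to the
# pseudo-residuals (the negative gradient of the loss at the current fit).
import numpy as np

def fit_stump(x, r):
    """Least-squares regression stump on 1-D inputs: pick the split
    threshold minimizing the squared error of the two leaf means."""
    best = (np.inf, None, r.mean(), r.mean())
    for t in np.unique(x)[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Additive model built stagewise; for squared error the
    pseudo-residuals are simply y - F(x)."""
    f0 = y.mean()
    F = np.full_like(y, f0, dtype=float)
    learners = []
    for _ in range(n_rounds):
        residuals = y - F                 # negative gradient of (1/2)(y - F)^2
        h = fit_stump(x, residuals)
        learners.append(h)
        F += lr * h(x)                    # shrunken stagewise update
    return lambda q: f0 + lr * sum(h(q) for h in learners)

# usage: fit a smooth 1-D target with a sum of shrunken stumps
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x)
model = gradient_boost(x, y)
```

Swapping the residual computation for the gradient of another loss functional is what makes the scheme general, which is the point of the surrounding papers.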
parboost (License GPL-2)
, 2014
"... Description Distributed gradient boosting based on the mboost package. The parboost package is designed to scale up componentwise functional gradient boosting in a distributed memory environment by splitting the observations into disjoint subsets, or alternatively using bootstrap samples (bagging). ..."
Learning for efficient retrieval of structured data with noisy queries
 In ICML ’07: The Twenty-Fourth International Conference on Machine Learning
, 2007
"... Increasingly large collections of structured data necessitate the development of efficient, noise-tolerant retrieval tools. In this work, we consider this issue and describe an approach to learn a similarity function that is not only accurate, but that also increases the effectiveness of retrieval d ..."
Cited by 5 (2 self)
data structures. We present an algorithm that uses functional gradient boosting to maximize both retrieval accuracy and the retrieval efficiency of vantage point trees. We demonstrate the effectiveness of our approach on two datasets, including a moderately sized real-world dataset of folk music. 1.
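The vantage point trees named in this abstract are a metric-space index: each node picks a vantage point and partitions the remaining points by distance to it, so queries can prune whole subtrees. A minimal sketch, assuming one-dimensional points with the absolute-difference metric (illustrative names, not the paper's learned variant):

```python
# Vantage point tree sketch: build partitions points by distance to a
# vantage point; nearest-neighbor search prunes the far side whenever
# |d(q, vp) - radius| exceeds the best distance found so far.
import numpy as np

class VPNode:
    def __init__(self, point, radius, inside, outside):
        self.point, self.radius = point, radius
        self.inside, self.outside = inside, outside

def build(points):
    if len(points) == 0:
        return None
    vp, rest = points[0], points[1:]
    if len(rest) == 0:
        return VPNode(vp, 0.0, None, None)
    d = np.abs(rest - vp)                 # 1-D metric for the sketch
    radius = np.median(d)                 # median split balances the tree
    return VPNode(vp, radius,
                  build(rest[d <= radius]),
                  build(rest[d > radius]))

def nearest(node, q, best=(np.inf, None)):
    """Return (distance, point) of the nearest neighbor of q."""
    if node is None:
        return best
    d = abs(q - node.point)
    if d < best[0]:
        best = (d, node.point)
    near, far = ((node.inside, node.outside) if d <= node.radius
                 else (node.outside, node.inside))
    best = nearest(near, q, best)
    if abs(d - node.radius) <= best[0]:   # far side may still hold a closer point
        best = nearest(far, q, best)
    return best
```

The paper's contribution is learning the similarity function so that this pruning stays both correct and cheap; the structure itself is the standard one sketched here.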