Results 1 - 10 of 124
Bayesian Multi-Task Reinforcement Learning
"... We consider the problem of multitask reinforcement learning where the learner is provided with a set of tasks, for which only a small number of samples can be generated for any given policy. As the number of samples may not be enough to learn an accurate evaluation of the policy, it would be necessary ..."
Cited by 17 (1 self)
Multi-Task Learning of Gaussian Graphical Models
"... We present multitask structure learning for Gaussian graphical models. We discuss uniqueness and boundedness of the optimal solution of the maximization problem. A block coordinate descent method leads to a provably convergent algorithm that generates a sequence of positive definite solutions. Thus ..."
Cited by 19 (1 self)
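The positive-definiteness property this snippet highlights is easy to demonstrate in a toy setting. The sketch below is not the paper's block coordinate descent; it only shows that a ridge-regularized inverse of each task's empirical covariance yields a symmetric positive definite precision estimate per task (all data, task counts, and names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def task_precision(S, lam=0.1):
    # (S + lam*I) is positive definite whenever S is positive semidefinite,
    # so its inverse is a valid (positive definite) precision estimate
    return np.linalg.inv(S + lam * np.eye(S.shape[0]))

# three hypothetical tasks observing the same 4 variables
precisions = []
for _ in range(3):
    X = rng.normal(size=(50, 4))        # 50 samples for this task
    S = np.cov(X, rowvar=False)         # empirical covariance
    precisions.append(task_precision(S))
```

Every matrix in `precisions` is symmetric positive definite, which is the invariant the paper proves its iterates maintain.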
Flexible Latent Variable Models for Multi-Task Learning
"... Summary. Given multiple prediction problems such as regression and classification, we are interested in a joint inference framework which can effectively borrow information among tasks to improve the prediction accuracy, especially when the number of training examples per problem is small. In this p ..."
Cited by 26 (1 self)
Key words: multitask learning, latent variable models, hierarchical Bayesian models, model selection, transfer learning
Efficient Reinforcement Learning in Factored MDPs (1999)
"... We present a provably efficient and near-optimal algorithm for reinforcement learning in Markov decision processes (MDPs) whose transition model can be factored as a dynamic Bayesian network (DBN). Our algorithm generalizes the recent algorithm of Kearns and Singh, and assumes that we are given both ..."
Cited by 87 (3 self)
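The factorization this snippet describes can be illustrated directly: in a DBN-factored MDP, the transition probability is a product of per-variable conditionals over small parent sets. A minimal sketch with a hypothetical two-variable binary state (the conditional probabilities are invented for illustration, not taken from the paper):

```python
import itertools

# Hypothetical DBN-factored MDP with two binary state variables.
# Each next-state variable depends only on its own current value and the action.
def p_next_var(i, s, a):
    # toy conditional: taking action a == i flips variable i with prob 0.8,
    # otherwise it flips with prob 0.1
    flip = 0.8 if a == i else 0.1
    return {1 - s[i]: flip, s[i]: 1.0 - flip}

def p_transition(s, a, s_next):
    # P(s' | s, a) = product over variables of P(s'_i | parents_i(s), a)
    p = 1.0
    for i in range(len(s)):
        p *= p_next_var(i, s, a)[s_next[i]]
    return p

s, a = (0, 1), 0
dist = {sn: p_transition(s, a, sn) for sn in itertools.product((0, 1), repeat=2)}
```

Because each factor is a proper conditional distribution, the product distribution over next states sums to one; the factored form needs only per-variable tables instead of one table over the full joint state space.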
Provably Efficient Learning with Typed Parametric Models
"... To quickly achieve good performance, reinforcement-learning algorithms for acting in large continuous-valued domains must use a representation that is both sufficiently powerful to capture important domain characteristics, and yet simultaneously allows generalization, or sharing, among experiences. ..."
Cited by 15 (3 self)
Autonomous transfer for reinforcement learning
In The Seventh International Joint Conference on Autonomous Agents and Multiagent Systems, 2008
"... Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An auto ..."
Cited by 34 (12 self)
Convex Multitask Learning with Flexible Task Clusters
"... Traditionally, multitask learning (MTL) assumes that all the tasks are related. This can lead to negative transfer when tasks are indeed incoherent. Recently, a number of approaches have been proposed that alleviate this problem by discovering the underlying task clusters or relationships. However, ..."
Cited by 6 (0 self)
Transferring instances for model-based reinforcement learning
In AAMAS 2008 Workshop on Adaptive Learning Agents and Multi-Agent Systems, 2008
"... Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress reducing sample complexity, but they have only been applied to model-free learning methods, not more data-efficient model-based learning ..."
Cited by 42 (10 self)
Self-measuring Similarity for Multi-task Gaussian Process
In JMLR: Workshop and Conference Proceedings 27:145–154, 2012 (Workshop on Unsupervised and Transfer Learning)
"... Multitask learning aims at transferring knowledge between similar tasks. The multitask Gaussian process framework of Bonilla et al. models (incomplete) responses of C data points for R tasks (e.g., the responses are given by an R × C matrix) by using a Gaussian process; the covariance function tak ..."
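In the Bonilla et al. framework this snippet refers to, the joint covariance over all task/input pairs is the Kronecker product of an R × R task covariance and a C × C input covariance. A minimal sketch with made-up inputs (the `rbf` helper, sizes, and parameters are assumptions, not the paper's self-measuring construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, lengthscale=1.0):
    # squared-exponential covariance between rows of X
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

R, C = 3, 5                              # R tasks, C data points
X = rng.normal(size=(C, 2))              # hypothetical inputs
Kx = rbf(X)                              # C x C input covariance
L = rng.normal(size=(R, R))
Kf = L @ L.T + 1e-6 * np.eye(R)          # R x R task covariance (positive definite)
K = np.kron(Kf, Kx)                      # (R*C) x (R*C) joint covariance
```

Stacking the R × C response matrix into a length-RC vector, `K` is the covariance of a single Gaussian process over all task/input pairs; off-diagonal blocks of `Kf` are what let observations from one task inform predictions for another.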