## Tree-based batch mode reinforcement learning (2005)

### Download Links

- [www.montefiore.ulg.ac.be]
- [jmlr.csail.mit.edu]
- [jmlr.org]
- DBLP

### Other Repositories/Bibliography

Venue: Journal of Machine Learning Research

Citations: 134 (28 self-citations)

### BibTeX

    @ARTICLE{Ernst05tree-basedbatch,
      author  = {Damien Ernst and Pierre Geurts and Louis Wehenkel},
      title   = {Tree-based batch mode reinforcement learning},
      journal = {Journal of Machine Learning Research},
      year    = {2005},
      volume  = {6},
      pages   = {503--556}
    }

### Abstract

Reinforcement learning aims to determine an optimal control policy from interaction with a system or from observations gathered from a system. In batch mode, this can be achieved by approximating the so-called Q-function from a set of four-tuples (x_t, u_t, r_t, x_{t+1}), where x_t denotes the system state at time t, u_t the control action taken, r_t the instantaneous reward obtained, and x_{t+1} the successor state of the system, and by deriving the control policy from this Q-function. The Q-function approximation may be obtained as the limit of a sequence of (batch mode) supervised learning problems. Within this framework we describe the use of several classical tree-based supervised learning methods (CART, Kd-tree, tree bagging) and two newly proposed ensemble algorithms, namely extremely and totally randomized trees. We study their performance on several examples and find that ensemble methods based on regression trees perform well at extracting relevant information about the optimal control policy from sets of four-tuples. In particular, totally randomized trees give good results while ensuring convergence of the sequence, whereas relaxing the convergence constraint yields even better accuracy with the extremely randomized trees.
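The iterative scheme the abstract describes (fitted Q-iteration) can be sketched as follows. Everything here is an illustrative assumption rather than the paper's setup: the toy 1-D system, the reward region, the constants, and the k-nearest-neighbour averager standing in for the paper's tree-based supervised learners (CART, extremely/totally randomized trees, etc.).

```python
import random

GAMMA = 0.9          # discount factor (illustrative choice)
ACTIONS = (-1, 1)    # hypothetical discrete action set

def knn_regressor(samples, k=3):
    """Averaging k-NN regressor over ((state, action), target) pairs,
    standing in for the tree-based regressors used in the paper."""
    def predict(x, u):
        pool = [(abs(x - xs), y) for (xs, us), y in samples if us == u]
        pool.sort(key=lambda p: p[0])
        nearest = pool[:k] or [(0.0, 0.0)]
        return sum(y for _, y in nearest) / len(nearest)
    return predict

def fitted_q_iteration(four_tuples, n_iterations=20):
    q = lambda x, u: 0.0  # Q_0 is identically zero
    for _ in range(n_iterations):
        # One batch-mode supervised learning problem per iteration:
        # inputs (x_t, u_t), targets r_t + gamma * max_u' Q_{N-1}(x_{t+1}, u')
        training = [((x, u), r + GAMMA * max(q(x2, a) for a in ACTIONS))
                    for (x, u, r, x2) in four_tuples]
        q = knn_regressor(training)
    return q

# Hypothetical toy system: 1-D state, actions shift the state left or right,
# reward 1 whenever the successor state lands in [-0.5, 0.5].
random.seed(0)
four_tuples = []
for _ in range(200):
    x = random.uniform(-5.0, 5.0)
    u = random.choice(ACTIONS)
    x2 = x + u
    r = 1.0 if abs(x2) <= 0.5 else 0.0
    four_tuples.append((x, u, r, x2))

q_hat = fitted_q_iteration(four_tuples)
policy = lambda x: max(ACTIONS, key=lambda a: q_hat(x, a))
```

The key design point is that each iteration is an ordinary batch-mode regression problem, so any supervised learner can be plugged in; the paper's convergence discussion hinges on which learners (e.g. totally randomized trees with a frozen structure) make the sequence of fitted Q-functions converge.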