New Representations and Approximations for Sequential Decision Making (2007)

by Tao Wang
Results 1 - 2 of 2

Approximate Dynamic Programming By Minimizing Distributionally Robust Bounds

by Marek Petrik
Abstract - Cited by 2 (1 self)
Approximate dynamic programming is a popular method for solving large Markov decision processes. This paper describes a new class of approximate dynamic programming (ADP) methods—distributionally robust ADP—that address the curse of dimensionality by minimizing a pessimistic bound on the policy loss. This approach turns ADP into an optimization problem, for which we derive new mathematical program formulations and analyze its properties. DRADP improves on the theoretical guarantees of existing ADP methods—it guarantees convergence and L1 norm-based error bounds. The empirical evaluation of DRADP shows that the theoretical guarantees translate well into good performance on benchmark problems.
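As a rough schematic of the idea (a paraphrase for orientation only; the feature matrix Phi, the uncertainty set M, and the exact bound are placeholders rather than the paper's actual program): instead of fitting an approximate value function v_w = Phi w through a projected fixed point as in classical ADP, one chooses w to minimize a worst-case weighted Bellman residual, a pessimistic surrogate for the loss of the induced greedy policy:

    % Schematic objective only; T is the Bellman optimality operator,
    % Phi a feature matrix, M a set of plausible occupancy measures.
    % An illustration of "minimizing a pessimistic bound on policy loss",
    % not Petrik's exact formulation.
    \min_{w}\ \max_{\mu \in \mathcal{M}}\ \mu^{\top}\bigl(T\,\Phi w - \Phi w\bigr)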

Citation Context

...time spent in the state and is in some sense the dual of a value function. Occupancy frequencies have been used, for example, to solve factored MDPs (Dolgov & Durfee, 2006) and in dual dynamic programming (Wang, 2007; Wang et al., 2008) (the term "dual dynamic programming" also refers to unrelated linear stochastic programming methods). These methods can improve the empirical performance, but proving bounds on th...
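For orientation, the occupancy-frequency view this snippet mentions arises from standard LP duality for discounted MDPs (textbook material, not specific to either cited paper), where alpha is the initial-state distribution and gamma the discount factor:

    % Primal, over value functions v:
    \min_{v}\ \sum_{s} \alpha(s)\, v(s)
      \quad \text{s.t.} \quad
      v(s) \ge r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, v(s')
      \quad \forall s, a
    % Dual, over nonnegative state-action measures mu (the occupancy
    % frequencies the snippet refers to):
    \max_{\mu \ge 0}\ \sum_{s,a} \mu(s,a)\, r(s,a)
      \quad \text{s.t.} \quad
      \sum_{a} \mu(s',a) = \alpha(s') + \gamma \sum_{s,a} P(s' \mid s,a)\, \mu(s,a)
      \quad \forall s'

At the optimum, mu(s,a) equals the expected discounted number of visits to the pair (s,a) starting from alpha, which is the precise sense in which occupancy frequencies are "the dual of a value function."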

Dual Representations for Dynamic Programming

by Tao Wang, Daniel Lizotte, Michael Bowling, Dale Schuurmans
Abstract - Cited by 2 (0 self)
We propose a dual approach to dynamic programming and reinforcement learning, based on maintaining an explicit representation of visit distributions as opposed to value functions. An advantage of working in the dual is that it allows one to exploit techniques for representing, approximating, and estimating probability distributions, while also avoiding any risk of divergence. We begin by formulating a modified dual of the standard linear program that guarantees the solution is a globally normalized visit distribution. Using this alternative representation, we then derive dual forms of dynamic programming, including on-policy updating, policy improvement and off-policy updating, and furthermore show how to incorporate function approximation. We then investigate the convergence properties of these algorithms, both theoretically and empirically, and show that the dual approach remains stable in situations when primal value function approximation diverges. Overall, the dual approach offers a viable alternative to standard dynamic programming techniques and offers new avenues for developing algorithms for sequential decision making.
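To make the occupancy-based view concrete, here is a minimal numerical sketch of the on-policy dual update for a state-visit distribution; the toy MDP (P_pi, r_pi, mu0) and all names are invented for illustration, and this is a sketch of the general idea rather than the authors' exact algorithm:

    import numpy as np

    gamma = 0.9
    n = 4  # number of states in a toy MDP

    rng = np.random.default_rng(0)
    P_pi = rng.random((n, n))
    P_pi /= P_pi.sum(axis=1, keepdims=True)  # P_pi[s, s'] = Pr(s' | s) under the fixed policy
    r_pi = rng.random(n)                     # expected one-step reward in each state
    mu0 = np.full(n, 1.0 / n)                # initial state distribution

    # The normalized discounted visit distribution d is the fixed point of
    #   d = (1 - gamma) * mu0 + gamma * P_pi^T d.
    # This iteration is a gamma-contraction in the l1 norm, so it cannot
    # diverge, and every iterate remains a proper probability distribution.
    d = mu0.copy()
    for _ in range(200):
        d = (1 - gamma) * mu0 + gamma * P_pi.T @ d
        assert np.isclose(d.sum(), 1.0)

    # The normalized discounted return is then a plain expectation under d.
    print("visit distribution:", d)
    print("policy return:", d @ r_pi / (1 - gamma))

Every iterate stays on the probability simplex, which is one concrete reading of the abstract's claim that working with visit distributions avoids any risk of divergence.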