### ABSTRACT OF THE DISSERTATION: Local Planning for Continuous Markov Decision Processes

, 2014

Abstract
In this dissertation, algorithms that create plans to maximize a numeric reward over time are discussed. A general formulation of this problem is in terms of reinforcement learning (RL), which has traditionally been restricted to small discrete domains. Here, we are concerned instead with domains that violate this assumption, as we assume domains are both continuous and high dimensional. Problems of swimming, riding a bicycle, and walking are concrete examples of domains satisfying these assumptions, and simulations of these problems are tackled here. To perform planning in continuous domains, it has become common practice to use discrete planners after uniformly discretizing dimensions of the problem, leading to an exponential growth in problem size as dimension increases. Furthermore, traditional methods develop a policy for the entire domain simultaneously, but have at best polynomial planning costs in the size of the problem, which (as mentioned) grows exponentially with respect to dimension when uniform discretization is performed. To sidestep
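The exponential growth the abstract attributes to uniform discretization can be made concrete with a small back-of-the-envelope sketch (not from the dissertation; the bin count of 10 per dimension is an illustrative assumption, not the author's):

```python
# With k bins per dimension, a uniformly discretized d-dimensional
# continuous state space has k**d discrete states -- the "curse of
# dimensionality" the abstract refers to.

def num_discrete_states(bins: int, dim: int) -> int:
    """Number of cells in a uniform grid with `bins` bins per dimension."""
    return bins ** dim

# Growth with dimension at an assumed 10 bins per dimension:
for dim in (1, 2, 4, 8):
    print(f"dim={dim}: {num_discrete_states(10, dim)} states")
```

Even at a coarse 10 bins per dimension, an 8-dimensional problem already has 10^8 discrete states, which is why a planner with polynomial cost in problem size still scales exponentially in dimension.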