Results 1–10 of 33
On constraint sampling in the linear programming approach to approximate dynamic programming
Mathematics of Operations Research, 2004. doi:10.1287/moor.1040.0094
Approximate Solutions to Markov Decision Processes
1999. Cited by 82 (10 self).
Abstract:
One of the basic problems of machine learning is deciding how to act in an uncertain world. For example, if I want my robot to bring me a cup of coffee, it must be able to compute the correct sequence of electrical impulses to send to its motors to navigate from the coffee pot to my office. In fact, since the results of its actions are not completely predictable, it is not enough just to compute the correct sequence; instead the robot must sense and correct for deviations from its intended path. In order for any machine learner to act reasonably in an uncertain environment, it must solve problems like the above one quickly and reliably. Unfortunately, the world is often so complicated that it is difficult or impossible to find the optimal sequence of actions to achieve a given goal. So, in order to scale our learners up to real-world problems, we usually must settle for approximate solutions. One representation for a learner's environment and goals is a Markov decision process or MDP. ...
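The MDP formalism mentioned in this abstract can be made concrete in a few lines. The following sketch runs exact value iteration on a tiny invented three-state chain (the states, transition probabilities, and rewards are illustrative assumptions, not taken from the thesis); it is the exact baseline that approximate methods try to scale up:

```python
import numpy as np

# Toy MDP: 3 states, 2 actions ("move", "stay"). Value iteration computes
#   V*(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V*(s') ].
n_states, n_actions, gamma = 3, 2, 0.9
P = np.zeros((n_actions, n_states, n_states))   # P[a, s, s'] = Pr(s' | s, a)
P[0] = [[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]]  # "move"
P[1] = np.eye(n_states)                                      # "stay"
R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0]])           # R[s, a]

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
    V_new = Q.max(axis=1)                         # greedy backup
    if np.max(np.abs(V_new - V)) < 1e-8:          # converged
        V = V_new
        break
    V = V_new
```

In the absorbing state the "move" action pays 1 forever, so its value converges to 1/(1 - gamma) = 10; this exhaustive sweep over all states is exactly what becomes infeasible in the large problems the abstract alludes to.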
Near-optimal Character Animation with Continuous Control
ACM Transactions on Graphics (SIGGRAPH 2007), 2007. Cited by 64 (9 self).
Abstract:
We present a new approach to real-time character animation with interactive control. Given a corpus of motion capture data and a desired task, we automatically compute near-optimal controllers using a low-dimensional basis representation. We show that these controllers produce motion that fluidly responds to several dimensions of user control and environmental constraints in real time. Our results indicate that very few basis functions are required to create high-fidelity character controllers which permit complex user navigation and obstacle-avoidance tasks.
Greedy linear value-approximation for factored Markov decision processes
In Proceedings of the 18th National Conference on Artificial Intelligence, 2002. Cited by 35 (7 self).
Abstract:
Significant recent work has focused on using linear representations to approximate value functions for factored Markov decision processes (MDPs). Current research has adopted linear programming as an effective means to calculate approximations for a given set of basis functions, tackling very large MDPs as a result. However, a number of issues remain unresolved: How accurate are the approximations produced by linear programs? How hard is it to produce better approximations? And where do the basis functions come from? To address these questions, we first investigate the complexity of minimizing the Bellman error of a linear value function approximation, showing that this is an inherently hard problem.
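The linear-programming approach this abstract refers to can be sketched generically. The following is a minimal approximate linear program (ALP) on an invented random MDP with a hand-picked polynomial basis (both are illustrative assumptions, not the paper's factored formulation): choose weights w so that V is approximated by Phi @ w, subject to one Bellman inequality per state-action pair.

```python
import numpy as np
from scipy.optimize import linprog

# ALP sketch:  min  c^T Phi w
#              s.t. (Phi w)(s) >= R(s,a) + gamma * sum_s' P(s'|s,a) (Phi w)(s')
# for every state-action pair.  MDP and basis are invented for illustration.
rng = np.random.default_rng(0)
n_states, n_actions, gamma, k = 20, 2, 0.9, 3

P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)              # make each row a distribution
R = rng.random((n_states, n_actions))
s = np.arange(n_states, dtype=float)
Phi = np.column_stack([np.ones(n_states), s, s ** 2])  # k basis functions
c = np.full(n_states, 1.0 / n_states)          # state-relevance weights

# One inequality per (s, a), rewritten as  -(Phi - gamma * P_a Phi) w <= -R[:, a]
A_ub = np.vstack([-(Phi - gamma * P[a] @ Phi) for a in range(n_actions)])
b_ub = np.concatenate([-R[:, a] for a in range(n_actions)])
res = linprog(c @ Phi, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * k)
w = res.x                                      # approximate value: Phi @ w
```

Note that the LP has only k variables but one constraint per state-action pair, which is precisely why constraint sampling (the first and fourth results in this listing) becomes relevant at scale.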
Tetris: A study of randomized constraint sampling
Probabilistic and Randomized Methods for Design Under Uncertainty, 1994. Cited by 29 (5 self).
Abstract:
Randomized constraint sampling has recently been proposed as an approach for approximating solutions to optimization problems when the number of constraints is intractable – say, a googol or even infinity. The idea is to define a probability distribution ψ over the set of constraints and to sample a subset
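The sampling idea can be illustrated on a toy LP whose constraint set is deliberately huge. Everything below (the quarter-circle constraints, the uniform choice of the distribution ψ, the sample size m) is an invented example, not the paper's Tetris application:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP:  min  x + y   s.t.  x*cos(theta_i) + y*sin(theta_i) >= 1
# for a million angles theta_i in [0, pi/2] -- too many to enumerate in
# earnest.  Randomized constraint sampling: draw m indices from a
# distribution psi (uniform here) and solve with only those constraints.
rng = np.random.default_rng(1)
n_constraints, m = 1_000_000, 200

psi = np.full(n_constraints, 1.0 / n_constraints)   # sampling distribution
idx = rng.choice(n_constraints, size=m, p=psi)
theta = (np.pi / 2) * idx / n_constraints

A_ub = -np.column_stack([np.cos(theta), np.sin(theta)])  # -a_i^T z <= -1
b_ub = -np.ones(m)
res = linprog([1.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
# Dropping constraints can only enlarge the feasible set, so the sampled
# optimum is at most the full optimum of 2, attained at (1, 1).
```

The design question the paper studies is exactly how large m must be, and how ψ should be chosen, for the sampled solution to nearly satisfy the constraints that were never looked at.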
An approximate dynamic programming approach to solving dynamic oligopoly models
2010. Cited by 19 (8 self).
Abstract:
In this paper we introduce a new method to approximate Markov perfect equilibrium in large-scale Ericson and Pakes (1995)-style dynamic oligopoly models that are not amenable to exact solution due to the curse of dimensionality. The method is based on an algorithm that iterates an approximate best response operator using an approximate dynamic programming approach based on linear programming. We provide results that lend theoretical support to our approximation. We test our method on an important class of models based on Pakes and McGuire (1994). Our results suggest that the approach we propose significantly expands the set of dynamic oligopoly models that can be analyzed computationally.
A Cost-Shaping Linear Program for Average-Cost Approximate Dynamic Programming with Performance Guarantees
2006.
Incremental plan aggregation for generating policies in MDPs
In AAMAS, 2010. Cited by 15 (1 self).
Abstract:
Despite the recent advances in planning with MDPs, the problem of generating good policies is still hard. This paper describes a way to generate policies in MDPs by (1) determinizing the given MDP model into a classical planning problem; (2) building partial policies offline by producing solution plans to the classical planning problem and incrementally aggregating them into a policy; and (3) using sequential Monte Carlo (MC) simulations of the partial policies before execution, in order to assess the probability of replanning for a policy during execution. The objective of this approach is to quickly generate policies whose probability of replanning is low and below a given threshold. We describe our planner RFF, which incorporates the above ideas. We present theorems showing the termination, soundness, and completeness properties of RFF. RFF was the winner of the ...
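The Monte Carlo step described in (3) can be sketched independently of the rest of the planner. The toy domain, the dict representation of a partial policy, and the slip probability below are all illustrative assumptions, not RFF's actual implementation:

```python
import random

def simulate(policy, step, start, horizon, rng):
    """Return True if one simulated trajectory leaves the partial policy."""
    s = start
    for _ in range(horizon):
        if s == "goal":
            return False
        if s not in policy:
            return True                 # uncovered state -> would replan
        s = step(s, policy[s], rng)     # stochastic outcome of the action
    return False

def replan_probability(policy, step, start, horizon=50, n_sims=2000, seed=0):
    """MC estimate of the probability that execution triggers replanning."""
    rng = random.Random(seed)
    replans = sum(simulate(policy, step, start, horizon, rng)
                  for _ in range(n_sims))
    return replans / n_sims

# Toy domain: action "go" from s0 reaches the goal w.p. 0.8, otherwise it
# slips to a state "slip" that the partial policy below does not cover.
def step(state, action, rng):
    return "goal" if rng.random() < 0.8 else "slip"

p = replan_probability({"s0": "go"}, step, "s0")
```

Here the estimate p should land near the slip probability of 0.2; in the scheme the abstract describes, such an estimate is compared against the given threshold to decide whether the partial policy needs to be extended with more aggregated plans.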
Focussed dynamic programming: Extensive comparative results
Robotics, 2004. Cited by 12 (0 self).
Abstract:
We present a heuristic-based propagation algorithm for solving restricted Markov decision processes (MDPs). Our approach, which combines ideas from deterministic search and recent dynamic programming methods, focusses computation towards promising areas of the state space. It is thus able to significantly reduce the amount of processing required in producing a solution. We present a number of results comparing our approach to existing algorithms on a robotic path planning domain.
First-order decision-theoretic planning in structured relational environments
2008. Cited by 10 (2 self).
Abstract:
We consider the general framework of first-order decision-theoretic planning in structured relational environments. Most traditional solution approaches to these planning problems ground the relational specification w.r.t. a specific domain instantiation and apply a solution approach directly to the resulting ground Markov decision process (MDP). Unfortunately, the space and time complexity of these solution algorithms scale linearly with the domain size in the best case and exponentially in the worst case. An alternate approach to grounding a relational planning problem is to lift it to a first-order MDP (FOMDP) specification. This FOMDP can then be solved directly, resulting in a domain-independent solution whose space and time complexity either do not scale with domain size or can scale sublinearly in the domain size. However, such generality does not come without its own set of challenges, and the first purpose of this thesis is to explore exact and approximate solution techniques for practically solving FOMDPs. The second purpose of this thesis is to extend the FOMDP specification to succinctly capture factored actions and additive rewards while extending the exact and approximate solution techniques to directly exploit this structure. In addition, we provide a proof of correctness of the first-order symbolic dynamic programming approach w.r.t. its well-studied ground MDP ...