## Incremental Markov-Model Planning (1996)

Venue: Proceedings of TAI-96, Eighth IEEE International Conference on Tools with Artificial Intelligence

Citations: 13 (2 self)

### BibTeX

@INPROCEEDINGS{Washington96incrementalmarkov-model,
  author    = {Richard Washington},
  title     = {Incremental Markov-Model Planning},
  booktitle = {Proceedings of TAI-96, Eighth IEEE International Conference on Tools with Artificial Intelligence},
  year      = {1996},
  pages     = {41--47}
}

### Abstract

This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. The partially observable solution is incrementally constructed by considering increasing amounts of information from observations. The base solution directs the expansion of the plan by providing an evaluation function for the search fringe. We show that incremental observation moves from the base solution towards the complete solution, allowing the planner to model the uncertainty about action outcomes and observations that are present in real domains.

1 Introduction

For domains with uncertainty about action outcomes and incomplete information about the current state, partially observable Markov decision processes (POMDPs) are appropriate for capturing the dynamics of the domain. The difficulty is that complete, precise solutions to POMDPs are available only for very small problems. On the other hand, fully o...
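The core idea in the abstract — score belief states on the search fringe with an evaluation function derived from the fully observable base solution, and expand the most promising node first — can be sketched as follows. Everything here is a hypothetical toy (the 3-state values, the single-action successor function); it only illustrates the control flow, not the paper's actual planner.

```python
import heapq

# Hypothetical base solution: per-state values v(i) as a fully observable
# MDP solver (e.g., value iteration) might produce them.
V_FOMDP = [10.0, 4.0, 0.0]

def fomdp_estimate(belief):
    """Evaluation function: weighted sum of base-solution state values."""
    return sum(b * v for b, v in zip(belief, V_FOMDP))

def expand(belief):
    """Hypothetical successor generator: one action that cyclically
    shifts probability mass between the three states."""
    return [[belief[-1]] + belief[:-1]]

def best_first_expand(initial_belief, steps=3):
    """Pop the fringe node with the best FOMDP estimate, expand it,
    and push its successors -- the incremental expansion loop."""
    fringe = [(-fomdp_estimate(initial_belief), initial_belief)]
    visited = []
    for _ in range(steps):
        _, belief = heapq.heappop(fringe)
        visited.append(belief)
        for succ in expand(belief):
            heapq.heappush(fringe, (-fomdp_estimate(succ), succ))
    return visited
```

Starting from the belief `[1.0, 0.0, 0.0]`, the loop visits belief states in order of their base-solution estimate, which is the role the paper assigns to the FOMDP solution.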

### Citations

557 | Principles of artificial intelligence
- Nilsson
- 1982

Citation Context: ...maximizing the utility of the chosen plan of actions, we can invert the utilities (making them disutilities). This now presents the goal of minimizing the disutility of the plan, so the AO* algorithm [14] can be used to expand the tree incrementally. The AO* algorithm still involves a non-deterministic choice of which AND branch along the current best path to further expand (since that does not fall o...
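The context above recasts utility maximization as disutility minimization so that a min-cost search such as AO* applies. A minimal sketch of that inversion, with hypothetical outcome utilities:

```python
# Hypothetical utilities for three plan outcomes. Subtracting each from the
# maximum yields nonnegative "disutilities", turning the maximization
# problem into the minimization form expected by min-cost search (e.g., AO*).
utilities = {"outcome_a": 7.0, "outcome_b": 3.0, "outcome_c": 5.0}

u_max = max(utilities.values())
disutilities = {o: u_max - u for o, u in utilities.items()}

best_by_utility = max(utilities, key=utilities.get)
best_by_disutility = min(disutilities, key=disutilities.get)
assert best_by_utility == best_by_disutility  # same choice either way
```

Since the transformation is order-reversing, the arg-max under utility and the arg-min under disutility always coincide.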

275 | Acting optimally in partially observable stochastic domains
- Cassandra, Kaelbling, et al.
- 1994
Citation Context: ...ions to POMDPs that ignore information-gathering actions [17]. The POMDP literature, on the other hand, makes it obvious that precise solutions to POMDPs are available for only trivially small models [13, 3]. Available approximations are only powerful enough to scale up to problems with tens of states [18, 11, 16]. However, even simple tasks can require thousands of states (see [17]). The approach presen...

253 | Probabilistic robot navigation in partially observable environments
- Simmons, Koenig
- 1995
Citation Context: ...signed to do. Existing research on Markov approaches rely either on FOMDPs that assume perfect sensor knowledge [6, 2], or 0th-order approximations to POMDPs that ignore information-gathering actions [17]. The POMDP literature, on the other hand, makes it obvious that precise solutions to POMDPs are available for only trivially small models [13, 3]. Available approximations are only powerful enough to...

232 | Learning Policies for Partially Observable Environments: Scaling Up
- Littman, Cassandra, et al.
- 1995
Citation Context: ...any of the existing Markov approaches). Define a value function of a state distribution π as a weighted sum of the FOMDP value functions: V_F(π) = Σ_{i∈N} π_i v(i) (this is the approach suggested in [10] and [17]). Now, using the value functions for the finite-horizon case, where the value is computed for n steps rather than an infinite number, we have the following equations: V^0_P(π) = π · ...
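The weighted-sum value function in the context above is a one-liner over a belief vector. The sketch below uses hypothetical state values `v` and an assumed immediate-reward vector `r` for the finite-horizon base case (the original equation is truncated, so `r` is an assumption, chosen because the zero-step value of a belief is conventionally its expected immediate reward):

```python
# Hypothetical 2-state illustration of V_F(pi) = sum_i pi_i * v(i),
# where v comes from a fully observable (FOMDP) solution.
v = [8.0, 2.0]                        # assumed FOMDP state values

def V_F(pi):
    """Weighted sum of FOMDP values under belief pi."""
    return sum(p * vi for p, vi in zip(pi, v))

# Assumed base case of the finite-horizon value: V^0_P(pi) = pi . r,
# the expected immediate reward under belief pi (r is hypothetical).
r = [1.0, 0.0]

def V0_P(pi):
    return sum(p * ri for p, ri in zip(pi, r))
```

For a uniform belief `[0.5, 0.5]`, `V_F` returns the average of the two state values, 5.0.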

191 | A survey of partially observable Markov decision processes: Theory, models, and algorithms
- Monahan
- 1982

Citation Context: ...ions to POMDPs that ignore information-gathering actions [17]. The POMDP literature, on the other hand, makes it obvious that precise solutions to POMDPs are available for only trivially small models [13, 3]. Available approximations are only powerful enough to scale up to problems with tens of states [18, 11, 16]. However, even simple tasks can require thousands of states (see [17]). The approach presen...

175 | A survey of algorithmic methods for partially observable Markov decision processes
- Lovejoy
- 1991

Citation Context: ... makes it obvious that precise solutions to POMDPs are available for only trivially small models [13, 3]. Available approximations are only powerful enough to scale up to problems with tens of states [18, 11, 16]. However, even simple tasks can require thousands of states (see [17]). The approach presented here shows that an FOMDP solution can produce estimates that lead to an incremental POMDP solution. Firs...

161 | Planning under time constraints in stochastic domains
- Dean, Kaelbling, et al.
- 1995
Citation Context: ...f this uncertainty, and given the sensor information. This is exactly what POMDPs are designed to do. Existing research on Markov approaches rely either on FOMDPs that assume perfect sensor knowledge [6, 2], or 0th-order approximations to POMDPs that ignore information-gathering actions [17]. The POMDP literature, on the other hand, makes it obvious that precise solutions to POMDPs are available for onl...

155 | Gross Motion Planning - A Survey
- Hwang, Ahuja
- 1992

Citation Context: ...ation to infer its location in the world, and then uses that location to calculate the best route to take. Given perfect information, there is a wide array of approaches that work well for navigation [15, 9, 8], but in reality very few work well when the robot is unsure of its location. Those that do work often rely on specific features or models of the robot or environment that rarely generalize [12, 5]. I...

120 | Approximating optimal policies for partially observable stochastic domains
- Parr, Russell
- 1995
Citation Context: ... makes it obvious that precise solutions to POMDPs are available for only trivially small models [13, 3]. Available approximations are only powerful enough to scale up to problems with tens of states [18, 11, 16]. However, even simple tasks can require thousands of states (see [17]). The approach presented here shows that an FOMDP solution can produce estimates that lead to an incremental POMDP solution. Firs...

91 | A 'retraction' method for planning the motion of a disc
- O'Dunlaing, Yap
- 1985

Citation Context: ...ation to infer its location in the world, and then uses that location to calculate the best route to take. Given perfect information, there is a wide array of approaches that work well for navigation [15, 9, 8], but in reality very few work well when the robot is unsure of its location. Those that do work often rely on specific features or models of the robot or environment that rarely generalize [12, 5]. I...

69 | Using abstractions for decision-theoretic planning with time constraints
- Boutilier, Dearden
- 1994
Citation Context: ...f this uncertainty, and given the sensor information. This is exactly what POMDPs are designed to do. Existing research on Markov approaches rely either on FOMDPs that assume perfect sensor knowledge [6, 2], or 0th-order approximations to POMDPs that ignore information-gathering actions [17]. The POMDP literature, on the other hand, makes it obvious that precise solutions to POMDPs are available for onl...

26 | Partially observed Markov decision processes: A survey
- White
- 1991

Citation Context: ... makes it obvious that precise solutions to POMDPs are available for only trivially small models [13, 3]. Available approximations are only powerful enough to scale up to problems with tens of states [18, 11, 16]. However, even simple tasks can require thousands of states (see [17]). The approach presented here shows that an FOMDP solution can produce estimates that lead to an incremental POMDP solution. Firs...

24 | An efficient motion planning algorithm for a convex rigid polygonal object in 2-dimensional polygonal space
- Kedem, Sharir
- 1990

Citation Context: ...ation to infer its location in the world, and then uses that location to calculate the best route to take. Given perfect information, there is a wide array of approaches that work well for navigation [15, 9, 8], but in reality very few work well when the robot is unsure of its location. Those that do work often rely on specific features or models of the robot or environment that rarely generalize [12, 5]. I...

7 | A confidence set approach to mobile robot localization
- Mandelbaum, Mintz
- 1996

Citation Context: ...[15, 9, 8], but in reality very few work well when the robot is unsure of its location. Those that do work often rely on specific features or models of the robot or environment that rarely generalize [12, 5]. In fact, what the robot needs to do is calculate the optimal action to take in face of this uncertainty, and given the sensor information. This is exactly what POMDPs are designed to do. Existing re...

6 | The cost effectiveness of cervical cancer screening for the elderly. Ann Intern Med 17:520–527
- MC, Mandellblatt, et al.
- 1992

Citation Context: ...ility. These probabilities are reported in the medical literature (e.g., see [4]) and can be used to produce Markov models of disease progression, modeling such features as survival rate [4] and cost [7]. However, in these models the patient's state is modeled probabilistically based on reported occurrences in the general population or a specific patient population. The role of diagnostic tests for i...
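A Markov model of disease progression of the kind this context describes is just a transition matrix iterated over a state distribution. The sketch below is purely illustrative: the three states and every probability in `P` are made up for the example, not taken from the medical literature.

```python
# Hypothetical 3-state disease-progression chain: well, sick, dead.
# All transition probabilities are illustrative, not clinical data.
P = [
    [0.90, 0.08, 0.02],   # well -> well / sick / dead
    [0.00, 0.85, 0.15],   # sick -> well / sick / dead
    [0.00, 0.00, 1.00],   # dead is absorbing
]

def step(dist):
    """One model cycle: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]    # whole cohort starts in the "well" state
for _ in range(5):
    dist = step(dist)

# Survival rate after 5 cycles: probability of not being absorbed in "dead".
survival = 1.0 - dist[2]
```

Features such as survival rate (the mass outside the absorbing state) fall out of the iterated distribution; attaching a cost to each state-cycle would give cost estimates in the same way.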

5 | The Markov process in medical prognosis
- Beck, Pauker
- 1983

Citation Context: ... current state of the patient is absent from these models; when used, it appears in a decision tree, with the leaves being Markov models of the outcomes based on the results of a diagnostic procedure [1]. However, actual medical practice interleaves diagnostic and therapeutic actions so that the most appropriate procedure is done at each time (given the available knowledge). Given two possible condit...

2 | A Markov model of the natural history of prostate cancer
- Cowen, Chartrand, et al.
- 1994

Citation Context: ... as a Markov process, with medical procedures and natural processes moving the patient from state to state with some probability. These probabilities are reported in the medical literature (e.g., see [4]) and can be used to produce Markov models of disease progression, modeling such features as survival rate [4] and cost [7]. However, in these models the patient's state is modeled probabilistically b...

2 | Sensor-based localization for wheeled mobile robots
- Curran, Kyriakopolous
- 1993

Citation Context: ...[15, 9, 8], but in reality very few work well when the robot is unsure of its location. Those that do work often rely on specific features or models of the robot or environment that rarely generalize [12, 5]. In fact, what the robot needs to do is calculate the optimal action to take in face of this uncertainty, and given the sensor information. This is exactly what POMDPs are designed to do. Existing re...