Results 1–9 of 9
Motion Planning under Uncertainty using Iterative Local Optimization in Belief Space
2012
"... We present a new approach to motion planning under sensing and motion uncertainty by computing a locally optimal solution to a continuous partially observable Markov decision process (POMDP). Our approach represent beliefs (the distributions of the robot’s state estimate) by Gaussian distributions a ..."
Abstract

Cited by 21 (7 self)
 Add to MetaCart
We present a new approach to motion planning under sensing and motion uncertainty by computing a locally optimal solution to a continuous partially observable Markov decision process (POMDP). Our approach represents beliefs (the distributions of the robot's state estimate) by Gaussian distributions and is applicable to robot systems with nonlinear dynamics and observation models. The method follows the general POMDP solution framework in which we approximate the belief dynamics using an extended Kalman filter and represent the value function by a quadratic function that is valid in the vicinity of a nominal trajectory through belief space. Using a belief space variant of iterative LQG (iLQG), our approach iterates with second-order convergence towards a linear control policy over the belief space that is locally optimal with respect to a user-defined cost function. Unlike previous work, our approach does not assume maximum-likelihood observations, does not assume fixed estimator or control gains, takes into account obstacles in the environment, and does not require discretization of the state and action spaces. The running time of the algorithm is polynomial (O(n^6)) in the dimension n of the state space. We demonstrate the potential of our approach in simulation for holonomic and nonholonomic robots maneuvering through environments with obstacles, with noisy and partial sensing and with nonlinear dynamics and observation models.
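The belief dynamics this abstract describes can be illustrated with a single EKF-style update step: a Gaussian belief (mean, covariance) is propagated through nonlinear dynamics and a nonlinear observation model. The sketch below is illustrative only, assuming constant process/observation noise covariances `M` and `N` and finite-difference Jacobians; the function names are hypothetical, not the paper's implementation:

```python
import numpy as np

def ekf_belief_step(mean, cov, u, f, h, M, N, eps=1e-5):
    """One EKF belief-dynamics step: propagate a Gaussian belief (mean, cov)
    through nonlinear dynamics f(x, u) and observation model h(x).
    M and N are process- and observation-noise covariances (assumed constant)."""
    n = mean.size
    # Numerical Jacobian of f around the current mean (central differences);
    # column i is the partial derivative of f with respect to state component i.
    A = np.column_stack([(f(mean + eps * e, u) - f(mean - eps * e, u)) / (2 * eps)
                         for e in np.eye(n)])
    pred_mean = f(mean, u)
    pred_cov = A @ cov @ A.T + M
    # Numerical Jacobian of the observation model at the predicted mean.
    H = np.column_stack([(h(pred_mean + eps * e) - h(pred_mean - eps * e)) / (2 * eps)
                         for e in np.eye(n)])
    # Kalman gain and covariance update for the measurement step.
    K = pred_cov @ H.T @ np.linalg.inv(H @ pred_cov @ H.T + N)
    new_cov = (np.eye(n) - K @ H) @ pred_cov
    return pred_mean, new_cov
```

For a linear system this reduces to the standard Kalman filter covariance recursion, which is a quick sanity check on the sketch.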
FIRM: Sampling-based Feedback Motion Planning Under Motion Uncertainty and Imperfect Measurements
"... In this paper we present FIRM (Feedbackbased Information RoadMap), a multiquery approach for planning under uncertainty, that is a beliefspace variant of Probabilistic Roadmap Methods (PRMs). The crucial feature of FIRM is that the costs associated with the edges are independent of each other, an ..."
Abstract

Cited by 11 (5 self)
 Add to MetaCart
In this paper we present FIRM (Feedback-based Information RoadMap), a multi-query approach for planning under uncertainty that is a belief-space variant of Probabilistic Roadmap Methods (PRMs). The crucial feature of FIRM is that the costs associated with the edges are independent of each other, and in this sense it is the first method that generates a graph in belief space that preserves the optimal substructure property. From a practical point of view, FIRM is a robust and reliable planning framework. It is robust since the solution is a feedback and there is no need for expensive replanning. It is reliable because accurate collision probabilities can be computed along the edges. In addition, FIRM is a scalable framework, where the complexity of planning is a constant multiplier of the complexity of the underlying PRM framework. As a concrete instantiation of FIRM, we adopt Stationary Linear Quadratic Gaussian (SLQG) controllers as belief stabilizers and introduce the so-called SLQG-FIRM. In SLQG-FIRM we focus on kinematic systems and then extend to dynamical systems by sampling in the equilibrium space. We investigate the performance of SLQG-FIRM in different scenarios.
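The optimal-substructure property emphasized above is what lets standard shortest-path search run directly on a belief roadmap. A minimal sketch, assuming the independent edge costs have already been computed; the graph encoding and the `firm_query` name are hypothetical:

```python
import heapq

def firm_query(edges, start, goal):
    """Dijkstra over a FIRM-style roadmap. Because each edge terminates at a
    belief-stabilizing node, edge costs are independent and optimal
    substructure holds, so plain shortest-path search is sound.
    `edges` maps node -> list of (successor, cost) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == goal:
            break
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for w, c in edges.get(v, []):
            nd = d + c
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                prev[w] = v
                heapq.heappush(pq, (nd, w))
    # Reconstruct the node sequence; executing the plan means concatenating
    # the feedback controllers attached to the traversed edges.
    path, v = [goal], goal
    while v != start:
        v = prev[v]
        path.append(v)
    return path[::-1], dist[goal]
```

In the actual framework the returned node sequence indexes local feedback controllers rather than open-loop motions, which is why no replanning is needed after deviations.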
Solving Continuous POMDPs: Value Iteration with Incremental Learning of an Efficient Space Representation
Proc. Int. Conf. Machine Learning, 2013
"... All intext references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately. ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
All intext references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately.
Scaling up Gaussian Belief Space Planning through Covariance-Free Trajectory Optimization and Automatic Differentiation
"... Abstract. Belief space planning provides a principled framework to compute motion plans that explicitly gather information from sensing, as necessary, to reduce uncertainty about the robot and the environment. We consider the problem of planning in Gaussian belief spaces, which are parameterized in ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
Belief space planning provides a principled framework to compute motion plans that explicitly gather information from sensing, as necessary, to reduce uncertainty about the robot and the environment. We consider the problem of planning in Gaussian belief spaces, which are parameterized in terms of mean states and covariances describing the uncertainty. In this work, we show that it is possible to compute locally optimal plans without including the covariance in direct trajectory optimization formulations of the problem. As a result, the dimensionality of the problem scales linearly in the state dimension instead of quadratically, as would be the case if we were to include the covariance in the optimization. We accomplish this by taking advantage of recent advances in numerical optimal control that include automatic differentiation and state-of-the-art convex solvers. We show that the running time of each optimization step of the covariance-free trajectory optimization is O(n^3 T), where n is the dimension of the state space and T is the number of time steps in the trajectory. We present experiments in simulation on a variety of planning problems under uncertainty including manipulator planning, estimating unknown model parameters for dynamical systems, and active simultaneous localization and mapping (active SLAM). Our experiments suggest that our method can solve planning problems in 100-dimensional state spaces and obtain computational speedups of 400× over related trajectory optimization methods.
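The covariance-free idea can be illustrated by a trajectory cost whose decision variables are only the T mean states (n·T numbers): the covariance is recomputed inside the cost at every evaluation, so a differentiation tool sees it implicitly rather than as O(n²·T) extra variables. A sketch under simplified assumptions; the `propagate_cov` callback stands in for an EKF-style covariance update, and all names are illustrative:

```python
import numpy as np

def traj_cost(means_flat, n, T, propagate_cov, Sigma0, Q):
    """Trajectory cost over mean states only. The covariance trajectory is
    reconstructed inside the cost from Sigma0 via propagate_cov, so the
    optimizer's decision vector has n*T entries, not n*T + n*n*T."""
    means = means_flat.reshape(T, n)
    Sigma = Sigma0
    cost = 0.0
    for t in range(T):
        # Covariance update may depend on the mean (e.g. state-dependent sensing).
        Sigma = propagate_cov(means[t], Sigma)
        # Penalize residual uncertainty plus a simple quadratic state cost.
        cost += np.trace(Q @ Sigma) + means[t] @ means[t]
    return cost
```

An automatic-differentiation tool applied to `traj_cost` then yields gradients through the covariance recursion for free, which is the mechanism the abstract's linear scaling relies on.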
Combining a POMDP Abstraction with Replanning to Solve Complex, Position-Dependent Sensing Tasks
"... The PartiallyObservable Markov Decision Process (POMDP) is a general framework to determine rewardmaximizing action policies under noisy action and sensing. However, determining an optimal policy for POMDPs is often intractable for robotic tasks due to the PSPACEcomplete nature of the computation ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
The Partially Observable Markov Decision Process (POMDP) is a general framework to determine reward-maximizing action policies under noisy action and sensing. However, determining an optimal policy for POMDPs is often intractable for robotic tasks due to the PSPACE-complete nature of the computation required. Several recent solvers have been introduced that expand the size of problems that can be considered. Although these POMDP solvers can respect complex motion constraints in theory, we show that the computational cost does not provide a benefit in the eventual online execution, compared to our alternative approach that relies on a policy that ignores some of the motion constraints. We advocate using the POMDP framework where it is critical – to find a policy that provides the optimal action given all past noisy sensor observations – while abstracting some of the motion constraints to reduce solution time. However, the actions of an abstract robot are generally not executable under its true motion constraints. The problem is addressed offline with a less-constrained POMDP, and navigation under the full system constraints is handled online with replanning. It is empirically demonstrated that the policy generated using this abstracted motion model is faster to compute and achieves similar or higher reward than directly addressing the motion constraints in the POMDP for the car-like robot used in our experiments.
An Online and Approximate Solver for POMDPs with Continuous Action Space
In IEEE International Conference on Robotics and Automation, 2015
"... AbstractFor agile, accurate autonomous robotics, it is desirable to plan motion in the presence of uncertainty. The Partially Observable Markov Decision Process (POMDP) provides a principled framework for this. Despite the tremendous advances of POMDPbased planning, most can only solve problems w ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
For agile, accurate autonomous robotics, it is desirable to plan motion in the presence of uncertainty. The Partially Observable Markov Decision Process (POMDP) provides a principled framework for this. Despite the tremendous advances of POMDP-based planning, most solvers can only handle problems with a small and discrete set of actions. This paper presents Generalized Pattern Search in Adaptive Belief Tree (GPS-ABT), an approximate and online POMDP solver for problems with continuous action spaces. Generalized Pattern Search (GPS) is used as a search strategy for action selection. Under certain conditions, GPS-ABT converges to the optimal solution in probability. Results on a box pushing and an extended Tag benchmark problem are promising.
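The action-selection strategy can be illustrated with a bare-bones Generalized Pattern Search: poll the axis directions around the incumbent action, accept any improving point, and shrink the step when the poll fails. This sketches the GPS idea only, not GPS-ABT's belief-tree search; the function name and parameters are illustrative:

```python
import numpy as np

def pattern_search(f, x0, step=0.5, tol=1e-4, max_iter=1000):
    """Minimal Generalized Pattern Search minimizing f over a continuous
    space. Polls the 2n points x +/- step*e_i; halves the step when no
    poll point improves on the incumbent."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            cand = x + step * d
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc  # accept the first improving poll point
                improved = True
                break
        if not improved:
            step *= 0.5  # refine the mesh around the incumbent
    return x, fx
```

Within a solver, `f` would be (the negation of) an estimated action value at a belief node, so the poll pattern replaces enumeration over a discretized action set.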
Safe Motion Planning for Imprecise Robotic Manipulators by Minimizing Probability of Collision
"... Abstract Robotic manipulators designed for home assistance and new surgical procedures often have significant uncertainty in their actuation due to compliance requirements, cost constraints, and size limits. We introduce a new integrated motion planning and control algorithm for robotic manipulato ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Robotic manipulators designed for home assistance and new surgical procedures often have significant uncertainty in their actuation due to compliance requirements, cost constraints, and size limits. We introduce a new integrated motion planning and control algorithm for robotic manipulators that makes safety a priority by explicitly considering the probability of unwanted collisions. We first present a fast method for estimating the probability of collision of a motion plan for a robotic manipulator under the assumptions of Gaussian motion and sensing uncertainty. Our approach quickly computes distances to obstacles in the workspace and appropriately transforms this information into the configuration space using a Newton method to estimate the most relevant collision points in configuration space. We then present a sampling-based motion planner, based on executing multiple independent rapidly-exploring random trees, that returns a plan that, under reasonable assumptions, asymptotically converges to a plan that minimizes the estimated collision probability. We demonstrate the speed and safety of our plans in simulation for (1) a 3D manipulator with 6 DOF, and (2) a concentric tube robot, a tentacle-like robot designed for surgical applications.
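For intuition, the quantity being minimized can be estimated by plain Monte Carlo under the same Gaussian-uncertainty assumption; the paper computes it far faster via workspace distances mapped into configuration space with a Newton method, so the sampler below, with its hypothetical `clearance_fn`, is only a slow reference estimate:

```python
import numpy as np

def collision_probability(mean, cov, clearance_fn, n_samples=20000, seed=0):
    """Monte Carlo estimate of collision probability for a configuration with
    Gaussian uncertainty. `clearance_fn(q)` is a user-supplied signed clearance
    (negative when configuration q is in collision); this brute-force check is
    what the paper's fast distance-based method approximates."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(np.asarray(mean), np.asarray(cov),
                                      size=n_samples)
    in_collision = np.array([clearance_fn(q) < 0.0 for q in samples])
    return float(in_collision.mean())
```

A planner can call such an estimator on candidate plans and keep the one with the lowest value, which is the objective the multiple-RRT planner above converges toward.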
ESTIMATION AND CONTROL OF ROBOTIC SYSTEMS UNDER UNCERTAINTY
2014
"... This dissertation addresses the problem of stochastic optimal control with imperfect measurements. The main application of interest is robot motion planning under uncertainty. In the presence of process uncertainty and imperfect measurements, the system’s state is unknown and a state estimation mod ..."
Abstract
 Add to MetaCart
(Show Context)
This dissertation addresses the problem of stochastic optimal control with imperfect measurements. The main application of interest is robot motion planning under uncertainty. In the presence of process uncertainty and imperfect measurements, the system's state is unknown and a state estimation module is required to provide the information-state (belief), which is the probability distribution function (pdf) over all possible states. Accordingly, successful robot operation in such a setting requires reasoning about the evolution of the information-state and its quality in future time steps. In its most general form, this is modeled as a Partially Observable Markov Decision Process (POMDP) problem. Unfortunately, however, the exact solution of this problem over continuous spaces in the presence of constraints is computationally intractable. Correspondingly, state-of-the-art methods that provide approximate solutions are limited to problems with short horizons and small domains. The main challenge for these problems is the exponential growth of the search tree in the information space, as well as the dependency of the entire search tree on the initial belief.
Extending the Applicability of POMDP Solutions to Robotic Tasks
"... (POMDPs) are used in many robotic task classes from soccer to household chores. Determining an approximately optimal action policy for POMDPs is PSPACEcomplete, and the exponential growth of computation time prohibits solving large tasks. This paper describes two techniques to extend the range of r ..."
Abstract
 Add to MetaCart
(Show Context)
Partially Observable Markov Decision Processes (POMDPs) are used in many robotic task classes from soccer to household chores. Determining an approximately optimal action policy for POMDPs is PSPACE-complete, and the exponential growth of computation time prohibits solving large tasks. This paper describes two techniques to extend the range of robotic tasks that can be solved using a POMDP. Our first technique reduces the motion constraints of a robot, and then uses state-of-the-art robotic motion planning techniques to respect the true motion constraints at runtime. We then propose a novel task decomposition that can be applied to some indoor robotic tasks. This decomposition transforms a long time horizon task into a set of shorter tasks. We empirically demonstrate the performance gain provided by these two techniques through simulated execution in a variety of environments. Comparing a direct formulation of a POMDP to solving our proposed reductions, we conclude that the techniques proposed in this paper can provide significant enhancement to current POMDP solution techniques, extending the POMDP instances that can be solved to include large, continuous-state robotic tasks.