Monte Carlo Localization: Efficient Position Estimation for Mobile Robots
 In Proc. of the National Conference on Artificial Intelligence (AAAI)
, 1999
Abstract

Cited by 277 (51 self)
This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation "where needed." The number of samples is adapted online, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement...
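The sampling-based update this abstract describes can be sketched as a minimal particle filter for a hypothetical 1-D robot. The pose space, noise levels, and direct-position sensor here are illustrative assumptions, not the paper's setup:

```python
import math
import random

def mcl_step(particles, control, measurement, motion_noise=0.1, sensor_noise=0.2):
    """One Monte Carlo Localization update for a hypothetical 1-D robot.

    particles   : list of hypothesized positions
    control     : commanded displacement
    measurement : (assumed) direct, noisy observation of the position
    """
    # 1. Prediction: sample from the motion model p(x' | x, u).
    moved = [x + control + random.gauss(0.0, motion_noise) for x in particles]
    # 2. Correction: weight each sample by the measurement likelihood p(z | x').
    weights = [math.exp(-((measurement - x) ** 2) / (2 * sensor_noise ** 2)) + 1e-300
               for x in moved]
    # 3. Resampling: draw a new sample set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(particles))
```

Starting from particles spread uniformly over the space and iterating this step, the sample set concentrates around the true pose; adapting `len(particles)` online is the paper's additional contribution, not shown here.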
Tractable inference for complex stochastic processes
 In Proc. UAI
, 1998
Abstract

Cited by 263 (13 self)
The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state—a probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.
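The compact-approximation idea can be illustrated on a toy two-variable process: after every propagation step, the joint belief is projected onto the product of its marginals. This is a sketch in the spirit of the scheme described above; the transition model and its numbers are made up:

```python
# 2-bit process, states ordered (x1, x2): 00, 01, 10, 11.
def transition_matrix(eps=0.2, coupling=0.05):
    """Weakly coupled transition: each bit mostly persists and flips with a
    probability that depends slightly on the other bit (illustrative numbers)."""
    T = [[0.0] * 4 for _ in range(4)]
    for s in range(4):
        x1, x2 = s >> 1, s & 1
        for t in range(4):
            y1, y2 = t >> 1, t & 1
            p1 = (1 - eps - coupling * x2) if y1 == x1 else (eps + coupling * x2)
            p2 = (1 - eps - coupling * x1) if y2 == x2 else (eps + coupling * x1)
            T[s][t] = p1 * p2
    return T

def propagate(belief, T):
    return [sum(belief[s] * T[s][t] for s in range(4)) for t in range(4)]

def project(belief):
    """Replace the joint belief over (x1, x2) by the product of its
    marginals, keeping the representation compact."""
    m1 = belief[2] + belief[3]   # P(x1 = 1)
    m2 = belief[1] + belief[3]   # P(x2 = 1)
    return [(1 - m1) * (1 - m2), (1 - m1) * m2, m1 * (1 - m2), m1 * m2]

T = transition_matrix()
exact = [1.0, 0.0, 0.0, 0.0]   # start certain in state 00
approx = exact[:]
for _ in range(50):
    exact = propagate(exact, T)
    approx = project(propagate(approx, T))   # propagate, then project

# The projection error stays bounded: the mixing process contracts old
# errors faster than new per-step errors are introduced.
error = sum(abs(e - a) for e, a in zip(exact, approx))
```

Despite fifty repeated projections, `error` remains small, which is the contraction phenomenon the abstract refers to.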
An Online Mapping Algorithm for Teams of Mobile Robots
 International Journal of Robotics Research
, 2001
Abstract

Cited by 190 (14 self)
We propose a new probabilistic algorithm for online mapping of unknown environments with teams of robots. At the core of the algorithm is a technique that combines fast maximum likelihood map growing with a Monte Carlo localizer that uses particle representations. The combination of both yields an online algorithm that can cope with the large odometric errors typically found when mapping an environment with cycles. The algorithm can be implemented in a distributed fashion on multiple robot platforms, enabling a team of robots to cooperatively generate a single map of their environment. Finally, an extension is described for acquiring three-dimensional maps, which capture the structure and visual appearance of indoor environments in 3D.
A Probabilistic Approach to Collaborative Multi-Robot Localization
, 2000
Abstract

Cited by 177 (18 self)
This paper presents a statistical algorithm for collaborative mobile robot localization. Our approach uses a sample-based version of Markov localization, capable of localizing mobile robots in an anytime fashion. When teams of robots localize themselves in the same environment, probabilistic methods are employed to synchronize each robot's belief whenever one robot detects another. As a result, the robots localize themselves faster, maintain higher accuracy, and amortize high-cost sensors across multiple robot platforms. The technique has been implemented and tested using two mobile robots equipped with cameras and laser rangefinders for detecting other robots. The results, obtained with the real robots and in a series of simulation runs, illustrate drastic improvements in localization speed and accuracy when compared to conventional single-robot localization. A further experiment demonstrates that under certain conditions, successful localization is only possible if teams of heterogeneous robots collaborate during localization.
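The detection-driven belief synchronization described above can be sketched on a hypothetical 1-D grid world: when robot A detects robot B at a relative offset, B's belief is multiplied by the distribution over B's position induced by A's belief and the detection model. The grid, offsets, and Gaussian detection model are all illustrative assumptions:

```python
import math

def detection_update(belief_b, belief_a, d, sigma=0.5):
    """Fold robot A's belief into robot B's after A detects B at (noisy)
    relative grid offset d. Both beliefs are lists over grid cells."""
    n = len(belief_b)
    # Distribution over B's position induced by A's belief and the detection.
    induced = [sum(pa * math.exp(-((j - (i + d)) ** 2) / (2 * sigma ** 2))
                   for i, pa in enumerate(belief_a))
               for j in range(n)]
    post = [b * q for b, q in zip(belief_b, induced)]
    z = sum(post)
    return [p / z for p in post]
```

If A is well localized at cell 3 and reports B two cells to its right, B's previously uniform belief collapses around cell 5, which is the speed-up mechanism the abstract describes.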
Bayesian Map Learning in Dynamic Environments
 In Neural Info. Proc. Systems (NIPS)
Abstract

Cited by 134 (3 self)
We show how map learning can be formulated as inference in a graphical model, which allows us to handle changing environments in a natural manner. We describe several different approximation schemes for the problem, and illustrate some results on a simulated gridworld with doors that can open and close. We close by briefly discussing how to learn more general models of (partially observed) environments, which can contain a variable number of objects with changing internal state.

1 Introduction
Mobile robots need to navigate in dynamic environments: on a short time scale, obstacles, such as people, can appear and disappear, and on longer time scales, structural changes, such as doors opening and closing, can occur. In this paper, we consider how to create models of dynamic environments. In particular, we are interested in modeling the location of objects, which we can represent using a map. This enables the robot to perform path planning, etc. We propose a Bayesian approach in ...
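The dynamic-map idea can be reduced to its smallest case: an HMM filter tracking whether a single door is open, with a persistence prior and a noisy sensor. The probabilities below are illustrative assumptions, not the paper's numbers:

```python
def door_filter(p_open, z, p_stay=0.9, p_hit=0.8):
    """One HMM filtering step for a door that may open or close over time.

    p_open : prior probability the door is open
    z      : observation, True if sensed open (correct w.p. p_hit)
    p_stay : probability the door keeps its state between time steps
    """
    # Predict: the door may have changed state since the last step.
    pred = p_stay * p_open + (1 - p_stay) * (1 - p_open)
    # Update with the (noisy) observation via Bayes' rule.
    like_open = p_hit if z else 1 - p_hit
    like_closed = 1 - p_hit if z else p_hit
    num = like_open * pred
    return num / (num + like_closed * (1 - pred))
```

Because `p_stay < 1`, confidence saturates rather than growing without bound, so the belief can track a door that later changes state, which is exactly what a static map cannot do.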
Adapting the Sample Size in Particle Filters Through KLD-Sampling
 International Journal of Robotics Research
, 2003
Abstract

Cited by 97 (8 self)
Over the last few years, particle filters have been applied with great success to a variety of state estimation problems. In this paper, we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process.
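KLD-sampling chooses the number of particles so that, with probability 1 − δ, the KL divergence between the sample-based approximation and the true posterior stays below a bound ε; the sample size follows from a Wilson-Hilferty approximation of the chi-square quantile. A sketch of that bound (the parameter values are illustrative):

```python
import math

def kld_sample_size(k, epsilon=0.05, z=2.326):
    """Particles needed for a KL error below epsilon with confidence 1 - delta,
    where k is the number of histogram bins with support and z is the
    1 - delta quantile of the standard normal (z = 2.326 for delta = 0.01).
    Uses the Wilson-Hilferty approximation of the chi-square quantile."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z) ** 3))
```

The required sample size grows with the number of occupied bins k, so a well-localized robot (few bins) runs with far fewer particles than one that is globally uncertain, which is the efficiency gain the abstract claims.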
Particle Filters for Mobile Robot Localization
, 2001
Abstract

Cited by 93 (18 self)
This article describes a family of methods known as Monte Carlo localization (MCL) (Dellaert et al. 1999b, Fox et al. 1999b). The MCL algorithm is a particle filter combined with probabilistic models of robot perception and motion. Building on this, we describe a variation of MCL which uses a different proposal distribution (a mixture distribution) that facilitates fast recovery from global localization failures. As we will see, this proposal distribution has a range of advantages over that used in standard MCL, but it comes at a price: it is more difficult to implement, and it requires an algorithm for sampling poses from sensor measurements, which can be difficult to obtain. Finally, we present an extension of MCL to cooperative multi-robot localization of robots that can perceive each other during localization. All these approaches have been tested thoroughly in practice. Experimental results are provided to demonstrate their relative strengths and weaknesses in practical robot applications.
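The mixture proposal can be sketched for a hypothetical 1-D robot whose sensor observes its pose directly, which makes sampling poses from a measurement trivial. A fraction `phi` of the new samples is drawn from the sensor model and weighted by the motion prior; the rest follow standard MCL. All models and parameters here are assumptions for illustration:

```python
import math
import random

def mixture_mcl_step(particles, control, z, phi=0.1,
                     motion_noise=0.1, sensor_noise=0.3):
    """One MCL update with a mixture proposal (illustrative 1-D sketch)."""
    n = len(particles)
    n_dual = int(phi * n)
    # Regular MCL part: propose from the motion model, weight by the sensor.
    moved = [x + control + random.gauss(0.0, motion_noise) for x in particles]
    w = [math.exp(-((z - x) ** 2) / (2 * sensor_noise ** 2)) + 1e-300
         for x in moved]   # tiny floor avoids an all-zero weight vector
    regular = random.choices(moved, weights=w, k=n - n_dual)
    # Dual part: propose from the sensor model, weight by the motion prior.
    dual = [z + random.gauss(0.0, sensor_noise) for _ in range(n_dual)]
    wd = [sum(math.exp(-((x - (p + control)) ** 2) / (2 * motion_noise ** 2))
              for p in particles) + 1e-300
          for x in dual]
    return regular + random.choices(dual, weights=wd, k=n_dual)
```

Because a fixed fraction of samples is always drawn near poses consistent with the current measurement, the filter recovers from a "kidnapped robot" event that would starve a plain motion-model proposal of useful samples.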
Model-based Bayesian Exploration
 In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
, 1999
Abstract

Cited by 89 (0 self)
Reinforcement learning systems are often concerned with balancing exploration of untested actions against exploitation of actions that are known to be good. The benefit of exploration can be estimated using the classical notion of Value of Information: the expected improvement in future decision quality arising from the information acquired by exploration. Estimating this quantity requires an assessment of the agent's uncertainty about its current value estimates for states. In this paper we investigate ways to represent and reason about this uncertainty in algorithms where the system attempts to learn a model of its environment. We explicitly represent uncertainty about the parameters of the model and build probability distributions over Q-values based on these. These distributions are used to compute a myopic approximation to the value of information for each action and hence to select the action that best balances exploration and exploitation.
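Given samples from the posterior over each action's Q-value, the myopic value of information can be estimated by asking how much learning an action's true value would change the current decision. This is a sketch in the spirit of the approach, not the paper's exact estimator:

```python
from statistics import mean

def myopic_vpi(q_samples):
    """Myopic value-of-perfect-information estimate per action.

    q_samples maps each action to a list of Q-values drawn from its
    posterior (at least two actions are assumed)."""
    means = {a: mean(s) for a, s in q_samples.items()}
    best = max(means, key=means.get)
    runner_up = max(m for a, m in means.items() if a != best)
    vpi = {}
    for a, samples in q_samples.items():
        if a == best:
            # Learning Q(best) helps only if it turns out worse than the runner-up.
            vpi[a] = mean(max(runner_up - q, 0.0) for q in samples)
        else:
            # Learning Q(a) helps only if it turns out better than the current best.
            vpi[a] = mean(max(q - means[best], 0.0) for q in samples)
    return vpi
```

An action with a slightly lower mean but much higher posterior spread earns a larger VPI than a certain one, so the agent is steered toward informative exploration rather than pure greed.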
A general algorithm for approximate inference and its application to hybrid Bayes nets
 In Uncertainty in Artificial Intelligence (UAI '98)
, 1998
Abstract

Cited by 73 (2 self)
The clique tree algorithm is the standard method for doing inference in Bayesian networks. It works by manipulating clique potentials — distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This paper presents a new unified approach that combines approximate inference and the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach. The algorithm essentially does clique tree propagation, using approximate inference to estimate the densities in each clique. In many settings, the computation of the approximate clique potential can be done easily using statistical importance sampling. Iterations are used to gradually improve the quality of the estimation.
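The importance-sampling step mentioned above can be illustrated in its simplest form: estimating an intractable marginal as an expectation under a tractable proposal. The toy model below (x ~ N(0,1), y | x ~ N(x,1), prior used as proposal) is made up for illustration:

```python
import math
import random

def estimate_evidence(y, n=100_000, seed=0):
    """Importance-sampling estimate of the evidence p(y) in the toy model
    x ~ N(0, 1), y | x ~ N(x, 1), using the prior as the proposal:
    p(y) = E_{x ~ N(0,1)}[ N(y; x, 1) ]."""
    rng = random.Random(seed)

    def gauss_pdf(v, mu):
        return math.exp(-0.5 * (v - mu) ** 2) / math.sqrt(2.0 * math.pi)

    # Average the likelihood over samples drawn from the proposal.
    return sum(gauss_pdf(y, rng.gauss(0.0, 1.0)) for _ in range(n)) / n

# Exact evidence for comparison: p(y) = N(y; 0, sqrt(2)) at y = 1.
exact = math.exp(-(1.0 ** 2) / 4.0) / math.sqrt(4.0 * math.pi)
```

Increasing `n` tightens the estimate, which parallels the paper's iterations that gradually improve the approximate clique potentials.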