Results 1 - 7 of 7
Urban localization with camera and inertial measurement unit
- in IEEE Intelligent Vehicles Symposium, Gold Coast, 2013
Cited by 8 (3 self)
Abstract—Next generation driver assistance systems require precise self localization. Common approaches using global navigation satellite systems (GNSSs) suffer from multipath and shadowing effects, often rendering this solution insufficient. In urban environments this problem becomes even more pronounced. Herein we present a system for six degrees of freedom (DOF) ego localization using a mono camera and an inertial measurement unit (IMU). The camera image is processed to yield a rough position estimate using a previously computed landmark map. Thereafter, IMU measurements are fused with the position estimate for a refined localization update. Moreover, we present the mapping pipeline required for the creation of landmark maps. Finally, we present experiments on real-world data. The accuracy of the system is evaluated by computing two independent ego positions of the same trajectory from two distinct cameras and investigating these estimates for consistency. A mean localization accuracy of 10 cm is achieved on a 10 km sequence in an inner-city scenario.
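The fusion step this abstract describes, blending an IMU-propagated position prediction with a rough camera fix from the landmark map, can be sketched as a single Kalman-style correction. This is a minimal illustration, not the paper's filter: the real system works on a full 6-DOF state with an IMU propagation model, and all numbers and diagonal covariances below are hypothetical.

```python
import numpy as np

def fuse(x_pred, P_pred, z_meas, R_meas):
    """One Kalman correction: blend a predicted position (with
    covariance P_pred) and a measured position (with covariance R_meas)."""
    K = P_pred @ np.linalg.inv(P_pred + R_meas)   # Kalman gain
    x = x_pred + K @ (z_meas - x_pred)            # corrected position
    P = (np.eye(len(x_pred)) - K) @ P_pred        # shrunk covariance
    return x, P

# Hypothetical values: IMU dead-reckoned 3-D position and covariance,
# and a rough camera fix obtained from the landmark map.
x_pred = np.array([10.0, 5.0, 0.0])
P_pred = np.diag([0.5, 0.5, 0.5])
z_cam  = np.array([10.2, 4.9, 0.1])
R_cam  = np.diag([0.1, 0.1, 0.1])
x, P = fuse(x_pred, P_pred, z_cam, R_cam)
```

Because the camera fix is modeled as five times more certain than the prediction, the fused position lands five-sixths of the way toward the measurement and the covariance shrinks accordingly.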
Learning Visual Feature Descriptors for Dynamic Lighting Conditions
Cited by 8 (3 self)
Abstract—In many robotic applications, especially long-term outdoor deployments, the success or failure of feature-based image registration is largely determined by changes in lighting. This paper reports on a method to learn visual feature point descriptors that are more robust to changes in scene lighting than standard hand-designed features. We demonstrate that, by tracking feature points in time-lapse videos, one can easily generate training data that captures how the visual appearance of interest points changes with lighting over time. This training data is used to learn feature descriptors that map the image patches associated with feature points to a lower-dimensional feature space where Euclidean distance provides good discrimination between matching and non-matching image patches. Results showing that the learned descriptors increase the ability to register images under varying lighting conditions are presented for a challenging indoor-outdoor dataset spanning 27 mapping sessions over a period of 15 months, containing a wide variety of lighting changes.
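The matching criterion in this abstract, nearest neighbour by Euclidean distance in a learned low-dimensional descriptor space, can be sketched as below. The linear projection `W` is a random stand-in for the learned mapping (the paper learns it from time-lapse tracks), and the patch and descriptor dimensions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the learned mapping: a linear projection from raw
# 64-dim patch vectors down to an 8-dim descriptor space.
W = rng.standard_normal((8, 64))

def describe(patch_vec):
    """Project a flattened patch and L2-normalize the result; unit norm
    makes the descriptor invariant to global intensity scaling."""
    d = W @ patch_vec
    return d / np.linalg.norm(d)

def match(query_desc, db_descs):
    """Nearest neighbour in descriptor space by Euclidean distance."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```

For example, a patch whose brightness is globally rescaled (`1.5 * p`) maps to the same unit descriptor as the original, so `match` pairs them at near-zero distance while random distractor patches stay far away.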
Scene Signatures: Localised and Point-less Features for Localisation
Cited by 6 (0 self)
Abstract—This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach, since matching under dramatic appearance changes is brittle and hard. Point-feature detectors are rigid procedures which pass over an image examining small, low-level structure such as corners or blobs. They apply the same criteria to all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors, and we show that this yields vastly superior robustness in exchange for a reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21 km of data collected over a period of 3 months, that our system is capable of producing metric estimates from night-to-day or summer-to-winter conditions.
Vision Only Localization
Cited by 4 (0 self)
Abstract—Autonomous and intelligent vehicles will undoubtedly depend on an accurate ego localization solution. Global navigation satellite systems (GNSS) suffer from multipath propagation, rendering this solution insufficient. Herein we present a real-time system for six degrees of freedom (DOF) ego localization that uses only a single monocular camera. The camera image is harnessed to yield an ego pose relative to a previously computed visual map. We describe a process to automatically extract the ingredients of this map from stereoscopic image sequences. These include a mapping trajectory relative to the first pose, global scene signatures and local landmark descriptors. The localization algorithm then consists of a topological localization step that completely obviates the need for any global positioning sensors like GNSS, followed by a metric refinement step that recovers an accurate metric pose. Metric localization recovers the ego pose in a factor graph optimization process based on local landmarks. We demonstrate centimeter-level accuracy by a set of experiments in an urban environment. To this end, two localization estimates are computed for two independent cameras mounted on the same vehicle. These two independent trajectories are thereafter compared for consistency. Finally, we present qualitative experiments of an augmented reality (AR) system that depends on the aforementioned localization solution. Several screenshots of the AR system are shown, confirming centimeter-level accuracy and sub-degree angular precision. Index Terms—camera, localization, GPS, landmark, bundle adjustment, nonlinear least squares, SLAM
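The metric refinement step, a nonlinear least-squares fit of the ego pose against local landmarks, can be illustrated in 2-D with a small Gauss-Newton loop. This is a didactic stand-in for the paper's factor-graph optimization: the state here is only a planar pose (x, y, theta), residuals compare predicted and observed landmark positions in the vehicle frame, the Jacobian is numerical for brevity, and the landmark data is synthetic.

```python
import numpy as np

def predict(pose, landmarks):
    """Map world landmarks into the vehicle frame for pose (x, y, theta)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    return (landmarks - np.array([x, y])) @ R   # row-wise R.T @ (L - t)

def refine_pose(pose, landmarks, obs, iters=10):
    """Gauss-Newton on the stacked landmark residuals."""
    pose = np.asarray(pose, float)
    eps = 1e-6
    for _ in range(iters):
        r = (predict(pose, landmarks) - obs).ravel()
        J = np.empty((r.size, 3))                # numerical Jacobian
        for j in range(3):
            d = np.zeros(3)
            d[j] = eps
            J[:, j] = ((predict(pose + d, landmarks) - obs).ravel() - r) / eps
        pose -= np.linalg.solve(J.T @ J, J.T @ r)   # normal equations step
    return pose

# Synthetic check: observations generated from a known pose, then
# recovered from a zero initial guess.
rng = np.random.default_rng(2)
landmarks = rng.random((6, 2)) * 10.0
true_pose = np.array([1.0, 2.0, 0.3])
obs = predict(true_pose, landmarks)
est = refine_pose([0.0, 0.0, 0.0], landmarks, obs)
```

On this zero-residual problem Gauss-Newton converges in a handful of iterations; a production system would instead use analytic Jacobians inside a factor-graph solver over the full 6-DOF pose.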
Bidirectional Loop Closure Detection on Panoramas for Visual Navigation
Cited by 2 (2 self)
Abstract—Visual loop closure detection plays a key role in navigation systems for intelligent vehicles. Nowadays, state-of-the-art algorithms are focused on unidirectional loop closures, but there are situations where they are not sufficient for identifying previously visited places. Therefore, detecting bidirectional loop closures, where a place is revisited in a different direction, provides more robust visual navigation. We propose a novel approach for identifying bidirectional loop closures on panoramic image sequences. Our proposal combines global binary descriptors and a matching strategy based on cross-correlation of sub-panoramas, which are defined as the different parts of a panorama. A set of experiments considering several binary descriptors (ORB, BRISK, FREAK, LDB) is provided, where LDB excels as the most suitable. The proposed matching yields reliable bidirectional loop closure detection, a problem not efficiently solved in previous research. Our method is successfully validated and compared
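The sub-panorama cross-correlation idea can be sketched as follows: split each panorama into k sub-panoramas, reduce each to a binary descriptor (here a crude grid-threshold stand-in for LDB, the descriptor the paper favours), and score every circular alignment of the query sequence against the stored sequence. A best alignment near k/2 corresponds to revisiting the place in the opposite direction, i.e. a roughly 180-degree rotated panorama. Image, grid and descriptor sizes are illustrative.

```python
import numpy as np

def sub_descriptors(pano, k=4):
    """Split a panorama (H x W grayscale array) into k sub-panoramas and
    reduce each to a 16-bit binary descriptor: a 4 x 4 grid of cell
    means, thresholded at the sub-panorama mean (stand-in for LDB)."""
    descs = []
    for part in np.array_split(pano, k, axis=1):
        cells = np.array([cell.mean()
                          for rows in np.array_split(part, 4, axis=0)
                          for cell in np.array_split(rows, 4, axis=1)])
        descs.append(cells > cells.mean())
    return np.array(descs)

def best_alignment(q, db):
    """Cross-correlate the two descriptor sequences over all circular
    shifts; similarity is the mean fraction of agreeing bits."""
    k = len(q)
    scores = [np.mean([(q[(i + s) % k] == db[i]).mean() for i in range(k)])
              for s in range(k)]
    return int(np.argmax(scores)), float(max(scores))

# Synthetic check: a 180-degree rotated revisit of the same place is a
# half-width circular shift of the panorama columns.
rng = np.random.default_rng(1)
pano = rng.random((32, 128))
db = sub_descriptors(pano)                       # stored place, k = 4
q = sub_descriptors(np.roll(pano, 64, axis=1))   # revisit, opposite heading
shift, score = best_alignment(q, db)
```

With k = 4, the half-panorama shift shows up as a best alignment at shift 2 with perfect bit agreement, which is the signature of a bidirectional loop closure in this sketch.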
Long-Term Simultaneous Localization and Mapping in Dynamic Environments
- 2015
"... This thesis was made possible by: least squares; my wife Amy and my parents Chris and ..."