Results 1 - 10 of 15
Vision-Based Autonomous Mapping and Exploration Using a Quadrotor MAV
Cited by 26 (3 self)
Abstract — In this paper, we describe our autonomous vision-based quadrotor MAV system, which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration algorithm in sparse environments where frontier-based exploration underperforms. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by VFH+ and frontier-based exploration in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station, where they are input to a large-scale visual SLAM process running off-board. SLAM is carried out with pose-graph optimization and loop-closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry: optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
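The frontier-based exploration mentioned above reduces to a simple grid operation: find free cells that border unexplored space and steer toward them until none remain. A minimal sketch (the 0/1/-1 cell encoding and 4-connectivity are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

FREE, OCC, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) cells that are free and border unknown space.

    Frontier cells seed frontier-based exploration: the robot repeatedly
    drives toward the nearest cluster of such cells until none remain.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbours; any unknown neighbour makes a frontier
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.full((5, 5), UNKNOWN)
grid[2, 0:3] = FREE           # an explored corridor
grid[1, 0:3] = OCC            # wall above it
print(find_frontiers(grid))   # the free cells bordering unknown space
```

In the paper the same idea operates on a 3D occupancy map; the 2D case above shows only the detection step, not target selection or planning.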
Markerless Visual Control of a Quad-Rotor Micro Aerial Vehicle by Means of On-Board Stereo Processing
- in Autonomous Mobile Systems Conference (AMS), 2012
Cited by 8 (8 self)
Abstract. We present a quad-rotor micro aerial vehicle (MAV) that is capable of flying and navigating autonomously in an unknown environment. The only sensory input used by the MAV is the imagery from two cameras in a stereo configuration and data from an inertial measurement unit. We apply a fast sparse stereo matching algorithm in combination with a visual odometry method based on PTAM to estimate the current MAV pose, which we require for autonomous control. All processing is performed on a single-board computer on-board the MAV. To our knowledge, this is the first MAV that uses stereo vision for navigation and does not rely on visual markers or off-board processing. In a flight experiment, the MAV was able to hover autonomously and to estimate its current position at a rate of 29 Hz, with an average error of only 2.8 cm.
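The stereo matching underlying this and the following entries recovers depth from disparity via the standard rectified-pair relation z = f·b/d. A one-liner with made-up camera parameters, purely to illustrate the geometry:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a rectified stereo match: z = f * b / d (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# e.g. 500 px focal length, 10 cm baseline, 25 px disparity -> 2.0 m
print(stereo_depth(25.0, 500.0, 0.10))
```

Sparse methods like the paper's evaluate this only at detected features, which is what makes on-board, real-time operation feasible.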
On-Board Dual-Stereo-Vision for the Navigation of an Autonomous MAV
Cited by 6 (0 self)
We present a micro aerial vehicle (MAV) equipped with four cameras, which are arranged in two stereo configurations. The MAV is able to perform stereo matching for each camera pair on-board and in real time, using an efficient sparse stereo method. For the forward-facing camera pair, the stereo matching results are used for a reduced stereo SLAM system. The downward-facing camera pair is used for ground plane detection and tracking. Hence, we are able to obtain a full 6DoF pose estimate from each camera pair, which we fuse with inertial measurements in an extended Kalman filter. Special care is taken to compensate for various drift errors. In an evaluation, we show that using two camera pairs instead of one significantly increases the pose estimation accuracy and robustness.
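At the heart of fusing two independent pose estimates (here, one per camera pair) is a covariance-weighted combination; an EKF performs this recursively, but the one-shot information-form version shows the idea. A sketch, not the paper's filter:

```python
import numpy as np

def fuse_poses(x1, P1, x2, P2):
    """Information-form fusion of two independent position estimates.

    Each estimate contributes with weight inverse to its covariance,
    the same weighting an EKF measurement update applies recursively.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)          # fused covariance
    x = P @ (P1i @ x1 + P2i @ x2)         # fused state
    return x, P

x1 = np.array([1.0, 0.0]); P1 = np.eye(2) * 0.04   # e.g. forward pair
x2 = np.array([1.2, 0.0]); P2 = np.eye(2) * 0.04   # e.g. downward pair
x, P = fuse_poses(x1, P1, x2, P2)
print(x)  # equal covariances -> the midpoint [1.1, 0.]
```

With unequal covariances the result shifts toward the more certain sensor, which is why adding a second camera pair improves robustness rather than just averaging noise.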
Real-time photorealistic 3D mapping for micro aerial vehicles
- in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2011
Cited by 5 (2 self)
We present a real-time photorealistic 3D mapping framework for micro aerial vehicles (MAVs). RGBD images are generated from either stereo or structured-light cameras and fed into the processing pipeline. A visual odometry algorithm runs on-board the MAV. We improve the computational performance of the visual odometry by using the IMU readings to establish a 1-point RANSAC, instead of the standard 3-point RANSAC, to estimate the relative motion between consecutive frames. We use local bundle adjustment to refine the pose estimates. At the same time, the MAV builds a 3D occupancy grid from range data, and transmits this grid together with images and pose estimates over a wireless network to a ground station. We propose a view-dependent projective texture mapping method that is used by the ground station to incrementally build a 3D textured occupancy grid over time. This map is both geometrically accurate and photorealistic; it provides real-time visual updates to a remote operator on the ground and is used for path planning as well.
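The 1-point RANSAC speed-up works because the IMU supplies the inter-frame rotation: with R known and 3D points available from RGBD data, a single correspondence already fixes the translation t = p2 - R·p1, so each RANSAC hypothesis costs one sample instead of three. A simplified sketch under those assumptions (threshold and iteration count are placeholders, not the paper's values):

```python
import numpy as np

def one_point_ransac_translation(R, pts1, pts2, thresh=0.05, iters=50, rng=None):
    """Estimate inter-frame translation t given IMU-provided rotation R.

    One 3D-3D correspondence fixes t = p2 - R @ p1; RANSAC over single
    points rejects outlier feature matches.
    """
    rng = rng or np.random.default_rng(0)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(pts1))
        t = pts2[i] - R @ pts1[i]                       # 1-point hypothesis
        residuals = np.linalg.norm(pts2 - (pts1 @ R.T + t), axis=1)
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

R = np.eye(3)                                # IMU reports no rotation
pts1 = np.array([[0, 0, 2.0], [1, 0, 2.0], [0, 1, 3.0], [5, 5, 5.0]])
pts2 = pts1 + np.array([0.1, 0.0, 0.0])      # true translation 10 cm in x
pts2[3] += 1.0                               # one gross outlier match
t, n = one_point_ransac_translation(R, pts1, pts2)
print(t, n)
```

The standard 3-point variant must additionally hypothesize R from each sample triple, which is what the IMU prior eliminates.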
Aerial object tracking from an airborne platform
- in Proc. International Conference on Unmanned Aircraft Systems, 2014
Cited by 3 (1 self)
Abstract — The integration of drones into civil airspace is still an unresolved problem. In this paper we present an experimental Sense and Avoid system integrated into an aircraft to detect and track other aerial objects with electro-optical sensors. The system is based on a custom aircraft nose-pod with two integrated cameras and several additional sensors. First test flights were successfully completed, in which data from artificial collision scenarios executed by two aircraft were recorded. We give an overview of the recorded dataset and show the challenges involved in processing videos from a mobile airborne platform in a mountainous area. The proposed tracking framework is based on measurements from multiple detectors, fused onto a virtual sphere centered at the aircraft position. To reduce false tracks from ground clutter, clouds, or dirt on the lens, a hierarchical multi-layer filter pipeline is applied. The aerial object tracking framework is evaluated on various scenarios from our challenging dataset. We show that aerial objects are successfully detected and tracked at large distances, even in front of terrain.
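Fusing detections "onto a virtual sphere" means converting each pixel detection, from any camera, into a unit bearing vector in a common aircraft-centred frame; range is unknown, but direction is enough for tracking. A minimal pinhole-model sketch (the intrinsics are made-up values, not the nose-pod's calibration):

```python
import numpy as np

def pixel_to_bearing(u, v, fx, fy, cx, cy):
    """Unit bearing vector for a pixel detection under a pinhole model.

    Detections from multiple cameras can be fused on a virtual sphere
    centred at the aircraft by expressing each one as such a vector
    (after rotating into the common body frame, omitted here).
    """
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

b = pixel_to_bearing(320.0, 240.0, 400.0, 400.0, 320.0, 240.0)
print(b)  # the principal point maps onto the optical axis
```

Tracking then runs on these bearing vectors, which is what lets the filter pipeline combine detectors with different fields of view.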
Self-Calibration and Visual SLAM with a Multi-Camera System on a Micro Aerial Vehicle
Cited by 2 (2 self)
Abstract—The use of a multi-camera system enables a robot to obtain a surround view and thus maximize its perceptual awareness of its environment. An accurate calibration is a necessary prerequisite if vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system. On our MAV, we set up each camera pair in a stereo configuration. We propose a novel vSLAM-based self-calibration method for a multi-sensor system that includes multiple calibrated stereo cameras and an inertial measurement unit (IMU). Our self-calibration estimates the transform with metric scale between each camera and the IMU. Once calibrated, the MAV is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses inertial information to recover the relative motion of the MAV with metric scale. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real time. To the best of our knowledge, no published work has demonstrated real-time on-board vSLAM with loop closures. We show experimental results in both indoor and outdoor environments. The code for both the self-calibration and vSLAM is available as a set of ROS packages at
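The "metric scale" the inertial data provides can be pictured with a drastically simplified ratio: a vision-only motion estimate is correct only up to scale, and comparing it with a metric displacement integrated from the IMU fixes that scale. This is merely an illustration of the principle, not the paper's linear 3-point algorithm:

```python
import numpy as np

def metric_scale(vision_disp, imu_disp):
    """Scale factor turning an up-to-scale visual displacement metric.

    vision_disp: displacement from vision, arbitrary units.
    imu_disp: the same displacement integrated from the IMU, in metres.
    """
    return np.linalg.norm(imu_disp) / np.linalg.norm(vision_disp)

s = metric_scale(np.array([0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(s)  # vision path is half the metric length -> scale factor 2.0
```

The actual algorithm folds this constraint directly into the minimal relative-motion solver rather than applying it as a post-hoc ratio.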
On-Board Dual-Stereo-Vision for Autonomous Quadrotor Navigation
Cited by 1 (1 self)
We present a micro aerial vehicle (MAV) capable of autonomous indoor navigation. The MAV is equipped with four cameras arranged in two stereo configurations. One camera pair faces forward and serves as input for a reduced stereo SLAM system. The other camera pair faces downwards and is used for ground plane detection and tracking. All processing, including sparse stereo matching, runs on-board in real time and at high processing rates. We demonstrate the capabilities of this MAV design in several flight experiments. Our MAV is able to recover from pose estimation errors and can cope with processing failures for one camera pair. We show that by using two camera pairs instead of one, we are able to significantly increase navigation accuracy and robustness.
Reactive Avoidance Using Embedded Stereo Vision for MAV Flight
Cited by 1 (0 self)
Abstract — High-speed, low-latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that perform high-level mapping and 3D path planning for obstacle avoidance, most of them require high-powered on-board CPUs or off-board control from a ground station. We present a novel, entirely on-board approach that leverages a lightweight, low-power stereo vision system on an FPGA. Our approach runs at 60 frames per second on VGA-sized images and minimizes the latency between image acquisition and reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a lightweight safety layer for systems undertaking more complex tasks, such as mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.
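Reactive avoidance of this kind maps a disparity image directly to a steering command, with no intermediate map: large disparity means a close obstacle, so yaw toward the image half where the nearest obstacle is farther away. A deliberately tiny stand-in (the threshold, gain, and sign convention are all assumptions):

```python
import numpy as np

def avoidance_command(disparities, fov_deg=90.0, danger=20.0):
    """Yaw command (degrees, positive = yaw right) from one disparity row.

    Larger disparity = closer obstacle. If anything exceeds the danger
    threshold, yaw away from the image half containing the closest
    obstacle; otherwise keep heading.
    """
    if disparities.max() < danger:
        return 0.0                        # path clear, fly straight
    half = len(disparities) // 2
    left, right = disparities[:half].max(), disparities[half:].max()
    return fov_deg / 4 * (1.0 if left > right else -1.0)

row = np.zeros(64)
row[10] = 30.0                            # close obstacle in the left half
print(avoidance_command(row))  # positive command: yaw right, away from it
```

Because each frame yields a command with no map lookup, latency stays bounded by the stereo pipeline itself, which is the property the FPGA implementation exploits.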
Trajectory Generation and Control for Quadrotors, 2012
Cited by 1 (0 self)
This thesis presents contributions to the state of the art in quadrotor control, payload transportation with single and multiple quadrotors, and trajectory generation for single and multiple quadrotors. In Ch. 2 we describe a controller capable of handling large roll and pitch angles, which enables a quadrotor to follow trajectories requiring large accelerations and to recover from extreme initial conditions. In Ch. 3 we describe a method that allows teams of quadrotors to work together to carry payloads that they could not carry individually. In Ch. 4 we discuss an online parameter estimation method for quadrotors transporting payloads, which enables a quadrotor to use its dynamics to learn about the payload it is carrying and adapt its control law to improve tracking performance. In Ch. 5 we present a trajectory generation method that enables quadrotors to fly through narrow gaps at various orientations and perch on inclined surfaces. Chapter 6 discusses a method for generating dynamically optimal trajectories through a series of predefined waypoints and safe corridors, and Ch. 7 extends that method to enable heterogeneous quadrotor teams to
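Waypoint trajectory generation of the kind described builds smooth polynomial segments whose boundary conditions match at waypoints. The simplest instance is a single rest-to-rest quintic segment with zero velocity and acceleration at both ends; the thesis generalizes this to optimal multi-segment splines, so the sketch below only illustrates the boundary-condition idea:

```python
def quintic(x0, xf, T):
    """Rest-to-rest quintic segment x(t) between two waypoints over time T.

    Zero velocity and acceleration at both endpoints; the classic
    10 s^3 - 15 s^4 + 6 s^5 profile in normalized time s = t / T.
    """
    def x(t):
        s = t / T
        return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return x

x = quintic(0.0, 2.0, 4.0)     # fly from 0 m to 2 m in 4 s
print(x(0.0), x(2.0), x(4.0))  # starts at 0, passes the midpoint, ends at 2
```

Because quadrotor dynamics are differentially flat, smoothness of such position polynomials (up to snap) translates directly into feasible attitude and thrust commands.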
Virtual Rigid Bodies for Agile Coordination of Quadrotor Swarms and Human-Swarm
Abstract—This article presents a method for controlling a swarm of quadrotor micro aerial vehicles to perform agile interleaved maneuvers while holding a fixed relative formation, and to transition between different formations. We propose an abstraction, called a Virtual Rigid Body, which allows us to decouple the trajectory of the whole swarm from the trajectories of the individual quadrotors within the swarm. The Virtual Rigid Body provides a way to plan and execute complex interleaved trajectories, and also gives a simple, intuitive interface for a single human user to control an arbitrarily large aerial swarm in real time. The Virtual Rigid Body concept is integrated with differential-flatness-based feedback control to give a suite of swarm control tools. The article also proposes a library architecture that lets a human operator select among a number of predetermined formations for the swarm in real time. Our methods are demonstrated in hardware experiments with a group of three quadrotors controlled autonomously, and a group of five quadrotors teleoperated by a single human user.
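The decoupling the Virtual Rigid Body provides is just a rigid-body transform: each vehicle holds a fixed body-frame offset d_i, the planner works with the single pose of the virtual body, and individual setpoints follow as p_i = c + R·d_i. A 2D, yaw-only sketch of that mapping (the full method works in SE(3) with time-varying trajectories):

```python
import numpy as np

def swarm_positions(center, yaw, offsets):
    """Map a Virtual Rigid Body pose to individual quadrotor setpoints.

    center: 2D position of the virtual body; yaw: its heading (rad);
    offsets: fixed body-frame offset d_i for each vehicle.
    Each setpoint is p_i = center + R(yaw) @ d_i.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return [center + R @ d for d in offsets]

offsets = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 1.0])]
pts = swarm_positions(np.array([5.0, 5.0]), np.pi / 2, offsets)
print(pts)  # the formation, rotated 90 degrees about the swarm center
```

Formation transitions then amount to smoothly interpolating the offsets d_i while the virtual body's trajectory continues unchanged.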