Results 1 - 10 of 41
Progress towards multi-robot reconnaissance and the MAGIC 2010 Competition
- Journal of Field Robotics, 2012
"... Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical ch ..."
Cited by 12 (5 self)
Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. It covers a variety of autonomous systems that allow a large number of autonomously exploring robots to be controlled with minimal human effort. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, while their behavior was coordinated centrally. We describe technical contributions throughout our system that played a significant role in its performance, and we present results both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.
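As a minimal illustration of the decoupled centralized planning idea (central assignment, myopic execution by each robot), the Python sketch below greedily assigns exploration tasks to the nearest free robot. The data structures and the straight-line cost are assumptions for illustration, not the team's actual planner.

import math

def assign_tasks(robot_poses, task_points):
    """Greedy central assignment sketch (hypothetical structures, not the MAGIC planner).
    robot_poses and task_points are {id: (x, y)} dicts; each robot then executes
    its assigned task myopically."""
    assignments = {}
    free_robots = dict(robot_poses)
    for task_id, (tx, ty) in task_points.items():
        if not free_robots:
            break
        # Cost = straight-line distance; a real system would use path-planner costs.
        best = min(free_robots, key=lambda r: math.hypot(free_robots[r][0] - tx,
                                                         free_robots[r][1] - ty))
        assignments[best] = task_id
        del free_robots[best]
    return assignments

print(assign_tasks({"r1": (0, 0), "r2": (5, 5)}, {"t1": (4, 6), "t2": (1, 0)}))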
Plane Registration Leveraged by Global Constraints for Context-Aware AEC Applications
- Computer-Aided Civil and Infrastructure Engineering, 2012
"... Abstract: In this article, we propose a new registration algorithm and computing framework, the KEG tracker, for estimating a camera’s position and orientation for a general class of mobile context-aware applications in Architecture, Engineering, and Construction (AEC). By studying two classic natur ..."
Cited by 7 (4 self)
Abstract: In this article, we propose a new registration algorithm and computing framework, the KEG tracker, for estimating a camera’s position and orientation for a general class of mobile context-aware applications in Architecture, Engineering, and Construction (AEC). By studying two classic natural marker-based registration algorithms, Homography-from-detection and Homography-from-tracking, and by overcoming their specific limitations of jitter and drift, our method applies two global constraints (geometric and appearance) to prevent tracking errors from propagating between consecutive frames. The proposed method is able to achieve an increase in both stability and accuracy, while being fast enough for real-time applications. Experiments on both synthesized and real-world test cases demonstrate that our method is superior to existing state-of-the-art registration algorithms. The article also explores several AEC applications of our method in context-aware computing and desktop augmented reality.
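For background, the detection half of such marker-based registration can be sketched with standard OpenCV calls. This is a generic homography-from-detection baseline, not the KEG tracker itself; the global geometric and appearance checks described in the abstract would sit on top of it.

import cv2
import numpy as np

def register_to_marker(marker_img, frame, min_matches=15):
    """Generic homography-from-detection sketch: match ORB features between a
    planar marker image and the current frame, then estimate the marker-to-frame
    homography with RANSAC."""
    orb = cv2.ORB_create(1000)
    kp_m, des_m = orb.detectAndCompute(marker_img, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_m is None or des_f is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_f)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # A KEG-style global geometric constraint would re-verify H against the
    # reference marker every frame so that tracking errors cannot accumulate.
    return H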
Active Sensing for Dynamic, Non-holonomic, Robust Visual Servoing
"... Abstract — We consider the problem of visually servoing a legged vehicle with unicycle-like nonholonomic constraints sub-ject to second-order fore-aft dynamics in its horizontal-plane. We target applications to rugged environments characterized by complex terrain likely to significantly perturb the ..."
Cited by 4 (3 self)
Abstract — We consider the problem of visually servoing a legged vehicle with unicycle-like nonholonomic constraints sub-ject to second-order fore-aft dynamics in its horizontal-plane. We target applications to rugged environments characterized by complex terrain likely to significantly perturb the robot’s nominal dynamics. At the same time, it is crucial that the cam-era avoid “obstacle ” poses where absolute localization would be compromised by even partial loss of landmark visibility. Hence, we seek a controller whose robustness against disturbances and obstacle avoidance capabilities can be assured by a strict global Lyapunov function. Since the nonholonomic constraints preclude smooth point stabilizability we introduce an extra degree of sensory freedom, affixing the camera to an actuated panning axis on the robot’s back. Smooth stabilizability to the robot-orientation-indifferent goal cycle no longer precluded, we construct a controller and strict global Lyapunov function with the desired properties. We implement several versions of the scheme on a RHex robot maneuvering over slippery ground and document its successful empirical performance. I.
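For readers unfamiliar with the platform model, a minimal illustrative model (an assumption for exposition, not the paper's exact formulation) combines unicycle kinematics, second-order fore-aft dynamics, and an actuated camera pan angle $\phi$:

\[
\dot{x} = v\cos\theta, \qquad
\dot{y} = v\sin\theta, \qquad
\dot{\theta} = \omega, \qquad
\dot{v} = u_v, \qquad
\dot{\phi} = u_\phi,
\]

where $(x, y, \theta)$ is the planar body pose, $v$ is the fore-aft speed driven by the input $u_v$ (the second-order dynamics), $\omega$ is the commanded yaw rate, and $u_\phi$ actuates the camera pan. A strict global Lyapunov function $V$ for the closed loop then requires $\dot{V} < 0$ everywhere away from the goal cycle, which is the property the authors use to assure robustness to terrain disturbances and avoidance of visibility-losing "obstacle" poses.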
Inferring Maps and Behaviors from Natural Language Instructions
"... Abstract. Natural language provides a flexible, intuitive way for people to command robots, which is becoming increasingly important as robots transition to working alongside people in our homes and workplaces. To follow instructions in unknown environments, robots will be expected to reason about p ..."
Cited by 4 (1 self)
Abstract. Natural language provides a flexible, intuitive way for people to command robots, which is becoming increasingly important as robots transition to working alongside people in our homes and workplaces. To follow instructions in unknown environments, robots will be expected to reason about parts of the environment that were described in the instruction but that the robot has no direct knowledge about. This paper proposes a probabilistic framework that enables robots to follow commands given in natural language, without any prior knowledge of the environment. The novelty lies in exploiting environment information implicit in the instruction, thereby treating language as a type of sensor which is used to formulate a prior distribution over the unknown parts of the environment. The algorithm then uses this learned distribution to infer a sequence of actions that are most consistent with the command, updating our belief as we gather more metric information. We evaluate our approach through simulation as well as experiments on two mobile robots; our results demonstrate the algorithm’s ability to follow navigation commands with performance comparable to that of a fully known environment.
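The "language as a sensor" idea can be sketched as a simple reweighting of sampled world hypotheses; the hypotheses, likelihood values, and landmark test below are toy assumptions for illustration, not the paper's actual model.

def reweight(hypotheses, weights, likelihood):
    """Bayes-style reweighting of sampled world hypotheses (sketch)."""
    new = [w * likelihood(h) for h, w in zip(hypotheses, weights)]
    total = sum(new) or 1.0
    return [w / total for w in new]

# Toy example: each hypothesis is the set of landmarks a candidate map contains.
hypotheses = [{"elevator", "kitchen"}, {"elevator"}, {"lobby"}]
weights = [1 / 3] * 3
# Language prior: the instruction mentions a kitchen, so maps containing one are
# more plausible (hypothetical likelihood values chosen only for illustration).
weights = reweight(hypotheses, weights, lambda h: 0.9 if "kitchen" in h else 0.1)
# Later, a metric observation (e.g., a detected elevator) is folded in the same way,
# and actions are planned against the expectation over the weighted hypotheses.
weights = reweight(hypotheses, weights, lambda h: 0.8 if "elevator" in h else 0.2)
print(weights)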
AprilCal: Assisted and repeatable camera calibration
"... Abstract — Reliable and accurate camera calibration usually requires an expert intuition to reliably constrain all of the parameters in the camera model. Existing toolboxes ask users to capture images of a calibration target in positions of their choosing, after which the maximum-likelihood calibrat ..."
Cited by 3 (0 self)
Abstract — Reliable and accurate camera calibration usually requires expert intuition to reliably constrain all of the parameters in the camera model. Existing toolboxes ask users to capture images of a calibration target in positions of their choosing, after which the maximum-likelihood calibration is computed using all images in a batch optimization. We introduce a new interactive methodology that uses the current calibration state to suggest the position of the target in the next image and to verify that the final model parameters meet the accuracy requirements specified by the user. Suggesting target positions relies on the ability to score candidate suggestions and their effect on the calibration. We describe two methods for scoring target positions: one that computes the stability of the focal length estimates for initializing the calibration, and another that subsequently quantifies the model uncertainty in pixel space. Using results from a set of human trials, we demonstrate that our resulting system, AprilCal, consistently yields more accurate camera calibrations than standard tools. We also demonstrate that our approach is applicable to a variety of lenses.
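One way to score a candidate target pose, in the spirit of the uncertainty-based criterion above, is to predict the intrinsics covariance from stacked reprojection Jacobians with and without the candidate. The A-optimality-style score and Jacobian shapes below are assumptions for illustration, not AprilCal's exact formulation.

import numpy as np

def score_candidate(J_existing, J_candidate, damping=1e-9):
    """Smaller score = more informative candidate. J_* are reprojection Jacobians
    with respect to the intrinsic parameters (rows: residuals, cols: parameters)."""
    J = np.vstack([J_existing, J_candidate])
    info = J.T @ J + damping * np.eye(J.shape[1])   # Gauss-Newton information matrix
    cov = np.linalg.inv(info)                       # approximate posterior covariance
    return np.trace(cov)                            # A-optimality-style criterion

# Usage sketch: compute J_candidate for each suggested target pose under the
# current calibration estimate and suggest the pose with the lowest score.
rng = np.random.default_rng(0)
print(score_candidate(rng.normal(size=(40, 5)), rng.normal(size=(8, 5))))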
Learning Convolutional Filters for Interest Point Detection
"... Abstract — We present a method for learning efficient feature detectors based on in-situ evaluation as an alternative to handengineered feature detection methods. We demonstrate our in-situ learning approach by developing a feature detector optimized for stereo visual odometry. Our feature detector ..."
Cited by 3 (0 self)
Abstract — We present a method for learning efficient feature detectors based on in-situ evaluation as an alternative to hand-engineered feature detection methods. We demonstrate our in-situ learning approach by developing a feature detector optimized for stereo visual odometry. Our feature detector parameterization is that of a convolutional filter. We show that feature detectors competitive with the best hand-designed alternatives can be learned by random sampling in the space of convolutional filters, and we provide a way to bias the search toward regions of the search space that produce effective results. Further, we describe our approach for obtaining the ground-truth data needed by our learning system in real, everyday environments.
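The random-sampling search over convolutional filters might look roughly like the sketch below. evaluate_detector stands in for the in-situ stereo visual-odometry evaluation and, like the perturbation-based biasing, is an assumption for illustration rather than the paper's exact procedure.

import numpy as np

def learn_filter(evaluate_detector, size=5, iters=200, sigma=0.2, seed=0):
    """Random search over convolutional filters, biased toward the best filter
    found so far (sketch; evaluate_detector is problem-specific)."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=(size, size))
    best_score = evaluate_detector(best)
    for _ in range(iters):
        if rng.random() < 0.5:
            cand = best + sigma * rng.normal(size=(size, size))   # local perturbation
        else:
            cand = rng.normal(size=(size, size))                  # fresh random sample
        cand -= cand.mean()               # zero-mean filter, a common normalization
        score = evaluate_detector(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

# evaluate_detector would run the filter as an interest-point detector inside the
# stereo visual-odometry pipeline and return, e.g., negative trajectory error.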
Teachless teach-repeat: Toward Vision-Based Programming of Industrial Robots
- in "IEEE International Conference on Robotics and Automation", St
, 2012
"... Abstract — Modern programming of industrial robots is often based on the teach-repeat paradigm: a human operator places the robot in many key positions, for teaching its task. Then the robot can repeat a path defined by these key positions. This paper proposes a vision-based approach for the automat ..."
Cited by 2 (1 self)
Abstract — Modern programming of industrial robots is often based on the teach-repeat paradigm: a human operator places the robot in many key positions in order to teach it its task, and the robot can then repeat a path defined by these key positions. This paper proposes a vision-based approach for automating the teach stage. The approach relies on continuous auto-calibration of the system, so the only requirement is a precise geometrical description of the part to process. The feasibility of the approach is demonstrated by emulating a glue-application process with an industrial robot. The precision results are very promising.
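Deriving key poses from a precise geometric description of the part amounts to model-based pose estimation. A generic sketch using OpenCV's PnP solver is shown below, assuming 2D detections of known 3D model points; this is an illustration, not the paper's pipeline.

import cv2
import numpy as np

def part_pose(model_pts_3d, image_pts_2d, K, dist=None):
    """Estimate the part's pose in the camera frame from known 3D model points
    and their 2D detections (generic PnP sketch)."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_pts_3d, dtype=np.float64),
        np.asarray(image_pts_2d, dtype=np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)      # rotation of the part in the camera frame
    return R, tvec

# Key positions for the repeat phase would then be derived from the estimated
# part pose rather than being taught by hand.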
RISE: An Incremental Trust-Region Method for Robust Online Sparse Least-Squares Estimation
2014
"... Many point estimation problems in robotics, computer vision and machine learning can be formulated as instances of the general problem of minimizing a sparse nonlinear sum-of-squares objective function. For inference problems of this type, each input datum gives rise to a summand in the objective fu ..."
Cited by 1 (1 self)
Many point estimation problems in robotics, computer vision and machine learning can be formulated as instances of the general problem of minimizing a sparse nonlinear sum-of-squares objective function. For inference problems of this type, each input datum gives rise to a summand in the objective function, and therefore performing online inference corresponds to solving a sequence of sparse nonlinear least-squares minimization problems in which additional summands are added to the objective function over time. In this paper we present Robust Incremental least-Squares Estimation (RISE), an incrementalized version of Powell’s Dog-Leg numerical optimization method suitable for use in online sequential sparse least-squares minimization. As a trust-region method, RISE is naturally robust to objective function nonlinearity and numerical ill-conditioning, and is provably globally convergent for a broad class of inferential cost functions (twice-continuously differentiable functions with bounded sublevel sets). Consequently, RISE maintains the speed of current state-of-the-art online sparse least-squares methods while providing superior reliability.
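For context, a single dense Powell's Dog-Leg step for minimizing 0.5*||r(x)||^2 can be written as below. RISE's contribution is performing this kind of trust-region update incrementally on a growing sparse problem, which this non-incremental sketch does not attempt.

import numpy as np

def dogleg_step(J, r, radius):
    """One Powell's Dog-Leg step for residual r with Jacobian J and trust radius."""
    h_gn = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step: min ||J h + r||^2
    if np.linalg.norm(h_gn) <= radius:
        return h_gn
    g = J.T @ r                                    # gradient of 0.5*||r||^2
    alpha = (g @ g) / (np.linalg.norm(J @ g) ** 2) # optimal step length along -g
    h_sd = -alpha * g                              # Cauchy (steepest-descent) step
    if np.linalg.norm(h_sd) >= radius:
        return -radius * g / np.linalg.norm(g)
    # Blend: walk from the Cauchy step toward the Gauss-Newton step until the
    # trust-region boundary is reached.
    d = h_gn - h_sd
    a, b, c = d @ d, 2 * (h_sd @ d), h_sd @ h_sd - radius ** 2
    beta = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return h_sd + beta * d

# Toy usage: one step on the dense least-squares problem r(x) = A x - b.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(2)
x += dogleg_step(A, A @ x - b, radius=0.5)
print(x)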
Towards Wide-angle Micro Vision Sensors
2013
"... Achieving computer vision on micro-scale devices is a challenge. On these platforms, the power and mass constraints are severe enough for even the most common computations (matrix manipulations, convolution, etc.) to be difficult. This paper proposes and analyzes a class of miniature vision sensors ..."
Cited by 1 (0 self)
Achieving computer vision on micro-scale devices is a challenge. On these platforms, the power and mass constraints are severe enough for even the most common computations (matrix manipulations, convolution, etc.) to be difficult. This paper proposes and analyzes a class of miniature vision sensors that can help overcome these constraints. These sensors reduce power requirements through template-based optical convolution, and they enable a wide field-of-view within a small form through a refractive optical design. We describe the trade-offs between the field of view, volume, and mass of these sensors and we provide analytic tools to navigate the design space. We demonstrate milli-scale prototypes for computer vision tasks such as locating edges, tracking targets, and detecting faces. Finally, we utilize photolithographic fabrication tools to further miniaturize the optical designs and demonstrate fiducial detection onboard a small autonomous air vehicle.
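What the template-based optics compute is, in effect, a correlation of the scene with a fixed template. The short digital stand-in below (illustrative only, using SciPy) makes that concrete with a vertical-edge template of the kind used for locating edges.

import numpy as np
from scipy.signal import convolve2d

def optical_template_response(image, template):
    """Digital stand-in for the sensor's template-based optical convolution:
    correlate the image with a fixed template and return the response map."""
    t = template - template.mean()                        # zero-mean edge-style template
    return convolve2d(image, t[::-1, ::-1], mode="same")  # flipped kernel = correlation

# Example: a vertical-edge template applied to a simple step image.
edge_template = np.array([[-1.0, 0.0, 1.0]] * 3)
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(optical_template_response(img, edge_template).round(1))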
A high-performance MAV for autonomous navigation in complex 3D environments
- in Proc. of the Int. Conf. on Unmanned Aircraft Systems (ICUAS), 2015
"... Abstract—Micro aerial vehicles, such as multirotors, are particular well suited for the autonomous monitor-ing, inspection, and surveillance of buildings, e.g., for maintenance in industrial plants. Key prerequisites for the fully autonomous operation of micro aerial vehicles in complex 3D environme ..."
Cited by 1 (1 self)
Abstract — Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous monitoring, inspection, and surveillance of buildings, e.g., for maintenance in industrial plants. Key prerequisites for the fully autonomous operation of micro aerial vehicles in complex 3D environments include real-time state estimation, obstacle detection, mapping, and navigation planning. In this paper, we describe an integrated system with a multimodal sensor setup for omnidirectional environment perception and 6D state estimation. Our MAV is equipped with a variety of sensors, including a dual 3D laser scanner, three stereo camera pairs, an IMU, and a powerful onboard computer, to achieve these tasks in real time. Our experimental evaluation demonstrates the performance of the integrated system.