HRI in the Sky: Creating and Commanding Teams of UAVs with a Vision-mediated Gestural Interface
In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems (IROS’13), 2013.
"... Abstract — Extending our previous work in real-time visionbased Human Robot Interaction (HRI) with multi-robot systems, we present the first example of creating, modifying and commanding teams of UAVs by an uninstrumented human. To create a team the user focuses attention on an individual robot by s ..."
Cited by 2 (0 self)
Abstract — Extending our previous work in real-time vision-based Human Robot Interaction (HRI) with multi-robot systems, we present the first example of creating, modifying and commanding teams of UAVs by an uninstrumented human. To create a team, the user focuses attention on an individual robot by simply looking at it, then adds or removes it from the current team with a motion-based hand gesture. Another gesture commands the entire team to begin task execution. The robots communicate among themselves over a wireless network to ensure that no more than one robot is focused at a time, and that the whole team agrees it has been commanded. Since robots can be added to and removed from the team, the system is robust to incorrect additions. A series of trials with two and three very low-cost UAVs and off-board processing demonstrates the practicality of our approach.
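The two protocol guarantees in this abstract (a single focused robot at a time, and team-wide agreement before execution) are easy to illustrate. The sketch below is not the authors' implementation: Network, Robot, and the message tuples are hypothetical stand-ins for the paper's wireless layer, modelled here as a synchronous toy broadcast.

    class Network:
        """Toy synchronous broadcast medium standing in for the UAVs' wireless link."""
        def __init__(self):
            self.robots = []

        def broadcast(self, sender, msg):
            for robot in self.robots:
                if robot is not sender:
                    robot.receive(msg)

    class Robot:
        def __init__(self, rid, network):
            self.rid, self.network = rid, network
            self.focused = False
            self.acks = set()
            network.robots.append(self)

        def gain_focus(self):
            # The user looks at this robot; a CLAIM makes every peer yield,
            # so at most one robot is focused at any time.
            self.network.broadcast(self, ("CLAIM", self.rid))
            self.focused = True

        def command(self, team_ids):
            # The "execute" gesture: report success only once the whole team
            # has acknowledged, i.e. everyone agrees it has been commanded.
            self.acks = {self.rid}
            self.network.broadcast(self, ("EXEC", tuple(team_ids)))
            return self.acks == set(team_ids)

        def receive(self, msg):
            kind, payload = msg
            if kind == "CLAIM":
                self.focused = False                  # yield focus to the claimant
            elif kind == "EXEC" and self.rid in payload:
                self.network.broadcast(self, ("ACK", self.rid))
            elif kind == "ACK":
                self.acks.add(payload)

    net = Network()
    r1, r2, r3 = Robot(1, net), Robot(2, net), Robot(3, net)
    r1.gain_focus(); r2.gain_focus()                  # focus moves from UAV 1 to UAV 2
    assert [r.focused for r in (r1, r2, r3)] == [False, True, False]
    assert r2.command({1, 2, 3})                      # executes only with all three ACKs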
Guidance: A Visual Sensing Platform For Robotic Applications
"... Visual sensing, such as vision based localization, nav-igation, tracking, are crucial for intelligent robots, which have shown great advantage in many robotic applications. However, the market is still in lack of a powerful visual sensing platform to deal with most of the visual processing tasks. In ..."
Abstract — Visual sensing tasks such as vision-based localization, navigation, and tracking are crucial for intelligent robots and have shown great advantage in many robotic applications. However, the market still lacks a powerful visual sensing platform that can handle most visual processing tasks. In this paper we introduce a powerful and efficient platform, Guidance, composed of one processor and multiple (up to five) stereo sensing units. Basic visual tasks, including visual odometry, obstacle avoidance, and depth generation, are provided as built-in functions. Additionally, with the aid of a well-documented SDK, Guidance is flexible enough for users to develop further applications such as autonomous navigation, SLAM, and tracking.
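The abstract does not document the SDK's actual API, so the following is a purely illustrative sketch of the architecture it describes: one processor serving built-in outputs from up to five stereo units through a subscription-style SDK. Every name below is a hypothetical stand-in, not the platform's real interface.

    class GuidanceClient:
        """Hypothetical client for a multi-stereo-unit sensing platform."""
        def __init__(self, num_stereo_units=5):       # up to five units, per the abstract
            self.num_stereo_units = num_stereo_units
            self.handlers = {}

        def subscribe(self, stream, handler):
            # Built-in functions are exposed as named data streams,
            # e.g. "visual_odometry", "depth", "obstacle_distance".
            self.handlers[stream] = handler

        def dispatch(self, stream, data):
            # In a real system the onboard processor would drive this loop.
            if stream in self.handlers:
                self.handlers[stream](data)

    client = GuidanceClient()
    client.subscribe("visual_odometry", lambda pose: print("pose:", pose))
    client.subscribe("depth", lambda frame: print("depth from unit", frame["unit"]))
    client.dispatch("visual_odometry", (0.0, 0.0, 1.2))   # hypothetical x, y, z in metres
    client.dispatch("depth", {"unit": 0, "image": None})  # hypothetical payload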
Interactive Person Following and Gesture Recognition with a Flying Robot
"... Abstract — Gesture recognition and person following play a vital role in social robotics. In this paper, we present an approach that allows a quadrocopter to follow a person and to recognize simple gestures using an onboard depth camera. This enables novel applications such as hands-free video recor ..."
Abstract — Gesture recognition and person following play a vital role in social robotics. In this paper, we present an approach that allows a quadrocopter to follow a person and to recognize simple gestures using an onboard depth camera. This enables novel applications such as hands-free video recording and picture taking. A moving platform with an onboard camera makes the problem of tracking a person highly challenging. To overcome this, we stabilize the depth image by warping it to a virtual static camera, using the quadrocopter's pose estimated from vision and inertial sensors with an Extended Kalman filter. The stabilized depth video can be used with state-of-the-art motion-capture solutions such as the OpenNI tracker, which gives us the full body pose. The pose can then be used, for example, to recognize simple gestures that control the quadrocopter's behaviour. Our approach recognizes a small set of example commands (“follow me”, “take picture”, “land”) and generates corresponding motion commands. We demonstrate the practical performance of our approach in an extensive set of experiments with a quadrocopter.
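The stabilization step is the technical core here. A minimal sketch, assuming a pinhole camera with known 3x3 intrinsics K and treating the EKF output as a relative rotation R and translation t from the moving camera frame into the virtual static camera frame (the names and structure are ours, not the paper's): each valid depth pixel is back-projected to 3D, transformed by the relative pose, and re-projected into the virtual camera.

    import numpy as np

    def stabilize_depth(depth, K, R, t):
        """Warp a depth image into a virtual static camera using the pose estimate.

        depth: H x W array of depths (metres); K: 3x3 pinhole intrinsics;
        R, t: rotation and translation taking moving-camera points into the
        virtual static camera frame (the role played by the EKF pose estimate).
        """
        h, w = depth.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0
        # Back-project valid pixels to 3D points in the moving camera frame.
        z = depth[valid]
        x = (u[valid] - cx) * z / fx
        y = (v[valid] - cy) * z / fy
        pts = R @ np.stack([x, y, z]) + t.reshape(3, 1)   # into the static frame
        # Re-project into the virtual static camera (no z-buffering: sketch only).
        u2 = np.round(fx * pts[0] / pts[2] + cx).astype(int)
        v2 = np.round(fy * pts[1] / pts[2] + cy).astype(int)
        keep = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (pts[2] > 0)
        out = np.zeros_like(depth, dtype=float)
        out[v2[keep], u2[keep]] = pts[2][keep]
        return out

    # With an identity pose the warp is, up to rounding, a no-op:
    # stabilize_depth(d, K, np.eye(3), np.zeros(3))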