VBASR (Vision-Based Autonomous Security Robot): The Vision System
"... ABSTRACT The goal of this project is to develop a computer vision system that enables a robot to navigate the hallways of Bradley University's engineering building using a generic webcam as the only sensor. OpenCV2.0 programmed in C++ is the primary tool used to develop the vision system softw ..."
Abstract
The goal of this project is to develop a computer vision system that enables a robot to navigate the hallways of Bradley University's engineering building using a generic webcam as the only sensor. OpenCV 2.0, programmed in C++, is the primary tool used to develop the vision system software. Three algorithms were developed to identify the center of the hallway and guide the robot in the correct direction. The first two algorithms apply a generic filter (normal, median, or Gaussian), followed by edge detection and then corner detection on the edge-detected image. The first algorithm identifies the strongest vertical lines in an image; averaging the horizontal coordinates of these vertical lines indicates the location of the center of the hallway relative to the robot. The second algorithm utilizes the trapezoidal shape formed where the floor meets the walls, as seen from the perspective of the robot. The y-coordinates associated with the trapezoid's legs are then compared to estimate the robot's orientation with respect to the walls. The third algorithm uses color to segment the floor from the rest of the features in the image (walls, ceiling, and obstacles). Once again, the trapezoidal shape appears, and the center of the hallway is determined from the locations of the highest y-valued pixels identified as floor pixels. Test data indicates that none of these algorithms is singularly sufficient; however, by combining their results, the system can identify the direction a robot must turn to remain in the center of the hallway with 96.6% accuracy. Furthermore, leveraging the results of multiple algorithms produces more robust navigation, where one algorithm compensates for the shortcomings of another. The vision system architecture is designed to execute algorithms in parallel. Such a structure enables the addition and removal of algorithms without adversely affecting the system as a whole. Further algorithms may be developed and easily added to improve navigation. Additionally, the system may intelligently ignore results from algorithms that are recognized as inappropriate for certain situations.
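As a rough illustration of the first algorithm described above (filter, edge-detect, find strong vertical lines, average their horizontal coordinates), the following C++ sketch uses the OpenCV 2.x API named in the abstract. The function name, thresholds, and verticality tolerance are assumptions chosen here for illustration, not the authors' code.

// Minimal sketch, assuming a BGR webcam frame as input. Returns the
// estimated x-coordinate of the hallway center, or -1 if no
// sufficiently vertical lines are found.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

double estimateHallwayCenterX(const cv::Mat& frame) {
    cv::Mat gray, blurred, edges;
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);  // generic filter
    cv::Canny(blurred, edges, 50, 150);                    // edge detection

    // Probabilistic Hough transform to extract line segments.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 40, 10);

    double sumX = 0.0;
    int count = 0;
    for (size_t i = 0; i < lines.size(); ++i) {
        int dx = lines[i][2] - lines[i][0];
        int dy = lines[i][3] - lines[i][1];
        // Keep only near-vertical segments (assumed tolerance of
        // roughly 10 degrees from vertical: tan(10 deg) ~ 0.18).
        if (std::abs(dx) <= std::abs(dy) * 0.18) {
            sumX += 0.5 * (lines[i][0] + lines[i][2]);  // segment midpoint x
            ++count;
        }
    }
    if (count == 0) return -1.0;
    // If the estimate lies left of the image center, the robot is
    // right of the hallway center and should steer left, and vice versa.
    return sumX / count;
}

Comparing the returned value against frame.cols / 2 yields the turn direction; in the architecture described, this estimate would be one of several parallel algorithm outputs that are combined before commanding the robot.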
Navigation using a spherical camera
Raman Arora, University of Wisconsin-Madison
"... A novel group theoretical method is proposed for autonomous navigation based on a spherical image camera. The environment of a robot is captured on a sphere. The three dimensional scenes at two different points in the space are related by a transformation from the special Euclidean motion group whic ..."
Abstract
A novel group-theoretical method is proposed for autonomous navigation based on a spherical image camera. The environment of a robot is captured on a sphere. The three-dimensional scenes at two different points in space are related by a transformation from the special Euclidean motion group, which is the semi-direct product of the rotation and translation groups. The motion of the robot is recovered by iteratively estimating the rotation and the translation in an Expectation-Maximization fashion.
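A brief LaTeX sketch of the stated relation may help; the symbols p, p', R, and t are chosen here for illustration and are not the paper's notation.

% Illustrative sketch: a scene point p seen from the first viewpoint
% maps to p' at the second viewpoint under a rigid motion (R, t) drawn
% from the special Euclidean group named in the abstract.
\[
  p' = R\,p + t,
  \qquad (R, t) \in SE(3) = SO(3) \ltimes \mathbb{R}^3 .
\]
% The abstract's iterative scheme alternates the two estimates until
% convergence, in the manner of Expectation-Maximization:
\[
  R^{(k+1)} \leftarrow \text{estimate of } R \text{ given } t^{(k)},
  \qquad
  t^{(k+1)} \leftarrow \text{estimate of } t \text{ given } R^{(k+1)}.
\]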