Results 1 – 7 of 7
Automatic Sensor Placement for Model-Based Robot Vision
, 2004
Abstract

Cited by 31 (4 self)
This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple 3D images to be taken from different vantage viewpoints. The task involves determining the optimal sensor placements and a shortest path through these viewpoints. During sensor planning, object features are resampled as individual points with attached surface normals. The optimal sensor placement graph is obtained by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by Christofides' algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3D animation of the object geometry, sensor specification, initialization of the viewpoint number and distribution, viewpoint evolution, shortest-path computation, scene simulation from a specific viewpoint, and parameter adjustment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method.
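The second stage of the pipeline above, computing a short tour through the planned viewpoints, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it substitutes the simpler MST preorder-walk 2-approximation for Christofides' algorithm (which additionally computes a minimum-weight matching on odd-degree MST vertices to reach a 1.5 bound), and treats viewpoints as bare 3D positions.

```python
import math

def tour_through_viewpoints(points):
    """Approximate a shortest closed tour visiting all viewpoints.

    Stand-in for the Christofides step described in the abstract:
    here we use the MST preorder-walk 2-approximation instead.
    points: list of (x, y, z) viewpoint positions (hypothetical input).
    """
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])

    # Prim's algorithm for the minimum spanning tree rooted at node 0.
    in_tree = {0}
    parent = {}
    best = {v: (dist(0, v), 0) for v in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda u: best[u][0])
        parent[v] = best[v][1]
        in_tree.add(v)
        del best[v]
        for u in best:
            d = dist(v, u)
            if d < best[u][0]:
                best[u] = (d, v)

    # A preorder walk of the MST yields a tour within 2x the optimum.
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(reversed(children.get(v, [])))
    length = sum(dist(order[i], order[(i + 1) % n]) for i in range(n))
    return order, length
```

For four viewpoints on a unit square the walk recovers the optimal perimeter tour of length 4.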
Optimal Strategies to Track and Capture a Predictable Target
 In Proc. IEEE Int. Conf. on Robotics & Automation
, 2003
Abstract

Cited by 10 (1 self)
We present an O(n log^{1+ε} n)-time algorithm for computing the optimal robot motion that maintains line-of-sight visibility of a target moving inside a polygon with n vertices, which may contain holes. The motion is optimal for the tracking robot (the observer) in the sense that the target either remains visible for the longest possible time, or is captured by the observer in the minimum time when capture is feasible. Thus, the algorithm maximizes the minimum time-to-escape. Our algorithm assumes that the target moves along a known path; thus, it is an offline algorithm. Our theoretical results for the algorithm's runtime assume that the target is moving along a shortest path from its source to its destination. This assumption, however, is not required to prove the optimality of the computed solution, hence the algorithm remains correct in the general case.
Optimal motion strategies to track and capture a predictable target
 In Proc. IEEE Int. Conf. on Robotics and Automation (ICRA)
, 2003
Abstract

Cited by 5 (1 self)
We present an O(n log^{1+ε} n)-time algorithm for computing the optimal mobile robot motion strategy that maintains line-of-sight visibility of a moving target inside a polygonal region with n vertices, which may contain holes. The algorithm is optimal for the tracking robot (the observer) in the sense that the computed path ensures that the target either remains visible for the longest possible time, or is captured in the minimum time when the target cannot escape the observer's visibility. Thus, the algorithm maximizes the minimum time-to-escape. Our algorithm assumes that the target moves along a known path; thus, it is an offline algorithm. Our theoretical results for the algorithm's runtime assume that the target is moving along a shortest path from its source to its destination. This assumption, however, is not required to prove the optimality of the computed solution, hence the algorithm remains correct in the general case.
Occlusion-Free Path Planning with a Probabilistic Roadmap
Abstract

Cited by 1 (0 self)
Abstract — We present a novel algorithm for path planning that avoids occlusions of a visual target for an "eye-in-hand" sensor on an articulated robot arm. We compute paths using a probabilistic roadmap to avoid collisions between the robot and obstacles, while penalizing trajectories that do not maintain line-of-sight. The system determines the space from which line-of-sight to the target is unimpeded (the visible region) using the method described in [11]. We assign penalties to trajectories within the roadmap proportional to the distance the camera travels while outside the visible region. Using Dijkstra's algorithm, we compute paths of minimal occlusion (maximal visibility) through the roadmap. In our experiments, we compare a shortest-distance path to the minimal-occlusion path and discuss the impact of the improved visibility.
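The weighted search the abstract describes can be sketched as Dijkstra over a roadmap whose edge costs add an occlusion penalty to path length. This is a minimal illustration under stated assumptions: the graph encoding, the `occluded_len` callback, and the penalty weight `alpha` are all hypothetical stand-ins, not the paper's actual formulation.

```python
import heapq

def min_occlusion_path(graph, start, goal, occluded_len, alpha=10.0):
    """Dijkstra over a roadmap whose edge cost penalizes occlusion.

    graph: {node: [(neighbor, edge_length), ...]} (assumed encoding)
    occluded_len: function (u, v) -> length of edge (u, v) traversed
        while the target is occluded, i.e. outside the visible region
    alpha: hypothetical penalty weight trading distance for visibility
    """
    cost = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == goal:
            break
        if c > cost.get(u, float("inf")):
            continue  # stale heap entry
        for v, length in graph.get(u, []):
            w = length + alpha * occluded_len(u, v)
            if c + w < cost.get(v, float("inf")):
                cost[v] = c + w
                prev[v] = u
                heapq.heappush(heap, (cost[v], v))
    # Reconstruct the minimal-occlusion path by walking predecessors.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], cost[goal]
```

With `alpha = 0` this degenerates to the shortest-distance path; raising `alpha` shifts the optimum toward routes that keep the target visible, mirroring the paper's comparison.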
Y.J.: Visibility planning: Predicting continuous period of unobstructed views
, 2004
Abstract

Cited by 1 (1 self)
Abstract. To perform surveillance tasks effectively, unobstructed views of objects are required; e.g., unobstructed video of objects is often needed for gait recognition. As a result, we need to determine intervals for video collection during which a desired object is visible w.r.t. a given sensor. In addition, these intervals must lie in the future so that the system can effectively plan and schedule sensors for collecting these videos. We describe an approach to determine these visibility intervals. A Kalman filter is first used to predict the trajectories of the objects. The trajectories are converted to polar-coordinate representations w.r.t. a given sensor. Trajectories with the same angular displacement w.r.t. the sensor over time can be found by determining intersection points of the functions representing these trajectories. Intervals between these intersection points are suitable for video collection. We also address the efficiency of finding these intersection points. An obvious brute-force approach runs in O(N²) time, where N is the number of objects; it suffices when N is small. When N is large, we introduce an optimal segment intersection algorithm running in O(N log² N + I) time, I being the number of intersection points. Finally, we model the prediction errors associated with the Kalman filter using a circular object representation. Experimental results comparing the performance of the brute-force and the optimal segment intersection algorithms are shown.
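The O(N²) baseline mentioned above can be sketched directly. This is an illustrative simplification, not the paper's method: each object's predicted bearing from the sensor is assumed to be a linear function θ(t) = a + b·t over the planning horizon (the paper derives the actual functions from Kalman-filter predictions), and a crossing of two such functions marks a time at which one object may occlude the other.

```python
def angular_crossings(trajs, horizon):
    """Brute-force O(N^2) intersection of predicted angular trajectories.

    trajs: list of (a, b) pairs; object i's bearing w.r.t. the sensor is
        modeled as theta_i(t) = a + b * t (a simplifying assumption).
    horizon: look-ahead time; only crossings in [0, horizon] matter.
    Returns a sorted list of (t, i, j) crossing events; the intervals
    between consecutive events are occlusion-free and thus suitable
    for video collection.
    """
    crossings = []
    n = len(trajs)
    for i in range(n):
        a_i, b_i = trajs[i]
        for j in range(i + 1, n):
            a_j, b_j = trajs[j]
            if b_i == b_j:
                continue  # equal angular rates: bearings never cross
            t = (a_j - a_i) / (b_i - b_j)
            if 0.0 <= t <= horizon:
                crossings.append((t, i, j))
    return sorted(crossings)
```

The paper's O(N log² N + I) segment-intersection algorithm replaces the pairwise loop with a sweep, which pays off once N grows large.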
Sensor, motion and temporal planning
, 2006
Abstract
We describe in this dissertation planning strategies which enhance the accuracy with which visual surveillance can be conducted and which expand the capabilities of visual surveillance systems. Several classes of planning strategies are considered: sensor planning, motion planning, and temporal planning. Sensor planning is the study of the control of cameras to optimize information gathering for vision algorithms. The study of camera control spans camera placement strategies, active camera (specifically, Pan-Tilt-Zoom or PTZ camera) control, and, in some cases, camera selection from a collection of static cameras. Camera placement strategies have been employed previously for enhancing vision algorithms such as 3D reconstruction, area coverage in surveillance, occlusion and visibility analysis, etc. We introduce a two-camera placement strategy that is utilized by a background subtraction algorithm, allowing it to achieve video-rate performance and invariance to several illumination artifacts, such as lighting changes and shadows. While camera placement strategies can improve the performance of vision algorithms significantly, their utility is limited in situations where it is more cost-effective