Results 1–10 of 35
Model-Based Recognition in Robot Vision
 ACM Computing Surveys
, 1986
"... This paper presents a comparative study and survey of modelbased objectrecognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the “binpicking ” ..."
Abstract

Cited by 176 (0 self)
This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the “bin-picking” problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2-D, 2½-D, and 3-D object representations, which are used as the basis for the recognition algorithms. Three …
Model-Based Recognition and Localization From Sparse Range or Tactile Data
, 1983
"... This paper discusses how local measurements of threedimensional pool[ions and surface normals (recorded by a set of tactile sensors, or by threedimensional range sensors), may be used o identify and locate objects, from among a set, of known objects. The objects are modeled as po!yhedra having up t ..."
Abstract

Cited by 152 (7 self)
This paper discusses how local measurements of three-dimensional positions and surface normals (recorded by a set of tactile sensors, or by three-dimensional range sensors) may be used to identify and locate objects from among a set of known objects. The objects are modeled as polyhedra having up to six degrees of freedom relative to the sensors. We show that inconsistent hypotheses about pairings between sensed points and object surfaces can be discarded efficiently by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. We show by simulation and by mathematical bounds that the number of hypotheses consistent with these constraints is small. We also show how to recover the position and orientation of the object from the sensed data. The algorithm's performance on data obtained from a triangulation range sensor is illustrated.
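The local-constraint pruning this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: only the pairwise distance constraint is shown, and the face names and distance-range table are invented for the example (a real system would precompute them from the polyhedral model).

```python
import itertools
import numpy as np

# Hypothetical precomputed table: for each pair of model faces, the min/max
# distance realizable between a point on one face and a point on the other.
# The values here are illustrative, not from the paper.
FACE_DISTANCE_RANGE = {
    ("A", "A"): (0.0, 2.0),
    ("A", "B"): (2.0, 5.0),
    ("B", "B"): (0.0, 2.0),
}

def distance_consistent(p1, face1, p2, face2):
    """Local constraint: the measured distance between two sensed points must
    lie inside the precomputed distance range of their candidate faces."""
    lo, hi = FACE_DISTANCE_RANGE[tuple(sorted((face1, face2)))]
    d = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
    return lo <= d <= hi

def prune_hypotheses(points, faces):
    """Keep only point-to-face assignments whose every pair passes the
    distance constraint (the angle constraints would be checked analogously)."""
    survivors = []
    for assignment in itertools.product(faces, repeat=len(points)):
        ok = all(
            distance_consistent(points[i], assignment[i], points[j], assignment[j])
            for i in range(len(points)) for j in range(i + 1, len(points))
        )
        if ok:
            survivors.append(assignment)
    return survivors

pts = [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)]  # two sensed points 2.5 apart
print(prune_hypotheses(pts, ["A", "B"]))  # → [('A', 'B'), ('B', 'A')]
```

The two points are 2.5 apart, which no single face can realize under the assumed table, so only the cross-face pairings survive — the kind of rapid hypothesis reduction the paper bounds analytically.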
Representation and Recognition of Free-Form Surfaces
, 1992
"... We introduce a new surface representation for recognizing curved objects. Our approach begins by representing an object by a discrete mesh of points built from range data or from a geometric model of the object. The mesh is computed from the data by deforming a standard shaped mesh, for example, an ..."
Abstract

Cited by 53 (6 self)
We introduce a new surface representation for recognizing curved objects. Our approach begins by representing an object by a discrete mesh of points built from range data or from a geometric model of the object. The mesh is computed from the data by deforming a standard shaped mesh, for example an ellipsoid, until it fits the surface of the object. We define local regularity constraints that the mesh must satisfy. We then define a canonical mapping between the mesh describing the object and a standard spherical mesh. A surface curvature index that is pose-invariant is stored at every node of the mesh. We use this object representation for recognition by comparing the spherical model of a reference object with the model extracted from a new observed scene. We show how the similarity between reference model and observed data can be evaluated, and we show how the pose of the reference object in the observed scene can be easily computed using this representation. We present results on real range images which show that this approach to modelling and recognizing three-dimensional objects has three main advantages: First, it is applicable to complex curved surfaces that cannot be handled by conventional techniques. Second, it reduces the recognition problem to the computation of similarity between spherical distributions; in particular, the recognition algorithm does not require any combinatorial search. Finally, even though it is based on a spherical mapping, the approach can handle occlusions and partial views.
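The "similarity between spherical distributions" step can be sketched under a strong simplifying assumption: the two spherical meshes are already in canonical node-to-node correspondence, so the comparison reduces to a normalized correlation of the per-node curvature indices (the paper's actual matching handles the remaining rotational ambiguity; this sketch does not).

```python
import numpy as np

def similarity(curv_ref, curv_obs):
    """Normalized correlation between two curvature distributions sampled at
    corresponding spherical-mesh nodes; values near 1 indicate a match.
    Assumes node i of one mesh corresponds to node i of the other."""
    a = np.asarray(curv_ref, float)
    b = np.asarray(curv_obs, float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print(round(similarity([0.1, 0.4, 0.4, 0.9], [0.1, 0.4, 0.4, 0.9]), 3))  # → 1.0
```

Note the recognition step is a fixed-size numerical comparison, with no combinatorial search — the property the abstract highlights.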
A robot vision system for recognizing 3D objects in low-order polynomial time
 IEEE Trans. Syst., Man, Cybern
, 1989
"... AhsrrucrThe two factors that determine the time complexity associated with modeldriven interpretation of range maps are: 1) the particular strategy used for the generation of object hypotheses; and 2) the manner in which both the model and the sensed data are organized, data organization being a p ..."
Abstract

Cited by 47 (6 self)
Abstract—The two factors that determine the time complexity associated with model-driven interpretation of range maps are: 1) the particular strategy used for the generation of object hypotheses; and 2) the manner in which both the model and the sensed data are organized, data organization being a primary determinant of the efficiency of verification of a given hypothesis. 3D-POLY, a working system for recognizing objects in the presence of occlusion and against cluttered backgrounds, is presented. The time complexity of this system is only O(n²) for single object recognition, where n is the number of features on the object. The most novel aspect of this system is the manner in which the feature data are organized for the models; we use a data structure called the feature sphere for the purpose. Efficient constant-time algorithms for assigning a feature to its proper place on a feature sphere and for extracting the neighbors of a given feature from the feature sphere representation are presented. For hypothesis generation, we use local feature sets, a notion similar to those used before us by Bolles, Shirai and others. The combination of the feature sphere idea for streamlining verification and the local feature sets for hypothesis generation results in a system whose time complexity has a low-order polynomial bound.
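The constant-time placement and neighbor-extraction property of a feature sphere can be sketched with a deliberately crude stand-in: bin features by quantized orientation on a coarse latitude/longitude grid backed by a dictionary. The grid resolution and the class below are assumptions for illustration, not the paper's data structure.

```python
import math
from collections import defaultdict

BINS = 8  # grid resolution along each angle (assumed, not from the paper)

def cell(direction):
    """Map a unit direction vector to a (latitude, longitude) grid cell."""
    x, y, z = direction
    lat = int((math.asin(max(-1.0, min(1.0, z))) + math.pi / 2) / math.pi * (BINS - 1))
    lon = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * (BINS - 1))
    return lat, lon

class FeatureSphere:
    """Toy stand-in for the feature-sphere idea: both operations below are
    O(1) dictionary accesses, independent of how many features are stored."""
    def __init__(self):
        self.cells = defaultdict(list)

    def add(self, direction, feature):   # constant-time placement
        self.cells[cell(direction)].append(feature)

    def neighbors(self, direction):      # constant-time neighbor extraction
        return self.cells[cell(direction)]

fs = FeatureSphere()
fs.add((0.0, 0.0, 1.0), "top-face")
print(fs.neighbors((0.0, 0.0, 1.0)))  # → ['top-face']
```

During verification, each hypothesized feature match costs a single lookup, which is what keeps the overall bound low-order polynomial.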
From Surfaces to Objects: Computer Vision and Three-Dimensional Scene Analysis
, 1989
"... This book was originally published by John Wiley and Sons, ..."
Abstract

Cited by 33 (10 self)
This book was originally published by John Wiley and Sons,
3D Computer Vision Using Structured Light: Design, Calibration and Implementation Issues
 Advances in Computers, 43
, 1996
"... Structured Light (SL) sensing is a well established method of range acquisition for Computer Vision. This chapter provides thorough discussions of design issues, calibration methodologies and implementation schemes for SL sensors. The challenges for SL sensor development are described and a range of ..."
Abstract

Cited by 26 (2 self)
Structured Light (SL) sensing is a well-established method of range acquisition for Computer Vision. This chapter provides thorough discussions of design issues, calibration methodologies and implementation schemes for SL sensors. The challenges for SL sensor development are described and a range of approaches are surveyed. A novel SL sensor, PRIME, the PRofile Imaging ModulE, has recently been developed and is used as a design example in the detailed discussions. KEYWORDS: Computer Vision, Range Image Acquisition, Structured Light Ranging, Real-Time Machine Vision, Sensor Calibration. This research is sponsored in part by grants awarded by the Japan Railways and the Office of Technology Development, U.S. Department of Energy. 1 Introduction. Machine vision as a discipline and technology owes its creation, development and growth to digital computers. Without computers machine vision is not possible. The main objective of machine vision is to extract information useful for performin...
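The core geometry behind structured-light ranging can be sketched as ray-plane triangulation. The setup below is assumed for illustration (camera center at the origin, a single known light plane); it is the standard textbook construction, not PRIME's calibration or design.

```python
import numpy as np

def triangulate(ray_dir, plane_n, plane_d):
    """Intersect a camera pixel ray with a projected light plane n·X = d.
    Assumes the camera center is at the origin, so points on the ray are
    t * ray_dir; solving n·(t*ray_dir) = d gives the range along the ray."""
    ray_dir = np.asarray(ray_dir, float)
    plane_n = np.asarray(plane_n, float)
    t = plane_d / np.dot(plane_n, ray_dir)
    return t * ray_dir

# Light plane x = 1 (n = (1,0,0), d = 1); pixel ray toward (1, 0, 2).
print(triangulate((1.0, 0.0, 2.0), (1.0, 0.0, 0.0), 1.0))  # → [1. 0. 2.]
```

Calibration, in this framing, is exactly the problem of recovering `plane_n`, `plane_d`, and the per-pixel ray directions accurately — which is why the chapter devotes so much attention to it.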
Extracting Surface Patches from Complete Range Descriptions
 To be presented at the International Conference on Recent Advances in 3D Imaging and Modeling
, 1997
"... Constructing a full CAD model of a part requires feature descriptions from all sides; in this case we consider surface patches as the geometric primitives. Most previous research in surface patch extraction has concentrated on extracting patches from a single view. This leads to several problems wit ..."
Abstract

Cited by 20 (11 self)
Constructing a full CAD model of a part requires feature descriptions from all sides; in this case we consider surface patches as the geometric primitives. Most previous research in surface patch extraction has concentrated on extracting patches from a single view. This leads to several problems with aligning and combining partial patch fragments in order to produce complete part models. We have avoided these problems by adapting our single-view range data segmentation program to extract patches, and thus models, directly from fully merged range datasets.
Toward 3D Vision from Range Images: An Optimization Framework and Parallel Networks
"... We propose a unified approach to solve low, intermediate and high level computer vision problems for 3D object recognition from range images. All three levels of computation are cast in an optimization framework and can be implemented on neural network style architecture. In the low level computatio ..."
Abstract

Cited by 17 (10 self)
We propose a unified approach to solve low-, intermediate- and high-level computer vision problems for 3D object recognition from range images. All three levels of computation are cast in an optimization framework and can be implemented on a neural network style architecture. In the low-level computation, the tasks are to estimate curvature images from the input range data. Subsequent processing at the intermediate level is concerned with segmenting these curvature images into coherent curvature sign maps. In the high level, image features are matched against model features based on an object description called the attributed relational graph (ARG). We show that the above computational tasks at each of the three different levels can all be formulated as optimizing a two-term energy function. The first term encodes unary constraints while the second term encodes binary ones. These energy functions are minimized using parallel and distributed relaxation-based algorithms which are well suited for neural...
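The two-term energy structure can be made concrete on a toy labeling problem. The unary costs, the smoothness penalty, and the exhaustive minimization below are all illustrative assumptions; the paper minimizes such energies with parallel relaxation rather than enumeration.

```python
import itertools

labels = [0, 1]  # e.g. a curvature-sign decision per node: concave / convex
unary = [[0.0, 2.0], [2.0, 0.0], [0.5, 0.4]]   # U[node][label], made-up costs
edges = [(0, 1), (1, 2)]                        # neighbor pairs in the map

def binary(a, b):
    """Binary (smoothness) term: neighboring nodes prefer equal labels."""
    return 0.0 if a == b else 1.0

def energy(x):
    """Two-term energy: sum of unary costs plus sum of binary costs."""
    return (sum(unary[i][x[i]] for i in range(len(x)))
            + sum(binary(x[i], x[j]) for i, j in edges))

best = min(itertools.product(labels, repeat=3), key=energy)
print(best, energy(best))  # → (0, 1, 1) 1.4
```

Node 2's weak unary preference is overridden in combination with the smoothness term, illustrating how the binary term couples local decisions into a coherent map.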
Graph Matching Using a Direct Classification of Node Attendance
 Pattern Recognition Journal
, 1996
"... An algorithm has been developed that finds isomorphisms between both graphs and subgraphs. The development is introduced in the object recognition problem domain. The method isolates matching subgraphs, finds a nodetonode mapping and reorders nodes thus permitting a direct comparison to be made be ..."
Abstract

Cited by 11 (4 self)
An algorithm has been developed that finds isomorphisms between both graphs and subgraphs. The development is introduced in the object recognition problem domain. The method isolates matching subgraphs, finds a node-to-node mapping and reorders nodes, thus permitting a direct comparison to be made between the resultant graphs. The algorithm is of polynomial order. It yields approximate results, maintaining a performance level for subgraph isomorphisms at or above 95% under a wide variety of conditions and with varying levels of noise. The performance on the full-size comparisons associated with graph isomorphisms has been found to be 100/100, also under a variety of conditions. Performance metrics, methods of testing and results are presented. KEYWORDS: Direct Classification, Graph Isomorphism, Subgraph Isomorphism, Graph Matching, Object Recognition. 1 Introduction. Object recognition is fundamentally a problem of subgraph isomorphism in that a model describes objects in their entiret...
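For reference, the exact problem the paper's polynomial-order approximation targets can be stated as brute force: try node-to-node mappings of the model graph into the scene graph and keep those that preserve adjacency. This exponential-time baseline is for illustration only; it is not the paper's method.

```python
import itertools

def subgraph_isomorphisms(model_edges, model_n, scene_edges, scene_n):
    """Brute-force subgraph isomorphism: enumerate injective mappings of the
    model's nodes into the scene's nodes and keep adjacency-preserving ones."""
    scene_adj = {frozenset(e) for e in scene_edges}
    matches = []
    for perm in itertools.permutations(range(scene_n), model_n):
        if all(frozenset((perm[a], perm[b])) in scene_adj for a, b in model_edges):
            matches.append(perm)
    return matches

# Model: a triangle 0-1-2; scene: a square 0-1-2-3 with one diagonal 0-2.
model = [(0, 1), (1, 2), (0, 2)]
scene = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(len(subgraph_isomorphisms(model, 3, scene, 4)))  # → 12
```

The scene contains two triangles ({0,1,2} and {0,2,3}), each matched in all 6 vertex orderings, hence 12 mappings — the combinatorics that motivates an approximate polynomial-time classifier.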
Tripod Operators for Recognizing Objects in Range Images; Rapid Rejection of Library Objects.
 IEEE Robotics and Automation (R&A)
, 1992
"... The tripod operator is a class of feature extraction operators for range images which facilitate the recognition and localization of objects. It consists of three points in 3space fixed at the vertices of an equilateral triangle and a procedure for making several scalar measurements in the coordina ..."
Abstract

Cited by 7 (0 self)
The tripod operator is a class of feature extraction operators for range images which facilitate the recognition and localization of objects. It consists of three points in 3-space fixed at the vertices of an equilateral triangle and a procedure for making several scalar measurements in the coordinate frame of the triangle. The triangle is then moved as a rigid body until the three vertices lie on the surface of some range image or modeled object. The resulting measurements are local shape features which are invariant under rigid motions. These features can be used to automatically find distinctive regions at which to begin recognition, to rapidly screen candidate objects for a match, and to speed pruning in the generation of interpretation trees. Tripod operators are applicable to all 3D shapes, and reduce the need for specialized feature detectors. A key property is that they can be moved on the surface of an object in only three DOF (like a surveyor's tripod on the ground). Consequ...
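A minimal sketch of the tripod-operator idea, under simplifying assumptions not from the paper: the surface is a height field z = f(x, y), the three feet sit at the vertices of an equilateral triangle, and the single scalar measurement taken is the signed distance of a probe sample from the plane through the feet (measured in the triangle's own frame, hence rigid-motion invariant).

```python
import numpy as np

def surface(x, y):
    """Illustrative height-field surface (a paraboloid), not from the paper."""
    return 0.1 * (x ** 2 + y ** 2)

def tripod_feature(cx, cy, side=1.0):
    """Place equilateral-triangle feet on the surface around (cx, cy) and
    return one scalar feature: the signed offset of the surface at the
    triangle's center from the plane through the three feet."""
    r = side / np.sqrt(3)  # circumradius of an equilateral triangle
    angles = (np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3)
    feet = [(cx + r * np.cos(a), cy + r * np.sin(a)) for a in angles]
    p = np.array([(x, y, surface(x, y)) for x, y in feet])
    n = np.cross(p[1] - p[0], p[2] - p[0])       # normal of the feet plane
    n /= np.linalg.norm(n)
    probe = np.array([cx, cy, surface(cx, cy)])  # surface sample at center
    return float(np.dot(probe - p.mean(axis=0), n))

print(round(tripod_feature(0.0, 0.0), 6))  # paraboloid dips below the feet plane
```

Because the feature is defined entirely in the triangle's frame, rigidly moving the object and the tripod together leaves it unchanged — the invariance the abstract relies on for screening candidate objects.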