Results 1–10 of 104
Model-Based Recognition in Robot Vision
ACM Computing Surveys, 1986
Abstract

Cited by 161 (0 self)
This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the "bin-picking" problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2D, 2½-D, and 3D object representations, which are used as the basis for the recognition algorithms. Three …
Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution
1997
Abstract

Cited by 110 (11 self)
The three-dimensional shape and relative depth of a smoothly curving layered transparent surface may be communicated particularly effectively when the surface is artistically enhanced with sparsely distributed opaque detail. This paper describes how the set of principal directions and principal curvatures specified by local geometric operators can be understood to define a natural "flow" over the surface of an object, and can be used to guide the placement of the lines of a stroke texture that seeks to represent 3D shape information in a perceptually intuitive way. The driving application for this work is the visualization of layered isovalue surfaces in volume data, where the particular identity of an individual surface is not generally known a priori and observers will typically wish to view a variety of different level surfaces from the same distribution, superimposed over underlying opaque structures. By advecting an evenly distributed set of tiny opaque particles, and the empty space between them, via 3D line integral convolution through the vector field defined by the principal directions and principal curvatures of the level surfaces passing through each grid point of a 3D volume, it is possible to generate a …
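The "flow" of principal directions described in this abstract can be illustrated numerically: for a surface given as a height field z = f(x, y) (a simplification; the paper works with level surfaces of volume data), the eigenvectors of the shape operator, built from the first and second fundamental forms, give the principal directions and its eigenvalues the principal curvatures. A minimal sketch in Python; the function name and finite-difference setup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def principal_directions(f, x, y, h=1e-3):
    """Principal curvatures/directions of the height field z = f(x, y)
    at (x, y), via eigen-decomposition of the shape operator."""
    # numerical first and second derivatives
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    # first (I) and second (II) fundamental forms
    E, F, G = 1 + fx**2, fx * fy, 1 + fy**2
    n = np.sqrt(1 + fx**2 + fy**2)
    I2 = np.array([[E, F], [F, G]])
    II = np.array([[fxx, fxy], [fxy, fyy]]) / n
    # shape operator: eigenvalues = principal curvatures,
    # eigenvectors = principal directions in the parameter plane
    S = np.linalg.solve(I2, II)
    k, dirs = np.linalg.eig(S)
    return k, dirs

# unit-sphere cap: every point is umbilic, so |k1| = |k2| = 1
k, _ = principal_directions(lambda x, y: np.sqrt(1 - x**2 - y**2), 0.1, 0.2)
```

On a sphere both principal curvatures coincide, so the direction field is degenerate there; on generic surfaces the two eigenvector fields define the flow along which the stroke texture is advected.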
Viewpoint Invariant Texture Matching and Wide Baseline Stereo
In Proc. ICCV, 2001
Abstract

Cited by 89 (7 self)
We describe and demonstrate a texture region descriptor which is invariant to affine geometric and photometric transformations, and insensitive to the shape of the texture region. It is applicable to texture patches which are locally planar and have stationary statistics. The novelty of the descriptor is that it is based on statistics aggregated over the region, resulting in richer and more stable descriptors than those computed at a point. Two texture matching applications of this descriptor are demonstrated: (1) it is used to automatically identify regions of the same type of texture, but with varying surface pose, within a single image …
Computing Local Surface Orientation and Shape from Texture for Curved Surfaces
1997
Abstract

Cited by 88 (4 self)
Shape from texture is best analyzed in two stages, analogous to stereopsis and structure from motion: (a) computing the 'texture distortion' from the image, and (b) interpreting the 'texture distortion' to infer the orientation and shape of the surface in the scene. We model the texture distortion for a given point and direction on the image plane as an affine transformation and derive the relationship between the parameters of this transformation and the shape parameters. We have developed a technique for estimating affine transforms between nearby image patches which is based on solving a system of linear constraints derived from a differential analysis. One need not explicitly identify texels or make restrictive assumptions about the nature of the texture such as isotropy. We use nonlinear minimization of a least squares error criterion to recover the surface orientation (slant and tilt) and shape (principal curvatures and directions) based on the estimated affine transforms in a number of different directions. A simple linear algorithm based on singular value decomposition of the linear parts of the affine transforms provides the initial guess for the minimization procedure. Experimental results on both planar and curved surfaces under perspective projection demonstrate good estimates for both orientation and shape. A sensitivity analysis yields predictions for both computer vision algorithms and human perception of shape from texture.
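The SVD-based initialization this abstract mentions can be sketched under a simple orthographic model: a plane at slant σ compresses the image by cos σ along the tilt direction, so the singular-value ratio of the affine linear part encodes the slant and the corresponding left singular vector the tilt. A hedged sketch; the function name and the synthetic check are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def slant_tilt_from_affine(A):
    """Initial slant/tilt estimate from the linear part A of a local
    affine distortion (orthographic foreshortening model)."""
    U, s, Vt = np.linalg.svd(A)
    # foreshortening compresses by cos(slant) along the tilt direction
    slant = np.arccos(np.clip(s[1] / s[0], 0.0, 1.0))
    tilt = np.arctan2(U[1, 1], U[0, 1])  # direction of max compression
    return slant, tilt

# synthetic check: plane slanted 60 degrees, tilt along the x axis
sigma, tau = np.deg2rad(60.0), 0.0
R = np.array([[np.cos(tau), -np.sin(tau)],
              [np.sin(tau),  np.cos(tau)]])
A = R @ np.diag([np.cos(sigma), 1.0]) @ R.T
slant, tilt = slant_tilt_from_affine(A)
```

The tilt is recovered only up to a 180-degree ambiguity from a single affine transform, which is one reason the paper combines estimates over several directions in a nonlinear minimization.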
Shape-adapted smoothing in estimation of 3D depth cues from affine distortions of local 2D brightness structure
In Proc. 3rd European Conf. on Computer Vision, 1994
Abstract

Cited by 68 (13 self)
Rotationally symmetric operations in the image domain may give rise to shape distortions. This article describes a way of reducing this effect for a general class of methods for deriving 3D shape cues from 2D image data, which are based on the estimation of locally linearized distortion of brightness patterns. By extending the linear scale-space concept into an affine scale-space representation and performing affine shape adaptation of the smoothing kernels, the accuracy of surface orientation estimates derived from texture and disparity cues can be improved, typically by one order of magnitude. The reason for this is that the image descriptors on which the methods are based will be relatively invariant under affine transformations, and the error will thus be confined to the higher-order terms in the locally linearized perspective mapping.
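The affine shape adaptation described here amounts to replacing the rotationally symmetric Gaussian by an anisotropic kernel whose covariance matches the estimated local distortion. A minimal sketch of constructing such a kernel (the iterative loop that estimates the covariance from the local second-moment matrix is omitted, and the helper name is an illustrative assumption):

```python
import numpy as np

def anisotropic_gaussian_kernel(Sigma, radius=12):
    """Discrete anisotropic Gaussian with 2x2 covariance Sigma.
    Affine shape adaptation uses such a kernel, with Sigma matched
    to the local affine distortion, instead of a symmetric one."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    pts = np.stack([xs, ys], axis=-1).astype(float)
    Sinv = np.linalg.inv(Sigma)
    # quadratic form x^T Sigma^{-1} x at every grid point
    q = np.einsum('...i,ij,...j->...', pts, Sinv, pts)
    kern = np.exp(-0.5 * q)
    return kern / kern.sum()

# kernel elongated along x: strong smoothing along the foreshortened
# direction, weak across it
Sigma = np.array([[9.0, 0.0], [0.0, 1.0]])
k = anisotropic_gaussian_kernel(Sigma)
```

Convolving with this kernel instead of a rotationally symmetric one keeps the descriptors consistent under the locally linearized (affine) image distortion, which is the source of the accuracy gain the abstract reports.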
Canonical Frames for Planar Object Recognition
1992
Abstract

Cited by 58 (10 self)
We present a canonical frame construction for determining projectively invariant indexing functions for non-algebraic smooth plane curves. These invariants are semi-local rather than global, which promotes tolerance to occlusion. Two applications are demonstrated. First, we report preliminary work on building a model-based recognition system for planar objects. We demonstrate that the invariant measures, derived from the canonical frame, provide sufficient discrimination between objects to be useful for recognition; recognition works on partially occluded objects in cluttered scenes. Second, jigsaw puzzles are assembled and rendered from a single strongly perspective view of the separate pieces. Both applications require no camera calibration or pose information, and models are generated and verified directly from images.
Shape-adapted smoothing in estimation of 3D shape cues from affine distortions of local 2D brightness structure
2001
Abstract

Cited by 52 (3 self)
This article describes a method for reducing the shape distortions due to scale-space smoothing that arise in the computation of 3D shape cues using operators (derivatives) defined from scale-space representation. More precisely, we are concerned with a general class of methods for deriving 3D shape cues from 2D image data based on the estimation of locally linearized deformations of brightness patterns. This class …