A computational approach to edge detection
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1986
Abstract

Cited by 3088 (0 self)
This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge. This detection scheme uses several elongated operators at each point, and the directional operator outputs are integrated with the gradient maximum detector. Index Terms—Edge detection, feature extraction, image processing, machine vision, multiscale image analysis.
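The simple approximate implementation the abstract mentions, marking edges at maxima of the gradient magnitude of a Gaussian-smoothed image, can be sketched in a few lines (a minimal illustration, not the paper's full detector: non-maximum suppression, hysteresis thresholding, and feature synthesis are omitted, and the function name is ours):

```python
import numpy as np

def gradient_edge_strength(image, sigma=1.0):
    """Gradient magnitude of a Gaussian-smoothed image: the core of the
    approximate detector described above (non-maximum suppression and
    thresholding omitted for brevity)."""
    # Separable Gaussian smoothing with a truncated kernel (zero-padded borders).
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    smoothed = np.apply_along_axis(
        lambda m: np.convolve(m, k, mode="same"), 0, image.astype(float))
    smoothed = np.apply_along_axis(
        lambda m: np.convolve(m, k, mode="same"), 1, smoothed)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy)

# Toy example: a vertical step edge gives the strongest response at the step.
step = np.zeros((9, 9))
step[:, 5:] = 1.0
mag = gradient_edge_strength(step, sigma=1.0)
col = int(np.argmax(mag[4]))   # location of the maximum along the middle row
```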
Iterative point matching for registration of free-form curves and surfaces
, 1994
Abstract

Cited by 480 (6 self)
A heuristic method has been developed for registering two sets of 3D curves obtained by using an edge-based stereo system, or two dense 3D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.
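The core loop described here, matching each point to its closest counterpart and then solving a least-squares rigid motion, can be sketched as follows (a simplified illustration using the SVD-based least-squares step; the paper's statistical outlier handling is omitted, and the point data below are our own toy example):

```python
import numpy as np

def icp_step(P, Q):
    """One iteration of iterative closest-point matching.
    P, Q: (N, 3) and (M, 3) point sets. Returns R, t moving P toward Q."""
    # 1. Match each point in P to its nearest neighbour in Q.
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    matches = Q[d2.argmin(axis=1)]
    # 2. Least-squares rigid motion from the correspondences (SVD method).
    mp, mq = P.mean(0), matches.mean(0)
    H = (P - mp).T @ (matches - mq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mq - R @ mp
    return R, t

# Well-separated points under a small pure translation: one step recovers it.
P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]])
t_true = np.array([0.3, -0.1, 0.2])
Q = P + t_true
R, t = icp_step(P, Q)
```

In the real algorithm this step is iterated until the average point-to-point distance stops decreasing.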
Kalman Filter-based Algorithms for Estimating Depth from Image Sequences
, 1989
Abstract

Cited by 214 (26 self)
Using known camera motion to estimate depth from image sequences is an important problem in robot vision. Many applications of depth-from-motion, including navigation and manipulation, require algorithms that can estimate depth in an online, incremental fashion. This requires a representation that records the uncertainty in depth estimates and a mechanism that integrates new measurements with existing depth estimates to reduce the uncertainty over time. Kalman filtering provides this mechanism. Previous applications of Kalman filtering to depth-from-motion have been limited to estimating depth at the location of a sparse set of features. In this paper, we introduce a new, pixel-based (iconic) algorithm that estimates depth and depth uncertainty at each pixel and incrementally refines these estimates over time. We describe the algorithm and contrast its formulation and performance to that of a feature-based Kalman filtering algorithm. We compare the performance of the two approaches by analyzing their theoretical convergence rates, by conducting quantitative experiments with images of a flat poster, and by conducting qualitative experiments with images of a realistic outdoor-scene model. The results show that the new method is an effective way to extract depth from lateral camera translations. This approach can be extended to incorporate general motion and to integrate other sources of information, such as stereo. The algorithms we have developed, which combine Kalman filtering with iconic descriptions of depth, therefore can serve as a useful and general framework for low-level dynamic vision.
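Per pixel, the incremental refinement the abstract describes reduces to the scalar Kalman measurement update (a minimal sketch assuming a static depth, i.e. with the prediction step between frames left out):

```python
def kalman_update(d_prior, var_prior, d_meas, var_meas):
    """Scalar Kalman update for one pixel's depth estimate: fuse a new
    measurement with the current estimate, reducing the variance."""
    K = var_prior / (var_prior + var_meas)        # Kalman gain
    d_post = d_prior + K * (d_meas - d_prior)     # refined estimate
    var_post = (1.0 - K) * var_prior              # reduced uncertainty
    return d_post, var_post

# Two equally uncertain estimates fuse to their mean with half the variance.
d, v = kalman_update(2.0, 1.0, 4.0, 1.0)   # d -> 3.0, v -> 0.5
```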
Model-Based Recognition and Localization From Sparse Range or Tactile Data
, 1983
Abstract

Cited by 143 (7 self)
This paper discusses how local measurements of three-dimensional positions and surface normals (recorded by a set of tactile sensors, or by three-dimensional range sensors) may be used to identify and locate objects from among a set of known objects. The objects are modeled as polyhedra having up to six degrees of freedom relative to the sensors. We show that inconsistent hypotheses about pairings between sensed points and object surfaces can be discarded efficiently by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. We show by simulation and by mathematical bounds that the number of hypotheses consistent with these constraints is small. We also show how to recover the position and orientation of the object from the sensed data. The algorithm's performance on data obtained from a triangulation range sensor is illustrated.
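The local distance constraint can be illustrated as follows (a toy sketch under our own assumptions: the per-face-pair (min, max) distance table is taken as precomputed offline from the polyhedral model, and the numeric ranges below are hypothetical):

```python
def prune_pairings(d_sensed, face_pairs, dist_range, tol=1e-6):
    """Distance-constraint pruning: a hypothesis pairing two sensed points
    with model faces (fi, fj) is kept only if the measured point-to-point
    distance can be realised by some pair of points on those faces.
    dist_range[(fi, fj)] holds the precomputed (min, max) face distance."""
    keep = []
    for fi, fj in face_pairs:
        lo, hi = dist_range[(fi, fj)]
        if lo - tol <= d_sensed <= hi + tol:
            keep.append((fi, fj))
    return keep

# Hypothetical ranges for three face pairings of a toy model.
ranges = {(0, 1): (0.0, 1.414), (0, 2): (1.0, 1.732), (1, 2): (0.0, 1.414)}
survivors = prune_pairings(1.6, [(0, 1), (0, 2), (1, 2)], ranges)
# Only the pairing (0, 2) admits a measured distance of 1.6.
```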
A Review of Statistical Data Association Techniques for Motion Correspondence
 International Journal of Computer Vision
, 1993
Abstract

Cited by 119 (3 self)
Motion correspondence is a fundamental problem in computer vision and many other disciplines. This article describes statistical data association techniques originally developed in the context of target tracking and surveillance and now beginning to be used in dynamic motion analysis by the computer vision community. The Mahalanobis distance measure is first introduced before discussing the limitations of nearest-neighbor algorithms. Then, the track-splitting, joint-likelihood, and multiple-hypothesis algorithms are described, each method solving an increasingly complicated optimization problem. Real-time constraints may prohibit the application of these optimal methods. The suboptimal joint probabilistic data association algorithm is therefore described. The advantages, limitations, and relationships between the approaches are discussed.
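The Mahalanobis distance that opens the survey weights the innovation by its covariance, which is what makes gating and association meaningful under anisotropic uncertainty (a minimal sketch; the variable names and numeric example are ours):

```python
import numpy as np

def mahalanobis2(z, z_pred, S):
    """Squared Mahalanobis distance between a measurement z and a track's
    predicted measurement z_pred with innovation covariance S. In gating,
    candidates whose distance exceeds a chi-square threshold are discarded
    before nearest-neighbour (or more elaborate) association."""
    v = np.asarray(z, float) - np.asarray(z_pred, float)
    return float(v @ np.linalg.solve(S, v))

S = np.diag([4.0, 1.0])                          # anisotropic uncertainty
d2_a = mahalanobis2([2.0, 0.0], [0.0, 0.0], S)   # along the uncertain axis
d2_b = mahalanobis2([0.0, 2.0], [0.0, 0.0], S)   # along the certain axis
# Equal Euclidean offsets, but d2_b > d2_a: the second is far less plausible.
```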
Surfaces from Stereo: Integrating Feature Matching, Disparity Estimation, and Contour Detection
 PAMI
, 1989
Abstract

Cited by 76 (0 self)
The goal of stereo algorithms is to determine the three-dimensional distance, or depth, of objects from a stereo pair of images. The usual approach is to first identify corresponding features between the two images and estimate their depths, then interpolate to obtain a complete distance or depth map. Traditionally, finding the corresponding features has been considered to be the most difficult problem. Also, occluding and ridge contours (depth and orientation discontinuities) have not been explicitly detected, which has made surface interpolation difficult. The approach described in this paper integrates the processes of feature matching, contour detection, and surface interpolation. Integration is necessary to ensure that the detected surfaces are smooth. Surface interpolation takes into account detected occluding and ridge contours in the scene; interpolation is performed within regions enclosed by these contours. Planar and quadratic patches are used as local models of the surface. Occluded regions in the image are identified, and are not used for matching and interpolation. A coarse-to-fine algorithm is presented that generates a multiresolution hierarchy of surface maps, one at each level of resolution. Experimental results are given for a variety of stereo images. Index Terms—Boundary detection, feature matching, integration, stereo vision, surface interpolation, three-dimensional segmentation, three-dimensional vision.
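The disparity-to-depth conversion underlying any such algorithm is the standard triangulation relation for a rectified pair (background, not this paper's contribution; the parameter values below are made up for illustration):

```python
def depth_from_disparity(d, focal_px, baseline_m):
    """Rectified-stereo triangulation: depth Z = f * B / d, where d is the
    disparity in pixels, f the focal length in pixels, and B the baseline
    in metres. Smaller disparity means a more distant point."""
    return focal_px * baseline_m / d

# Illustrative numbers: 16 px disparity, 800 px focal length, 10 cm baseline.
Z = depth_from_disparity(16.0, 800.0, 0.1)   # 800 * 0.1 / 16 = 5.0 m
```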
Shape from shadows
 Journal of Experimental Psychology: Human Perception & Performance
, 1990
Abstract

Cited by 40 (1 self)
The colors, textures, and shapes of shadows are physically constrained in several ways in natural scenes. The visual system appears to ignore these constraints, however, and to accept many patterns as shadows even though they could not occur naturally. In the stimuli that we have studied, the only requirements for the perception of depth due to shadows were that shadow regions be darker than the surrounding, non-shadow regions and that there be consistent contrast polarity along the shadow border. Three-dimensional shape due to shadows was perceived when shadow areas were filled with colors or textures that could not occur in natural scenes, when shadow and non-shadow regions had textures that moved in different directions, or when they were presented on different depth planes. The results suggest that the interpretation of shadows begins with the identification of acceptable shadow borders by a cooperative process that requires consistent contrast polarity across a range of scales at each point along the border. Finally, we discuss how the identification of a shadow region can help the visual system to patch together areas that are separated by shadow boundaries, to identify directions of surface curvature, and to select a preferred three-dimensional interpretation while rejecting others. How does the visual system identify and use shadow information in a scene? In general, objects are not illuminated uniformly from all directions, and the directed nature of the light produces both shading and shadow cues to the object shape. Shading, the variation of reflected flux with the angle between the incident light and the surface normal (Ikeuchi & Horn, 1981; Pentland, 1982; Woodham, 1981, 1984), can give information concerning surface orientation in areas receiving direct illumination. On the other hand, a shadow area is blocked from direct illumination (Beck, 1972; Berbaum,
Motion of an Uncalibrated Stereo Rig: Self-Calibration and Metric Reconstruction
 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION
, 1993
Abstract

Cited by 37 (2 self)
We address in this paper the problem of self-calibration and metric reconstruction (up to a scale) from one unknown motion of an uncalibrated stereo rig, assuming the coordinates of the principal point of each camera are known (this assumption is not necessary if one more motion is available). The epipolar constraint is first formulated for two uncalibrated images. The problem then becomes one of estimating the unknowns such that the discrepancy from the epipolar constraint, measured as distances between points and their corresponding epipolar lines, is minimized. The initialization of the unknowns is based on the work of Maybank, Luong, and Faugeras on self-calibration of a single moving camera, which requires solving a set of so-called Kruppa equations. The redundancy of the information contained in a sequence of stereo images makes this method more robust than using a sequence of monocular images. Real data have been used to test the proposed method, and the results obtained are quite good.
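The minimized quantity, the distance between a point and its corresponding epipolar line, can be written directly from a fundamental matrix F (a minimal sketch; the example F corresponds to a pure horizontal translation and is our own toy choice, not data from the paper):

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance from point x2 (image 2) to the epipolar line F @ x1 of its
    match x1 (image 1). F is the fundamental matrix; points are
    homogeneous 3-vectors."""
    l = F @ x1                          # epipolar line (a, b, c): ax + by + c = 0
    return abs(l @ x2) / np.hypot(l[0], l[1])

# Pure horizontal translation: epipolar lines are the image rows.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
# A match two rows off its epipolar line is two pixels from it.
d = epipolar_distance(F, np.array([3., 5., 1.]), np.array([7., 7., 1.]))
```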
Techniques for disparity measurement
 CVGIP: IU
, 1991
Abstract

Cited by 35 (3 self)
Many different approaches have been suggested for the measurement of structure in space from spatially separated cameras. In this report we critically examine some of these techniques. Through a series of examples we show that none of the current mechanisms of disparity measurement are particularly robust. By considering some of the implications of disparity in the frequency domain, we present a new definition of disparity that is tied to the interocular phase difference in bandpass versions of the monocular images. Finally, we present a new technique for measuring disparity as the local phase difference between bandpass versions of the two images, and we show how this technique surmounts some of the difficulties encountered by current disparity detection mechanisms.
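The proposed definition, disparity as the interocular phase difference of bandpass (e.g. Gabor-filtered) signals, can be sketched in one dimension as follows (an illustrative reimplementation under our own assumptions, not the authors' code; sign convention here: a positive result means the right signal is the left one shifted rightward by that many samples):

```python
import numpy as np

def phase_disparity(left, right, omega, sigma, x):
    """Estimate the local shift between two 1-D signals at position x from
    the phase difference of their complex Gabor responses."""
    n = np.arange(len(left))
    gabor = np.exp(-0.5 * ((n - x) / sigma) ** 2) * np.exp(1j * omega * (n - x))
    rL = np.sum(gabor * left)          # local complex (amplitude, phase) responses
    rR = np.sum(gabor * right)
    dphi = np.angle(rR * np.conj(rL))  # phase difference, wrapped to (-pi, pi]
    return dphi / omega                # shift estimate: d = delta_phi / omega

# A sinusoid shifted by 3 samples is recovered from phase alone.
omega = 2 * np.pi / 16
n = np.arange(128)
left = np.sin(omega * n)
right = np.sin(omega * (n - 3))        # "right image": left shifted by 3 samples
d = phase_disparity(left, right, omega, sigma=8.0, x=64)
```

Note the wrap to (-pi, pi]: shifts larger than half the filter's period are aliased, which is why phase-based methods operate over bandpass channels at several scales.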