A computational approach to edge detection
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1986
Cited by 4621 (0 self)
Abstract—This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge. This detection scheme uses several elongated operators at each point, and the directional operator outputs are integrated with the gradient maximum detector. Index Terms—Edge detection, feature extraction, image processing, machine vision, multiscale image analysis.
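The detector described in this abstract reduces, in one dimension, to marking local maxima of the derivative magnitude of a Gaussian-smoothed signal. A minimal numpy sketch of that idea follows; the function names, the sigma, and the relative threshold are illustrative choices, not Canny's implementation:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Sampled Gaussian, truncated at 4 sigma and normalized to unit sum.
    radius = max(1, int(4 * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def detect_edges_1d(signal, sigma=2.0, rel_thresh=0.5):
    # Canny's 1-D recipe: smooth with a Gaussian, differentiate, and
    # mark edges at strong local maxima of the derivative magnitude.
    smoothed = np.convolve(signal, gaussian_kernel(sigma), mode="same")
    mag = np.abs(np.gradient(smoothed))
    strong = mag[1:-1] >= rel_thresh * mag.max()
    local_max = (mag[1:-1] >= mag[:-2]) & (mag[1:-1] >= mag[2:])
    return np.nonzero(strong & local_max)[0] + 1

step = np.concatenate([np.zeros(50), np.ones(50)])
edges = detect_edges_1d(step)
```

The full detector's 2-D non-maximum suppression along the gradient direction and its hysteresis thresholding are omitted here.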
Edge Detection
, 1985
Cited by 1277 (1 self)
For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step; vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
Robust object recognition with cortex-like mechanisms
 IEEE Trans. Pattern Analysis and Machine Intelligence
, 2007
Cited by 388 (48 self)
Abstract—We introduce a new general framework for the recognition of complex visual scenes, which is motivated by biology: We describe a hierarchical system that closely follows the organization of visual cortex and builds an increasingly complex and invariant feature representation by alternating between a template matching and a maximum pooling operation. We demonstrate the strength of the approach on a range of recognition tasks: From invariant single object recognition in clutter to multiclass categorization problems and complex scene understanding tasks that rely on the recognition of both shape-based as well as texture-based objects. Given the biological constraints that the system had to satisfy, the approach performs surprisingly well: It has the capability of learning from only a few training examples and competes with state-of-the-art systems. We also discuss the existence of a universal, redundant dictionary of features that could handle the recognition of most object categories. In addition to its relevance for computer vision, the success of this approach suggests a plausibility proof for a class of feedforward models of object recognition in cortex.
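The alternation the abstract describes (a template-matching layer followed by a local maximum-pooling layer) can be sketched on a single layer pair. The layer names, template sizes, and pooling window below are illustrative toy choices, not the paper's parameters:

```python
import numpy as np

def s_layer(image, templates):
    # Template matching: one response map per template
    # (valid cross-correlation, no padding).
    h, w = templates[0].shape
    H, W = image.shape
    maps = []
    for t in templates:
        out = np.empty((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + h, j:j + w] * t)
        maps.append(out)
    return maps

def c_layer(maps, pool=2):
    # Maximum pooling over local neighborhoods gives tolerance
    # to small shifts of the matched pattern.
    pooled = []
    for m in maps:
        H, W = m.shape
        Hp, Wp = H // pool, W // pool
        pooled.append(m[:Hp * pool, :Wp * pool]
                      .reshape(Hp, pool, Wp, pool).max(axis=(1, 3)))
    return pooled

rng = np.random.default_rng(0)
image = rng.random((8, 8))
templates = [rng.random((3, 3))]
maps = s_layer(image, templates)
pooled = c_layer(maps, pool=2)
```

Stacking several such S/C pairs, with templates learned from image patches, yields the increasingly invariant hierarchy the abstract refers to.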
Face Recognition: the Problem of Compensating for Changes in Illumination Direction
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Cited by 348 (3 self)
A face recognition system must recognize a face from a novel image despite the variations between images of the same face. A common approach to overcoming image variations because of changes in the illumination conditions is to use image representations that are relatively insensitive to these variations. Examples of such representations are edge maps, image intensity derivatives, and images convolved with 2-D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as viewpoint and facial expression. Our findings indicated that none of the representations considered is sufficient by itself to overcome image variations because of a change in the direction of illumination. Similar results were obtained for changes due to viewpoint and expression. Image representations that emphasized the horizontal features were found to be less sensitive to changes in the direction of illumination. However, systems...
Edge Detection and Ridge Detection with Automatic Scale Selection
 CVPR'96
, 1996
Cited by 343 (24 self)
When extracting features from image data, the type of information that can be extracted may be strongly dependent on the scales at which the feature detectors are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important property of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analysed in detail. It is shown that by expressing these in terms of γ-normalized derivatives, an immediate consequence of this definition is that fine scales are selected for sharp edges (so as to reduce the shape distortions due to scale-space smoothing), whereas coarse scales are selected for diffuse edges, such that an edge model constitutes a valid abstraction of the intensity profile across the edge. With slight modifications, this idea can be used for formulating a ridge detector with automatic scale selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge.
A generalized Gaussian image model for edge-preserving MAP estimation
 IEEE Trans. on Image Processing
, 1993
Cited by 300 (36 self)
Abstract—We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori (MAP) solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.
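As a reading aid, a prior of the kind the abstract describes is commonly written in the following form. This is a hedged reconstruction from the abstract's description, not a quotation of the paper's notation:

```latex
\log p(x) \;=\; -\frac{1}{p\,\sigma^{p}}
  \sum_{\{i,j\}\in\mathcal{C}} b_{ij}\,\lvert x_i - x_j\rvert^{p}
  \;+\; \text{const}, \qquad 1 \le p \le 2,
```

where $\mathcal{C}$ ranges over neighboring pixel pairs and $b_{ij}$ are clique weights. At $p = 2$ this is the ordinary Gaussian MRF, which smooths across edges; as $p \to 1$ the penalty approaches an absolute difference and preserves edges. The scaling-invariance property mentioned above means the MAP estimate satisfies $\hat{x}(\alpha y) = \alpha\,\hat{x}(y)$ for data scaled by $\alpha$.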
B-Spline Signal Processing: Part I - Theory
 IEEE Trans. Signal Processing
, 1993
Cited by 160 (31 self)
This paper describes a set of efficient filtering techniques for the processing and representation of signals in terms of continuous B-spline basis functions. We first consider the problem of determining the spline coefficients for an exact signal interpolation (direct B-spline transform). The reverse operation is the signal reconstruction from its spline coefficients with an optional zooming factor m (indirect B-spline transform). We derive general expressions for the z-transforms and the equivalent continuous impulse responses of B-spline interpolators of order n. We present simple techniques for signal differentiation and filtering in the transformed domain. We then derive recursive filters that efficiently solve the problems of smoothing spline and least squares approximations. The smoothing spline technique approximates a signal with a complete set of coefficients subject to certain regularization or smoothness constraints. The least squares approach, on the other hand, uses a reduced number of B-spline coefficients with equally spaced nodes; this technique is in many ways analogous to the application of an anti-aliasing lowpass filter prior to decimation in order to represent a signal correctly with a reduced number of samples.
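For the cubic case, the direct B-spline transform reduces to a causal/anti-causal recursive filter pair with the single pole z1 = sqrt(3) - 2. The sketch below is a hedged illustration of that idea: the boundary initializations are crude constant-extension approximations rather than the exact mirror conditions, so only interior samples reconstruct accurately.

```python
import numpy as np

def cubic_bspline_coeffs(s):
    # Direct cubic B-spline transform via recursive filtering.
    # Boundary handling here is an approximate constant extension.
    z1 = np.sqrt(3.0) - 2.0
    n = len(s)
    cp = np.empty(n)
    cp[0] = s[0] / (1.0 - z1)               # approximate causal init
    for k in range(1, n):                   # causal pass: 1 / (1 - z1 z^-1)
        cp[k] = s[k] + z1 * cp[k - 1]
    cm = np.empty(n)
    cm[-1] = -z1 * cp[-1] / (1.0 - z1)      # approximate anti-causal init
    for k in range(n - 2, -1, -1):          # anti-causal pass: -z1 / (1 - z1 z)
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return 6.0 * cm                         # overall gain

def reconstruct(c):
    # Indirect transform at the sample points: convolve the coefficients
    # with the sampled cubic B-spline [1, 4, 1] / 6.
    return np.convolve(c, np.array([1.0, 4.0, 1.0]) / 6.0, mode="same")

x = np.linspace(0, 4 * np.pi, 64)
s = np.sin(x)
c = cubic_bspline_coeffs(s)
r = reconstruct(c)
```

Away from the boundaries the round trip is an identity, which is the defining property of the direct transform: the coefficients exactly interpolate the input samples.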
Color TV: Total Variation Methods for Restoration of Vector Valued Images
 IEEE Trans. Image Processing
, 1996
Cited by 158 (13 self)
We propose a new definition of the total variation norm for vector valued functions which can be applied to restore color and other vector valued images. The new TV norm has the desirable properties that (i) it does not penalize discontinuities (edges) in the image, (ii) it is rotationally invariant in the image space, and (iii) it reduces to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in RGB color space are presented.

1 Introduction

During gathering and transfer of image data, some noise and blur are usually introduced into the image. Several reconstruction methods based on the total variation (TV) norm have been proposed and studied for intensity (gray scale) images, see [9, 14, 21, 26, 29]. Since these methods have been successful in reducing noise and blur without smearing sharp edges for intensity images, it is natural to extend the TV norm to handle color and other vector valued images. Why do we need color restoration? It can be argued that si...
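One way to couple the channels of a vector valued image $u : \Omega \to \mathbb{R}^m$ into a single TV-type norm with properties (i)-(iii) is the following. This is a hedged reconstruction consistent with the abstract; the paper's exact definition may differ:

```latex
\mathrm{TV}_{n,m}[u] \;=\; \Bigl(\sum_{i=1}^{m} \mathrm{TV}[u_i]^{2}\Bigr)^{1/2},
\qquad
\mathrm{TV}[u_i] \;=\; \int_{\Omega} \lvert \nabla u_i \rvert \, dx .
```

For $m = 1$ this reduces to the scalar TV norm, and like the scalar norm it admits jumps (edges) at finite cost. Restoration then proceeds as in the scalar case, e.g. by minimizing $\mathrm{TV}_{n,m}[u] + \tfrac{\lambda}{2}\int_\Omega \lVert u - u_0\rVert^2 \, dx$ for noisy data $u_0$.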
Hybrid Image Segmentation Using Watersheds and Fast Region Merging
 IEEE transactions on Image Processing
, 1998
Cited by 141 (1 self)
Abstract—A hybrid multidimensional image segmentation algorithm is proposed, which combines edge- and region-based techniques through the morphological algorithm of watersheds. An edge-preserving statistical noise reduction approach is used as a preprocessing stage in order to compute an accurate estimate of the image gradient. Then, an initial partitioning of the image into primitive regions is produced by applying the watershed transform on the image gradient magnitude. This initial segmentation is the input to a computationally efficient hierarchical (bottom-up) region merging process that produces the final segmentation. The latter process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all RAG edges in a priority queue. We propose a significantly faster algorithm, which additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced. The final segmentation provides, due to the RAG, one-pixel-wide, closed, and accurately localized contours/surfaces. Experimental results obtained with two-dimensional/three-dimensional (2-D/3-D) magnetic resonance images are presented. Index Terms—Image segmentation, nearest neighbor region merging, noise reduction, watershed transform.
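The traditional priority-queue formulation of RAG merging that the abstract contrasts against can be sketched on toy data. This is a hedged illustration, not the paper's algorithm: the cost is a simple mean-intensity difference, stale heap entries are refreshed lazily, and the nearest-neighbor-graph speedup the paper proposes is omitted.

```python
import heapq

def merge_regions(means, edges, target):
    # Greedy RAG merging: repeatedly merge the most similar adjacent
    # pair of regions (minimum |mean difference|) until `target`
    # regions remain.
    means = list(means)
    sizes = [1] * len(means)
    parent = list(range(len(means)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    adj = [set() for _ in means]
    heap = []
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
        heapq.heappush(heap, (abs(means[a] - means[b]), a, b))

    n = len(means)
    while n > target and heap:
        cost, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                               # edge collapsed earlier
        cur = abs(means[ra] - means[rb])
        if (a, b) != (ra, rb) or cost != cur:
            heapq.heappush(heap, (cur, ra, rb))    # refresh stale entry
            continue
        # Merge rb into ra: area-weighted mean, inherit adjacency.
        total = sizes[ra] + sizes[rb]
        means[ra] = (means[ra] * sizes[ra] + means[rb] * sizes[rb]) / total
        sizes[ra] = total
        parent[rb] = ra
        for nb in adj[rb]:
            rn = find(nb)
            if rn != ra:
                adj[ra].add(rn)
                heapq.heappush(heap, (abs(means[ra] - means[rn]), ra, rn))
        n -= 1
    return [find(i) for i in range(len(parent))], means

labels, region_means = merge_regions([0.0, 0.1, 5.0, 5.2],
                                     [(0, 1), (1, 2), (2, 3)], target=2)
```

On this chain of four primitive regions, the two similar low-intensity regions merge first, then the two high-intensity ones, leaving two final regions.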
A Computational Approach for Corner and Vertex Detection
 International Journal of Computer Vision
, 1992
Cited by 131 (1 self)
Corners and vertices are strong and useful features in Computer Vision for scene analysis, stereo matching and motion analysis. This paper deals with the development of a computational approach to these important features. We first consider a corner model and study analytically its behavior once it has been smoothed using the well-known Gaussian filter. This allows us to clarify the behavior of some well-known cornerness-measure-based approaches used to detect these points of interest. Most of these classical approaches appear to detect points that do not correspond to the exact position of the corner. A new scale-space based approach that combines useful properties from the Laplacian and Beaudet's measure [Bea78] is then proposed in order to correct and detect exactly the corner position. An extension of this approach is then developed to solve the problem of trihedral vertex characterization and detection. In particular, it is shown that a trihedral vertex has two elliptic maxima on ...
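Beaudet's measure, one of the classical cornerness measures the abstract analyzes, is the determinant of the Hessian of the Gaussian-smoothed image, DET = I_xx I_yy - I_xy^2. The sketch below is an assumption-laden toy: the smoothing sigma, the synthetic image, and the finite-difference derivatives are all illustrative choices, and (as the abstract notes for classical measures) the detected extremum lies only near, not exactly at, the true corner.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = max(1, int(4 * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def beaudet_cornerness(image, sigma=2.0):
    # Smooth with a separable Gaussian, then form Beaudet's measure:
    # DET = I_xx * I_yy - I_xy**2 (the Hessian determinant).
    g = gaussian_kernel(sigma)
    s = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, image)
    s = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, s)
    gy, gx = np.gradient(s)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return gxx * gyy - gxy * gyx

img = np.zeros((60, 60))
img[20:40, 20:40] = 1.0          # a bright square: four right-angle corners
det = beaudet_cornerness(img)
peak = np.unravel_index(np.argmax(np.abs(det)), det.shape)
```

Along the straight sides of the square one principal curvature vanishes and DET stays near zero; only the corner neighborhoods respond strongly.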