Results 1-10 of 60
Feature detection with automatic scale selection
International Journal of Computer Vision, 1998
Abstract

Cited by 716 (34 self)
The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select locally appropriate scales for further analysis. This article proposes a systematic methodology for dealing with this problem. A framework is proposed for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of γ-normalized derivatives are likely candidates to correspond to interesting structures. Specifically, it is shown how this idea can be used as a major mechanism in algorithms for automatic scale selection, which ...
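The selection principle in this abstract can be sketched with the scale-normalized Laplacian, the most common instance of a γ-normalized derivative combination (here with γ = 1). The sketch below is an illustration, not the paper's implementation; the test image, grid size, and scale range are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def normalized_laplacian_responses(image, sigmas):
    """Scale-normalized Laplacian sigma^2 * (Lxx + Lyy) at each scale.

    Local extrema of this response over the scale dimension are the
    candidate "interesting" scales of the selection principle above.
    """
    return np.array([s**2 * gaussian_laplace(image, s) for s in sigmas])

# Hypothetical test image: a single Gaussian blob of standard deviation 4.
n = 129
y, x = np.mgrid[:n, :n] - n // 2
blob = np.exp(-(x**2 + y**2) / (2.0 * 4.0**2))

sigmas = np.arange(1.0, 9.0)
responses = normalized_laplacian_responses(blob, sigmas)
centre = responses[:, n // 2, n // 2]
# The magnitude of the normalized response peaks near sigma = 4:
# automatic scale selection recovers the blob's own size.
best_sigma = sigmas[np.argmax(np.abs(centre))]
```

For a blob of width t, the analytic response magnitude is proportional to s²t²/(t²+s²)², which is maximized at s = t; the discrete sketch reproduces this.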
Face Recognition: the Problem of Compensating for Changes in Illumination Direction
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
Abstract

Cited by 348 (3 self)
A face recognition system must recognize a face from a novel image despite the variations between images of the same face. A common approach to overcoming image variations caused by changes in the illumination conditions is to use image representations that are relatively insensitive to these variations. Examples of such representations are edge maps, image intensity derivatives, and images convolved with 2D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as viewpoint and facial expression. Our findings indicated that none of the representations considered is sufficient by itself to overcome image variations caused by a change in the direction of illumination. Similar results were obtained for changes due to viewpoint and expression. Image representations that emphasized the horizontal features were found to be less sensitive to changes in the direction of illumination. However, systems...
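A small sketch of why derivative-based representations are among the candidates studied here: image derivatives cancel an additive illumination offset exactly, whereas a genuine change in illumination direction does not cancel this way (which is the paper's empirical finding). The random test image below is an assumption, not the paper's data.

```python
import numpy as np

def gradient_magnitude(image):
    """The image-intensity-derivative representation from the abstract."""
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy)

rng = np.random.default_rng(0)
scene = rng.random((16, 16))
rep_original = gradient_magnitude(scene)
rep_brighter = gradient_magnitude(scene + 0.5)  # uniform brightness offset
# The two representations are identical: a constant offset has zero derivative.
```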
Deformable Kernels for Early Vision
IEEE Trans. Pattern Anal. Mach. Intell., 1995
Abstract

Cited by 145 (11 self)
Early vision algorithms often have a first stage of linear filtering that ‘extracts’ from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsely the space of scales and orientations in order to reduce computation and storage costs. This discretization produces anisotropies due to a loss of translation, rotation, and scaling invariance that make early vision algorithms less precise and more difficult to design. This need not be so: one can compute and store efficiently the response of families of linear filters defined on a continuum of orientations and scales. A technique is presented that allows 1) computing the best approximation of a given family using linear combinations of a small number of ‘basis’ functions; 2) describing all finite-dimensional families, i.e., the families of filters for which a finite-dimensional representation is possible with no error. The technique is based on singular value decomposition and may be applied to generating filters in arbitrary dimensions and subject to arbitrary deformations; the relevant functional analysis results are reviewed and precise conditions for the decomposition to be feasible are stated. Experimental results are presented that demonstrate the applicability of the technique to generating multi-orientation, multi-scale 2D edge-detection kernels. The implementation issues are also discussed. Index Terms: Steerable filters, wavelets, early vision, multiresolution image analysis, multirate filtering, deformable filters, scale-space.
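The SVD construction in this abstract can be sketched as follows: sample the filter family over a continuum (here, orientations of a first derivative of a Gaussian), stack the samples into a matrix, and read the basis off the singular vectors. The first-derivative family is exactly steerable with two basis functions, so two singular values capture it. Filter size and sampling density below are assumptions for illustration.

```python
import numpy as np

def oriented_gaussian_derivatives(n=15, sigma=2.0, n_angles=16):
    """Sample first-derivative-of-Gaussian filters at n_angles orientations."""
    y, x = np.mgrid[:n, :n] - n // 2
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    gx, gy = -x / sigma**2 * g, -y / sigma**2 * g
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    return np.array([np.cos(t) * gx + np.sin(t) * gy for t in thetas])

filters = oriented_gaussian_derivatives()
F = filters.reshape(len(filters), -1)        # one row per oriented filter
U, S, Vt = np.linalg.svd(F, full_matrices=False)

k = 2                                        # this family has exact rank 2
F_approx = (U[:, :k] * S[:k]) @ Vt[:k]
rel_error = np.linalg.norm(F - F_approx) / np.linalg.norm(F)
# rel_error sits at machine precision: two 'basis' filters reproduce every
# orientation, an instance of the finite-dimensional case described above.
```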
Measurement of Color Invariants
2000
Abstract

Cited by 143 (38 self)
This paper presents the measurement of object reflectance from color images. We exploit the Gaussian scale-space paradigm to define a framework for the robust measurement of object reflectance from color images. Illumination and geometrical invariant properties are derived from a physical reflectance model based on the Kubelka-Munk theory. Imaging conditions are assumed to be white illumination and matte, dull or general objects, respectively, summarized by invariance to: shadow, highlights, illumination color, and illumination intensity.
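The idea of an illumination-invariant color measurement can be illustrated with a deliberately simple stand-in: chromaticity normalization, which is unchanged under a global scaling of illumination intensity. This is not one of the Kubelka-Munk-derived invariants of the paper, just a minimal sketch of the invariance property itself.

```python
import numpy as np

def chromaticity(rgb):
    """Intensity-normalized color: invariant to a global scaling of the
    illumination intensity. A simple stand-in for the paper's invariants."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.maximum(s, 1e-12)

rng = np.random.default_rng(1)
scene = rng.random((8, 8, 3))
inv_dim = chromaticity(scene)
inv_bright = chromaticity(3.0 * scene)  # same scene under brighter light
# The measurement is identical under the intensity change.
```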
Logical/Linear Operators for Image Curves
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995
Abstract

Cited by 55 (7 self)
We propose a language for designing image measurement operators suitable for early vision. We refer to them as logical/linear (L/L) operators, since they unify aspects of linear operator theory and boolean logic. A family of these operators appropriate for measuring the low-order differential structure of image curves is developed. These L/L operators are derived by decomposing a linear model into logical components to ensure that certain structural preconditions for the existence of an image curve are upheld. Tangential conditions guarantee continuity, while normal conditions select and categorize contrast profiles. The resulting operators allow for coarse measurement of curvilinear differential structure (orientation and curvature) while successfully segregating edge and line-like features. By thus reducing the incidence of false-positive responses, these operators are a substantial improvement over (thresholded) linear operators which attempt to resolve the same class of features. ...
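The contrast between a linear operator and a logical combination of its components can be shown in a 1D toy: split a line operator's tangential support into two halves and combine their responses with a minimum (an AND-like surrogate, not the paper's actual operator construction). The signals and support layout below are made up for illustration.

```python
import numpy as np

def ll_and(responses):
    """Logical/linear AND surrogate: the combined response is limited by the
    weakest component, unlike a purely linear sum."""
    return np.minimum.reduce(responses)

def half_responses(signal):
    # Two halves of a line operator's tangential support (assumed layout).
    return [signal[1:3].sum(), signal[3:5].sum()]

continuous_line = np.array([0, 1, 1, 1, 1, 0], dtype=float)
broken_line     = np.array([0, 1, 1, 0, 1, 0], dtype=float)

full_resp = ll_and(half_responses(continuous_line))  # both halves fire
gap_resp  = ll_and(half_responses(broken_line))      # gap suppresses one half
# A linear sum would give 4 vs 3; the AND combination gives 2 vs 1,
# penalizing the violated continuity precondition much more strongly.
```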
On scale selection for differential operators
8th SCIA, 1993
Abstract

Cited by 53 (11 self)
Although traditional scale-space theory provides a well-founded framework for dealing with image structures at different scales, it does not directly address the problem of how to select appropriate scales for further analysis. This paper introduces a new tool for dealing with this problem. A heuristic principle is proposed stating that local extrema over scales of different combinations of normalized, scale-invariant derivatives are likely candidates to correspond to interesting structures. Support is given by theoretical considerations and experiments on real and synthetic data. The resulting methodology lends itself naturally to two-stage algorithms: feature detection at coarse scales followed by feature localization at finer scales. Experiments on blob detection, junction detection and edge detection demonstrate that the proposed method gives intuitively reasonable results.
The Topological Structure of Scale-Space Images
1998
Abstract

Cited by 50 (24 self)
We investigate the "deep structure" of a scale-space image. The emphasis is on topology, i.e. we concentrate on critical points (points with vanishing gradient) and top-points (critical points with degenerate Hessian) and monitor their displacements, respectively their generic morsifications, in scale-space. Relevant parts of catastrophe theory in the context of the scale-space paradigm are briefly reviewed, and subsequently rewritten into coordinate-independent form. This enables one to implement topological descriptors using a conveniently defined, global coordinate system.

1 Introduction

1.1 Historical Background

A fairly well understood way to endow an image with a topology is to embed it into a one-parameter family of images known as a "scale-space image". The parameter encodes "scale" or "resolution" (coarse/fine scale means low/high resolution, respectively). Among the simplest is the linear or Gaussian scale-space model. Proposed by Iijima [13] in the context of pattern recogniti...
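A minimal numerical illustration of the deep structure described above: count local maxima (one family of critical points) of an image smoothed to increasing scales. Generically, critical points are annihilated as scale increases, so the count should not grow. The random test image and 3x3 neighbourhood test are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def count_local_maxima(image, sigma):
    """Count neighbourhood local maxima of the image smoothed to scale sigma."""
    smoothed = gaussian_filter(image, sigma)
    is_peak = smoothed == maximum_filter(smoothed, size=3)
    return int(is_peak.sum())

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
counts = [count_local_maxima(noise, s) for s in (1.0, 2.0, 4.0)]
# Coarser scale, fewer critical points: maxima disappear (generically in
# fold catastrophes, merging with a saddle) as scale increases.
```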
Three-dimensional object recognition based on the combination of views
Cognition, 1998
Abstract

Cited by 45 (0 self)
Visual object recognition is complicated by the fact that the same 3D object can give rise to a large variety of projected images that depend on the viewing conditions, such as viewing direction, distance, and illumination. This paper describes a computational approach that uses combinations of a small number of object views to deal with the effects of viewing direction. The first part of the paper is an overview of the approach based on previous work. It is then shown that, in agreement with psychophysical evidence, the view-combinations approach can use views of different class members rather than multiple views of a single object, to obtain class-based generalization. A number of extensions to the basic scheme are considered, including the use of nonlinear combinations, using 3D versus 2D information, and the role of coarse classification on the way to precise identification. Finally, psychophysical and biological aspects of the view-combination approach are discussed. Compared with approaches that treat object recognition as a symbolic high-level activity, in the view-combination approach the emphasis is on processes that are simpler and pictorial in nature. © 1998 Elsevier Science B.V. All rights reserved Keywords: Three-dimensional object recognition; View combinations; Classification

1. Recognition and the variability of object views

For biological visual systems, visual object recognition is a spontaneous, natural activity. In contrast, the recognition of common objects is still beyond the capabilities of current computer vision systems. In this paper I will examine certain aspects of the recognition problem and outline an approach to recognition based on the
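The algebraic core of the view-combination approach can be sketched numerically: under orthographic projection of a rigid object, the image coordinates of a novel view are an exact linear combination of the coordinates of two stored views (plus a constant). The random 3D points, rotation angles, and use of ordinary least squares below are illustrative assumptions.

```python
import numpy as np

def rotation(ax, ay):
    """Rotation about the x axis by ax, then about the y axis by ay."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [ 0,          1, 0         ],
                   [-np.sin(ay), 0, np.cos(ay)]])
    return ry @ rx

rng = np.random.default_rng(0)
points = rng.standard_normal((3, 20))          # hypothetical rigid 3D object

def view(ax, ay):
    return (rotation(ax, ay) @ points)[:2]     # orthographic projection

v1, v2 = view(0.0, 0.0), view(0.3, 0.2)        # two stored model views
novel = view(0.1, 0.5)                         # a view never seen before

# x-coordinates of the novel view as a linear combination of the stored
# views' coordinates (plus a constant), solved by least squares.
A = np.vstack([v1[0], v1[1], v2[0], v2[1], np.ones(20)]).T
coef, *_ = np.linalg.lstsq(A, novel[0], rcond=None)
residual = np.linalg.norm(A @ coef - novel[0])
# The residual is at numerical-noise level: the combination is exact.
```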
Shape from texture from a multiscale perspective
Fourth Int. Conf. Comp. Vision, 1993
Discrete derivative approximations with scale-space properties: A basis for low-level feature extraction
J. Math. Imaging Vision, 1993
Abstract

Cited by 37 (15 self)
It is shown how discrete derivative approximations can be defined so that scale-space properties hold exactly also in the discrete domain. Starting from a set of natural requirements on the first processing stages of a visual system, the visual front end, an axiomatic derivation is given of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an algebraic structure similar to that possessed by the derivatives of the traditional scale-space representation in the continuous domain. A family of kernels is derived which constitute discrete analogues to the continuous Gaussian derivatives. The representation has theoretical advantages over other discretizations of scale-space theory in the sense that operators which commute before discretization commute after discretization. Some computational implications of this are that derivative approximations can be computed directly from smoothed data, and that this will give exactly the same result as convolution with the corresponding derivative approximation kernel. Moreover, a number of normalization conditions are automatically satisfied. The proposed methodology leads to a conceptually very simple scheme of computations for multiscale low-level feature extraction, consisting of four basic steps: (i) large-support convolution smoothing, (ii) small-support difference computations, (iii) point operations for computing differential geometric entities, and (iv) nearest-neighbour operations for feature detection. Applications are given demonstrating how the proposed scheme can be used for edge detection and junction detection based on derivatives up to order three.
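The four-step scheme (i)-(iv) can be sketched directly. Note the continuous Gaussian here stands in for the paper's discrete analogue kernels, and the step-edge test image, scale, and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def edge_map(image, sigma):
    L = gaussian_filter(image, sigma)       # (i) large-support smoothing
    Ly, Lx = np.gradient(L)                 # (ii) small-support differences
    mag = np.hypot(Lx, Ly)                  # (iii) pointwise invariant:
                                            #       gradient magnitude
    # (iv) nearest-neighbour operation: keep pixels that are local maxima
    # of the gradient magnitude (a crude non-maximum suppression).
    return (mag == maximum_filter(mag, size=3)) & (mag > 0.1 * mag.max())

step = np.zeros((16, 64))
step[:, 32:] = 1.0                          # vertical step edge at column 32
edges = edge_map(step, sigma=2.0)
rows, cols = np.nonzero(edges)
# Detected edge pixels cluster at the step boundary.
```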