Results 1–10 of 109
Face Recognition: the Problem of Compensating for Changes in Illumination Direction
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Abstract

Cited by 269 (3 self)
A face recognition system must recognize a face from a novel image despite the variations between images of the same face. A common approach to overcoming image variations caused by changes in the illumination conditions is to use image representations that are relatively insensitive to these variations. Examples of such representations are edge maps, image intensity derivatives, and images convolved with 2D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as viewpoint and facial expression. Our findings indicate that none of the representations considered is sufficient by itself to overcome image variations caused by a change in the direction of illumination. Similar results were obtained for changes due to viewpoint and expression. Image representations that emphasized the horizontal features were found to be less sensitive to changes in the direction of illumination. However, systems...
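As a rough illustration of two of the representations compared above (intensity derivatives and 2D Gabor-like filters), here is a minimal NumPy sketch. The kernel size, sigma, and frequency values are arbitrary choices for illustration, not parameters taken from the study.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, freq=0.25):
    """Real part of a 2D Gabor filter: Gaussian envelope times cosine carrier.
    All parameter defaults here are illustrative, not from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def intensity_derivatives(img):
    """Central-difference x/y derivatives, one of the representations compared."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    return gx, gy

# A vertical step edge: the horizontal derivative responds, the vertical one does not.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
gx, gy = intensity_derivatives(img)
```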
Edge Detection and Ridge Detection with Automatic Scale Selection
 CVPR'96
, 1996
Abstract

Cited by 247 (20 self)
When extracting features from image data, the type of information that can be extracted may be strongly dependent on the scales at which the feature detectors are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important property of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analysed in detail. It is shown that by expressing these in terms of γ-normalized derivatives, an immediate consequence of this definition is that fine scales are selected for sharp edges (so as to reduce the shape distortions due to scale-space smoothing), whereas coarse scales are selected for diffuse edges, such that an edge model constitutes a valid abstraction of the intensity profile across the edge. With slight modifications, this idea can be used for formulating a ridge detector with automatic scale selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge.
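A toy 1D version of the γ-normalized scale-selection idea can be sketched as follows. The scale grid, kernel support, and the choice γ = 1/2 (a common choice for edges in this framework) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def gaussian(x, t):
    """Sampled 1D Gaussian with variance t (scale parameter)."""
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def smooth(signal, t, x):
    """Smooth a 1D signal by convolution with a sampled, normalized Gaussian."""
    kernel = gaussian(x, t)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode='same')

def selected_scale(signal, scales, gamma=0.5):
    """Return the scale maximizing the gamma-normalized gradient magnitude
    t**(gamma/2) * |L_x| at the signal's center (where the edge sits)."""
    x = np.arange(-30, 31, dtype=float)
    center = len(signal) // 2
    responses = []
    for t in scales:
        L = smooth(signal, t, x)
        Lx = np.gradient(L)                     # first derivative of smoothed signal
        responses.append(t**(gamma / 2) * abs(Lx[center]))
    return scales[int(np.argmax(responses))]
```

Running this on a sharp step versus a pre-blurred (diffuse) step reproduces the qualitative behavior described above: fine scales are selected for the sharp edge, coarser ones for the diffuse edge.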
Constructing Simple Stable Descriptions for Image Partitioning
, 1994
Abstract

Cited by 223 (5 self)
A new formulation of the image partitioning problem is presented: construct a complete and stable description of an image, in terms of a specified descriptive language, that is simplest in the sense of being shortest. We show that a descriptive language limited to a low-order polynomial description of the intensity variation within each region and a chain-code-like description of the region boundaries yields intuitively satisfying partitions for a wide class of images. The advantage of this formulation is that it can be extended to deal with subsequent steps of the image-understanding problem (or to deal with other image attributes, such as texture) in a natural way by augmenting the descriptive language. Experiments performed on a variety of both real and synthetic images demonstrate the superior performance of this approach over partitioning techniques based on clustering vectors of local image attributes and standard edge-detection techniques.
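The shortest-description criterion can be caricatured in 1D: describe a signal with the polynomial order whose total code length (bits for coefficients plus bits for residuals) is smallest. The 16-bits-per-coefficient model cost and the Gaussian residual code length are illustrative assumptions, not the paper's actual coding scheme.

```python
import numpy as np

def description_length(signal, order, coef_bits=16.0):
    """Model cost (bits for polynomial coefficients) plus data cost
    (an idealized Gaussian code length for the fit residuals)."""
    x = np.linspace(-1.0, 1.0, len(signal))
    coefs = np.polyfit(x, signal, order)
    residual = signal - np.polyval(coefs, x)
    mse = np.mean(residual**2) + 1e-12          # guard against log(0)
    model_cost = coef_bits * (order + 1)
    data_cost = 0.5 * len(signal) * np.log2(mse)
    return model_cost + data_cost

def simplest_description(signal, max_order=5):
    """Pick the polynomial order giving the shortest total description."""
    costs = [description_length(signal, k) for k in range(max_order + 1)]
    return int(np.argmin(costs))
```

On data that is quadratic plus small high-frequency texture, the criterion selects order 2: higher orders barely shrink the residual, so their extra coefficient bits are not repaid.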
SUSAN – A New Approach to Low Level Image Processing
 International Journal of Computer Vision
, 1995
Abstract

Cited by 205 (3 self)
This paper describes a new approach to low-level image processing; in particular, edge and corner detection and structure-preserving noise reduction.
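SUSAN's core quantity is the USAN (Univalue Segment Assimilating Nucleus) area: for each pixel, the number of pixels in a circular mask whose brightness is close to that of the center. Small USAN areas flag edges and corners. A brute-force sketch of this principle (mask radius and brightness threshold are illustrative choices, and the published detector uses a smoother similarity function and further geometric tests):

```python
import numpy as np

def usan_area(img, r=2, t=0.1):
    """For each interior pixel, count circular-mask pixels whose brightness is
    within t of the nucleus (the center pixel). Border pixels are left at 0."""
    h, w = img.shape
    offsets = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
               if dy * dy + dx * dx <= r * r]
    area = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            nucleus = img[y, x]
            area[y, x] = sum(abs(img[y + dy, x + dx] - nucleus) <= t
                             for dy, dx in offsets)
    return area
```

On a two-region image the USAN area is maximal (the full mask) inside a region and drops next to the boundary, which is exactly the cue the detector thresholds.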
B-Spline Signal Processing: Part I – Theory
 IEEE Trans. Signal Processing
, 1993
Abstract

Cited by 116 (24 self)
This paper describes a set of efficient filtering techniques for the processing and representation of signals in terms of continuous B-spline basis functions. We first consider the problem of determining the spline coefficients for an exact signal interpolation (direct B-spline transform). The reverse operation is the signal reconstruction from its spline coefficients with an optional zooming factor m (indirect B-spline transform). We derive general expressions for the z-transforms and the equivalent continuous impulse responses of B-spline interpolators of order n. We present simple techniques for signal differentiation and filtering in the transformed domain. We then derive recursive filters that efficiently solve the problems of smoothing spline and least squares approximations. The smoothing spline technique approximates a signal with a complete set of coefficients subject to certain regularization or smoothness constraints. The least squares approach, on the other hand, uses a reduced number of B-spline coefficients with equally spaced nodes; this technique is in many ways analogous to the application of an anti-aliasing lowpass filter prior to decimation in order to represent a signal correctly with a reduced number of samples.
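For the cubic spline, the direct B-spline transform reduces to a standard two-pass recursive filter with the single pole z1 = sqrt(3) - 2. The sketch below follows that well-known scheme; the boundary initialization assumes a mirrored signal extension, approximated by a truncated power series.

```python
import numpy as np

def cubic_bspline_coeffs(s):
    """Direct B-spline transform for the cubic spline: causal/anti-causal
    recursive filtering with pole z1 = sqrt(3) - 2 (mirror boundaries)."""
    z1 = np.sqrt(3.0) - 2.0
    c = np.asarray(s, dtype=float) * 6.0         # overall gain for the cubic
    n = len(c)
    # causal pass; initialization sums the mirrored signal (|z1| < 1, so it converges fast)
    cp = np.empty(n)
    cp[0] = np.sum(c * z1 ** np.arange(n))
    for k in range(1, n):
        cp[k] = c[k] + z1 * cp[k - 1]
    # anti-causal pass
    cm = np.empty(n)
    cm[-1] = (z1 / (z1 * z1 - 1.0)) * (cp[-1] + z1 * cp[-2])
    for k in range(n - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return cm

def reconstruct(c):
    """Indirect transform evaluated at the interior sample points:
    s[k] = (c[k-1] + 4*c[k] + c[k+1]) / 6."""
    return (c[:-2] + 4.0 * c[1:-1] + c[2:]) / 6.0
```

Feeding the coefficients back through the indirect transform recovers the original samples, which is the defining property of exact interpolation.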
Multiscale Image Segmentation by Integrated Edge and Region Detection
 IEEE Trans. Image Processing
, 1997
Abstract

Cited by 96 (31 self)
This paper describes a new transform to extract image regions at all geometric and photometric scales. It is argued that linear approaches such as convolution and matching have the fundamental shortcoming that they require a priori models of region shape. The proposed transform avoids this limitation by letting the structure emerge, bottom-up, from interactions among pixels, in analogy with statistical mechanics and particle physics. The transform involves global computations on pairs of pixels followed by vector integration of the results, rather than scalar and local linear processing. An attraction force field is computed over the image in which pixels belonging to the same region are mutually attracted and the region is characterized by a convergent flow. It is shown that the transform possesses properties that allow multiscale segmentation, or extraction of original, unblurred structure at all different geometric and photometric scales present in the image. This is in contrast with much of the previous work wherein multiscale structure is viewed as the smoothed structure in a multiscale decimation of the image signal. Scale is an integral parameter of the force computation, and the number and values of scale parameters associated with the image can be estimated automatically. Regions are detected at all, a priori unknown, scales, resulting in automatic construction of a segmentation tree in which each pixel is annotated with descriptions of all the regions it belongs to. Although some of the analytical properties of the transform are presented for piecewise constant images, it is shown that the results hold for more general images, e.g., those containing noise and shading. Thus the proposed method is intended as a solution to the problem of multiscale, integrated edge and region detection, or low-level image segmentation. Experimental results with synthetic and real images are given to demonstrate the properties and segmentation performance of the transform.
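A heavily simplified sketch of the pairwise-attraction idea: every pixel pulls every other pixel with a weight that decays with intensity difference and distance, and the weighted unit direction vectors are integrated into a force vector per pixel. The Gaussian similarity, the inverse-square distance decay, and the single fixed scale are illustrative assumptions; the published transform treats scale as an explicit parameter and derives the segmentation tree from the force field's convergent flow.

```python
import numpy as np

def attraction_field(img, sigma_g=0.2):
    """Integrate pairwise attraction vectors into a per-pixel force (dy, dx).
    Similar-brightness pixels attract; dissimilar ones contribute ~nothing."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    val = img.ravel()
    force = np.zeros_like(pos)
    for i in range(len(val)):
        d = pos - pos[i]                         # vectors from pixel i to all others
        dist = np.hypot(d[:, 0], d[:, 1])
        dist[i] = 1.0                            # avoid divide-by-zero at pixel i itself
        weight = np.exp(-((val - val[i]) / sigma_g) ** 2) / dist**2
        weight[i] = 0.0
        force[i] = (weight[:, None] * d / dist[:, None]).sum(axis=0)
    return force.reshape(h, w, 2)
```

On a two-region image the force at a pixel near the boundary points back into its own region (convergent flow), while at a region's center the pulls cancel by symmetry.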
Edge Detection Techniques – An Overview
 International Journal of Pattern Recognition and Image Analysis
, 1998
Abstract

Cited by 81 (2 self)
In computer vision and image processing, edge detection concerns the localization of significant variations of the grey-level image and the identification of the physical phenomena that originated them. This information is very useful for applications in 3D reconstruction, motion, recognition, image enhancement and restoration, image registration, image compression, and so on. Usually, edge detection requires smoothing and differentiation of the image. Differentiation is an ill-conditioned problem, and smoothing results in a loss of information. It is difficult to design a general edge detection algorithm which performs well in many contexts and captures the requirements of subsequent processing stages. Consequently, over the history of digital image processing a variety of edge detectors have been devised which differ in their mathematical and algorithmic properties. This paper is an account of the current state of our understanding of edge detection. We propose an overview of research...
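The "smoothing plus differentiation" recipe the abstract describes is visible in the classic Sobel operator, whose kernels are an outer product of a smoothing profile [1, 2, 1] and a derivative profile [-1, 0, 1]. A dependency-free sketch (zero padding at the borders is an arbitrary choice):

```python
import numpy as np

def convolve2d_same(img, k):
    """Minimal 'same'-size 2D convolution with zero padding (no SciPy needed)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            # flipped kernel indices implement true convolution, not correlation
            out += k[kh - 1 - i, kw - 1 - j] * padded[i:i + img.shape[0],
                                                      j:j + img.shape[1]]
    return out

# Smoothing in one direction combined with differentiation in the other.
SOBEL_X = np.outer([1, 2, 1], [-1, 0, 1]).astype(float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(img):
    gx = convolve2d_same(img, SOBEL_X)
    gy = convolve2d_same(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

Thresholding the gradient magnitude then localizes the significant grey-level variations.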
Parametric Feature Detection
, 1998
Abstract

Cited by 73 (15 self)
Most visual features are parametric in nature, including edges, lines, corners, and junctions. We propose an algorithm to automatically construct detectors for arbitrary parametric features. To maximize robustness we use realistic multi-parameter feature models and incorporate optical and sensing effects. Each feature is represented as a densely sampled parametric manifold in a low-dimensional subspace of a Hilbert space. During detection, the vector of intensity values in a window about each pixel in the image is projected into the subspace. If the projection lies sufficiently close to the feature manifold, the feature is detected and the location of the closest manifold point yields the feature parameters. The concepts of parameter reduction by normalization, dimension reduction, pattern rejection, and heuristic search are all employed to achieve the required efficiency. Detectors have been constructed for five features, namely, step edge (five parameters), roof edge (five parameters), line (six parameters), corner (five parameters), and circular disc (six parameters). The results of detailed experiments are presented which demonstrate the robustness of feature detection and the accuracy of parameter estimation.
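The manifold-projection pipeline can be sketched in 1D with a hypothetical single-parameter edge model: densely sample the model, build a low-dimensional PCA subspace, project a query window, and read the parameter off the nearest manifold sample. The tanh edge model, window length, and subspace dimension are all toy assumptions, far simpler than the paper's multi-parameter models with optical and sensing effects.

```python
import numpy as np

def edge_window(p, n=9, width=1.0):
    """Hypothetical toy model: an n-sample window containing a smoothed step
    edge at sub-pixel position p with diffuseness 'width'."""
    x = np.arange(n, dtype=float)
    return 0.5 * (1.0 + np.tanh((x - p) / width))

def build_manifold(params, dim=4):
    """Densely sample the feature model and build a PCA subspace around it."""
    samples = np.array([edge_window(p) for p in params])
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    basis = vt[:dim]                             # low-dimensional subspace basis
    coords = (samples - mean) @ basis.T          # densely sampled manifold
    return mean, basis, coords

def detect(window, mean, basis, coords, params):
    """Project the window into the subspace; the closest manifold point
    yields the estimated feature parameter."""
    q = (window - mean) @ basis.T
    i = int(np.argmin(np.sum((coords - q) ** 2, axis=1)))
    return params[i]
```

An off-grid query is recovered to within the manifold's sampling resolution.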
Finding corners
 Image and Vision Computing Journal
, 1988
Abstract

Cited by 60 (0 self)
Many important image cues such as 'T', 'X' and 'L' junctions have a local two-dimensional structure. Conventional edge detectors are designed for one-dimensional 'events'. Even the best edge operators cannot reliably detect these two-dimensional features. This contribution proposes a solution to the two-dimensional problem. In this paper, I address the following:
• 'L'-junction detection. Previous attempts, relying on the second differentials of the image surface, have essentially measured image curvature. Recently Harris [Harris 87] implemented a 'corner' detector that is based only on first differentials. I provide a mathematical proof to explain how this algorithm estimates image curvature. Although this algorithm will isolate image 'L'-junctions, its performance cannot be predicted for 'T'-junctions and other higher-order image structures.
• Instead, an image representation is proposed that exploits the richness of the local differential geometrical 'topography' of the intensity surface. Theoretical and experimental results are presented which demonstrate how idealised instances of two-dimensional surface features such as junctions can be characterised by the differential geometry of a simple facet model.
• Preliminary results are very encouraging. Current studies are concerned with the extension to real data. I am investigating statistical noise models to provide a measure of 'confidence' in the geometric labelling.
The richness and sparseness of a two-dimensional structure can be exploited in many high-level vision processes. I intend to use my representation to explore some of these fields in future work.
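The first-differentials-only corner measure attributed to Harris above can be sketched directly: sum the outer product of the gradient over a local window and evaluate R = det(M) - k * trace(M)^2, which is positive at corners and negative along edges. The window size and the conventional k = 0.04 are illustrative choices.

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Corner measure built only from first differentials:
    R = det(M) - k * trace(M)^2, with M the locally summed gradient
    outer product. Border pixels are left at 0."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            a = ixx[y - win:y + win + 1, x - win:x + win + 1].sum()
            b = iyy[y - win:y + win + 1, x - win:x + win + 1].sum()
            c = ixy[y - win:y + win + 1, x - win:x + win + 1].sum()
            R[y, x] = a * b - c * c - k * (a + b) ** 2
    return R
```

On an 'L'-shaped region the response is positive at the junction (both gradient directions present) and negative on the straight edge (one direction only), which is exactly the behavior the bullet above analyzes.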
An Integrated ColorSpatial Approach to Contentbased Image Retrieval
, 1995
Abstract

Cited by 56 (4 self)
The use of color information for image retrieval has been widely adopted in many content-based retrieval systems with some success. However, histogram-based color retrieval techniques suffer from a lack of important spatial knowledge. We discuss a technique of integrating color information with spatial knowledge to obtain an overall impression of the image. The technique involves three steps: the selection of a set of representative colors, the analysis of spatial information of the selected colors, and the retrieval process based on the integrated color-spatial information. Two color histograms are used to aid in the process of color selection. After deriving the set of representative colors, spatial knowledge of the selected colors is obtained using a maximum entropy discretization with event-covering method. A retrieval process is formulated to make use of the spatial knowledge to retrieve relevant images. A prototype image retrieval system has been implemented on the Unix system. It is tested on an image database consisting of 260 images. The result shows substantial improvement over the histogram-based color retrieval methods.
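Why plain histograms are not enough, and how spatial knowledge helps, can be shown with a toy signature: per quantized color, keep a normalized count plus the set of grid cells the color occupies, and scale each color's histogram intersection by the spatial overlap of those cells. The intensity quantization, 2x2 grid, and Jaccard overlap are illustrative simplifications, standing in for the paper's maximum entropy discretization with event covering.

```python
import numpy as np

def color_spatial_signature(img, bins=4, grid=2):
    """Per quantized intensity 'color': a normalized count plus the set of
    grid cells that color occupies (toy stand-in for the paper's method)."""
    h, w = img.shape
    q = np.minimum((img * bins).astype(int), bins - 1)   # quantize intensities
    hist = np.zeros(bins)
    cells = [set() for _ in range(bins)]
    for y in range(h):
        for x in range(w):
            b = q[y, x]
            hist[b] += 1
            cells[b].add((y * grid // h, x * grid // w))
    return hist / (h * w), cells

def similarity(sig1, sig2):
    """Histogram intersection with each color's contribution scaled by the
    Jaccard overlap of the grid cells it occupies in the two images."""
    h1, c1 = sig1
    h2, c2 = sig2
    score = 0.0
    for b in range(len(h1)):
        inter = min(h1[b], h2[b])
        if c1[b] or c2[b]:
            inter *= len(c1[b] & c2[b]) / len(c1[b] | c2[b])
        score += inter
    return score
```

Two images with identical histograms but mirrored layouts score 0 here, whereas a purely histogram-based intersection would rate them identical; that gap is the spatial knowledge the abstract argues for.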