Results 1–10 of 182
Snakes: Active contour models
International Journal of Computer Vision, 1988
Cited by 3900 (17 self)
Abstract:
A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.
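The energy-minimizing iteration the abstract describes can be sketched with the standard semi-implicit update on a closed contour. In this sketch the external force (`toward_circle`) is a toy stand-in for an image-gradient force field, and all parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def snake_system(n, alpha=0.1, beta=0.05, gamma=1.0):
    # Internal-energy matrix A for a closed contour: elasticity (alpha)
    # penalizes stretching, rigidity (beta) penalizes bending.
    row = np.zeros(n)
    row[0] = 2 * alpha + 6 * beta
    row[1] = row[-1] = -(alpha + 4 * beta)
    row[2] = row[-2] = beta
    A = np.stack([np.roll(row, i) for i in range(n)])
    # Semi-implicit Euler step: x_new = (I + gamma A)^-1 (x + gamma f_ext)
    return np.linalg.inv(np.eye(n) + gamma * A)

def evolve(pts, external_force, steps=300, gamma=1.0):
    M = snake_system(len(pts), gamma=gamma)
    for _ in range(steps):
        pts = M @ (pts + gamma * external_force(pts))
    return pts

def toward_circle(pts, radius=5.0):
    # Toy external force pulling each point onto a circle of the given
    # radius (standing in for the pull of nearby image edges).
    d = np.linalg.norm(pts, axis=1, keepdims=True)
    return 0.2 * (radius / d - 1.0) * pts

theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
init = 10.0 * np.column_stack([np.cos(theta), np.sin(theta)])
final = evolve(init, toward_circle)
radii = np.linalg.norm(final, axis=1)
```

The internal-energy matrix is inverted once, so each step is a single matrix-vector product; the contour shrinks from its initial radius and settles near the attracting circle of the toy force field.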
Singularity Detection and Processing with Wavelets
IEEE Transactions on Information Theory, 1992
Cited by 590 (13 self)
Abstract:
Most of a signal's information is often found in irregular structures and transient phenomena. We review the mathematical characterization of singularities with Lipschitz exponents. The main theorems that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform are explained. We then prove that the local maxima of a wavelet transform detect the location of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations has a different behavior that we study separately. We show that the size of the oscillations can be measured from the wavelet transform local maxima. It has been shown that one- and two-dimensional signals can be reconstructed from the local maxima of their wavelet transform [14]. As an application, we develop an algorithm that removes white noise by discriminating the noise and the signal singularities through an analysis of their ...
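The scale law behind these theorems can be checked numerically: with a derivative-of-Gaussian wavelet scaled as (1/s) psi(t/s), the modulus maxima of the transform grow like s**alpha at a Lipschitz-alpha singularity. A sketch, where the wavelet choice, scales, and test signals are illustrative assumptions:

```python
import numpy as np

def cwt_dog(signal, scales):
    # Derivative-of-Gaussian wavelet, scaled as (1/s) psi(t/s), so that
    # |W f(s, .)| behaves like s**alpha near a Lipschitz-alpha singularity.
    rows = []
    for s in scales:
        t = np.arange(-int(6 * s), int(6 * s) + 1, dtype=float)
        psi = -(t / s) * np.exp(-0.5 * (t / s) ** 2) / s
        rows.append(np.convolve(signal, psi, mode="same"))
    return np.array(rows)

def lipschitz_from_maxima(W, scales, lo, hi):
    # Slope of log(max |W|) vs log(scale) estimates the Lipschitz exponent;
    # the maxima search is restricted to [lo, hi) to avoid boundary effects.
    peaks = np.abs(W[:, lo:hi]).max(axis=1)
    return np.polyfit(np.log(scales), np.log(peaks), 1)[0]

x = np.arange(-500, 501, dtype=float)
scales = np.array([4.0, 8.0, 16.0, 32.0])
step = (x > 0).astype(float)   # Lipschitz alpha = 0 at x = 0
cusp = np.abs(x) ** 0.5        # Lipschitz alpha = 1/2 at x = 0

alpha_step = lipschitz_from_maxima(cwt_dog(step, scales), scales, 300, 701)
alpha_cusp = lipschitz_from_maxima(cwt_dog(cusp, scales), scales, 300, 701)
```

On this grid the fitted slopes should come out near 0 for the step and near 1/2 for the cusp, matching the exponents of the two singularities.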
Multifrequency channel decompositions of images and wavelet models
IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989
Cited by 339 (0 self)
Abstract:
In this paper we review recent multichannel models developed in psychophysiology, computer vision, and image processing. In psychophysiology, multichannel models have been particularly successful in explaining some low-level processing in the visual cortex. The expansion of a function into several frequency channels provides a representation which is intermediate between a spatial and a Fourier representation. We describe the mathematical properties of such decompositions and introduce the wavelet transform. We review the classical multiresolution pyramidal transforms developed in computer vision and show how they relate to the decomposition of an image into a wavelet orthonormal basis. In the last section we discuss the properties of the zero crossings of multifrequency channels. Zero-crossing representations are particularly well adapted for pattern recognition in computer vision.
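The decomposition into frequency channels can be illustrated with the simplest orthonormal wavelet, the Haar basis. This is a one-level, one-dimensional sketch only; the pyramidal transforms the abstract refers to use longer filters and multiple levels:

```python
import numpy as np

def haar_analysis(x):
    # Split a signal into a low-frequency channel (local averages) and a
    # high-frequency channel (local differences); orthonormal scaling.
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_synthesis(approx, detail):
    # Invert the split exactly: the two channels together lose nothing.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
a, d = haar_analysis(x)
rec = haar_synthesis(a, d)
```

Because the basis is orthonormal, reconstruction is perfect and the signal energy is preserved across the two channels, which is the sense in which the representation sits between a spatial and a Fourier description.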
Constructing Simple Stable Descriptions for Image Partitioning
1994
Cited by 270 (5 self)
Abstract:
A new formulation of the image partitioning problem is presented: construct a complete and stable description of an image, in terms of a specified descriptive language, that is simplest in the sense of being shortest. We show that a descriptive language limited to a low-order polynomial description of the intensity variation within each region and a chain-code-like description of the region boundaries yields intuitively satisfying partitions for a wide class of images. The advantage of this formulation is that it can be extended to deal with subsequent steps of the image-understanding problem (or to deal with other image attributes, such as texture) in a natural way by augmenting the descriptive language. Experiments performed on a variety of both real and synthetic images demonstrate the superior performance of this approach over partitioning techniques based on clustering vectors of local image attributes and standard edge-detection techniques.
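The shortest-description criterion can be sketched in one dimension with a crude two-part code: bits for the polynomial coefficients and region boundary, plus an idealized Gaussian code length for the fit residuals. The bit costs below are invented constants for illustration, not the paper's encoding:

```python
import numpy as np

def description_length(segment, degree=1, coeff_bits=16.0, boundary_bits=8.0):
    # Two-part code: model bits (polynomial coefficients + one boundary)
    # plus an idealized Gaussian code length for the residuals (a
    # differential code length, so it can go negative at fine precision).
    t = np.arange(len(segment), dtype=float)
    coeffs = np.polyfit(t, segment, degree)
    resid = segment - np.polyval(coeffs, t)
    var = max(resid.var(), 1e-12)
    data_bits = 0.5 * len(segment) * np.log2(2.0 * np.pi * np.e * var)
    return data_bits + coeff_bits * (degree + 1) + boundary_bits

rng = np.random.default_rng(1)
signal = np.r_[np.zeros(50), np.ones(50)] + 0.01 * rng.standard_normal(100)

one_region = description_length(signal)
two_regions = description_length(signal[:50]) + description_length(signal[50:])
```

Splitting at the step leaves tiny residuals in each half, so the two-region description is shorter despite paying for an extra boundary and coefficient set; that trade-off is the essence of the shortest-description formulation.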
Scale-based description and recognition of planar curves and two-dimensional shapes
1986
Cited by 213 (3 self)
Abstract:
The problem of finding a description, at varying levels of detail, for planar curves and matching two such descriptions is posed and solved in this paper. A number of necessary criteria are imposed on any candidate solution method. Path-based Gaussian smoothing techniques are applied to the curve to find zeros of curvature at varying levels of detail. The result is the "generalized scale space" image of a planar curve, which is invariant under rotation, uniform scaling, and translation of the curve. These properties make the scale space image suitable for matching. The matching algorithm is a modification of the uniform cost algorithm and finds the lowest cost match of contours in the scale space images. It is argued that this is preferable to matching in a so-called stable scale of the curve because no such scale may exist for a given curve. This technique is applied to register a Landsat satellite image of the Strait of Georgia, B.C. (manually corrected for skew) to a map containing the shorelines of an overlapping area.
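The basic mechanism, path-based Gaussian smoothing followed by locating curvature zeros, can be sketched as follows. The wavy-circle test curve and the smoothing scale are illustrative choices, and only one coarse scale is shown rather than the full scale-space image:

```python
import numpy as np

def smooth_closed(coord, sigma):
    # Circular (path-based) Gaussian smoothing of one coordinate of a
    # closed curve, applied via the FFT.
    n = len(coord)
    k = np.arange(n, dtype=float)
    k = np.minimum(k, n - k)                  # circular distance to index 0
    g = np.exp(-0.5 * (k / sigma) ** 2)
    g /= g.sum()
    return np.real(np.fft.ifft(np.fft.fft(coord) * np.fft.fft(g)))

def curvature_zero_crossings(x, y):
    # Sign changes of the curvature numerator x'y'' - y'x'', using central
    # differences with wrap-around for a closed curve.
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    sign = np.sign(dx * ddy - dy * ddx)
    return int(np.sum(sign != np.roll(sign, 1)))

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
r = 1.0 + 0.3 * np.cos(5 * theta)            # a circle with five concavities
x, y = r * np.cos(theta), r * np.sin(theta)

fine = curvature_zero_crossings(x, y)        # two crossings per concave lobe
coarse = curvature_zero_crossings(smooth_closed(x, 40.0),
                                  smooth_closed(y, 40.0))
```

As the smoothing scale grows the concavities are ironed out, the curve becomes convex, and the curvature zeros disappear; plotting crossing positions against scale gives the scale-space image used for matching.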
Local scale control for edge detection and blur estimation
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
Cited by 165 (20 self)
Abstract:
The standard approach to edge detection is based on a model of edges as large step changes in intensity. This approach fails to reliably detect and localize edges in natural images, where blur scale and contrast can vary over a broad range. The main problem is that the appropriate spatial scale for local estimation depends upon the local structure of the edge, and thus varies unpredictably over the image. Here we show that knowledge of sensor properties and operator norms can be exploited to define a unique, locally computable minimum reliable scale for local estimation at each point in the image. This method for local scale control is applied to the problem of detecting and localizing edges in images with shallow depth of field and shadows. We show that edges spanning a broad range of blur scales and contrasts can be recovered accurately by a single system with no input parameters other than the second moment of the sensor noise. A natural dividend of this approach is a measure of the thickness of contours, which can be used to estimate focal and penumbral blur. Local scale control is shown to be important for the estimation of blur in complex images, where the potential for interference between nearby edges of very different blur scale requires that estimates be made at the minimum reliable scale.
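A one-dimensional sketch of the minimum-reliable-scale idea: starting from the finest scale, accept a derivative estimate at a point only once it exceeds a threshold set by the sensor-noise level and the operator's L2 norm. The k-sigma criterion, parameter values, and test scene below are simplified assumptions, not the paper's statistical derivation:

```python
import numpy as np
from math import erf, sqrt

def min_reliable_scale(signal, sigma_n, scales, k=4.0):
    # For each sample, the smallest scale whose smoothed-derivative response
    # exceeds k * sigma_n * ||operator||_2 (scales must be sorted ascending).
    chosen = np.full(len(signal), np.inf)
    for s in scales:
        t = np.arange(-int(4 * s), int(4 * s) + 1, dtype=float)
        d_gauss = -t / (s**3 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * (t / s) ** 2)
        response = np.convolve(signal, d_gauss, mode="same")
        critical = k * sigma_n * np.sqrt((d_gauss ** 2).sum())
        newly = (np.abs(response) > critical) & np.isinf(chosen)
        chosen[newly] = s
    return chosen

def blurred_step(x, center, blur, contrast):
    # An edge of the given contrast, blurred at scale `blur`.
    return contrast * np.array(
        [0.5 * (1 + erf((v - center) / (blur * sqrt(2)))) for v in x])

x = np.arange(500, dtype=float)
rng = np.random.default_rng(3)
scene = blurred_step(x, 120, 1.0, 1.0) + blurred_step(x, 350, 8.0, 0.2)
scene += 0.05 * rng.standard_normal(500)         # sensor noise, sigma_n = 0.05

chosen = min_reliable_scale(scene, 0.05, [1.0, 2.0, 4.0, 8.0])
```

The sharp, high-contrast edge is already reliable at the finest scale, while the blurred, low-contrast edge only becomes reliable at a coarser one, which is how scale selection adapts locally with no input beyond the noise level.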
A Transform for Multiscale Image Segmentation by Integrated Edge and Region Detection
IEEE Transactions on Image Processing, 1996
Cited by 120 (32 self)
Abstract:
This paper describes a new transform to extract image regions at all geometric and photometric scales. It is argued that linear approaches such as convolution and matching have the fundamental shortcoming that they require a priori models of region shape. The proposed transform avoids this limitation by letting the structure emerge, bottom-up, from interactions among pixels, in analogy with statistical mechanics and particle physics. The transform involves global computations on pairs of pixels followed by vector integration of the results, rather than scalar and local linear processing. An attraction force field is computed over the image in which pixels belonging to the same region are mutually attracted and the region is characterized by a convergent flow. It is shown that the transform possesses properties that allow multiscale segmentation, or extraction of original, unblurred structure at all the different geometric and photometric scales present in the image. This is in contrast with much of the previous work, wherein multiscale structure is viewed as the smoothed structure in a multiscale decimation of the image signal. Scale is an integral parameter of the force computation, and the number and values of scale parameters associated with the image can be estimated automatically. Regions are detected at all, a priori unknown, scales, resulting in automatic construction of a segmentation tree in which each pixel is annotated with descriptions of all the regions it belongs to. Although some of the analytical properties of the transform are presented for piecewise constant images, it is shown that the results hold for more general images, e.g., those containing noise and shading. Thus the proposed method is intended as a solution to the problem of multiscale, integrated edge and region detection, or low-level image segmentation. Experimental results with synthetic and real images are given to demonstrate the properties and segmentation performance of the transform.
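The pairwise-attraction idea can be sketched at a single, fixed scale: each pixel is pulled toward pixels of similar intensity, with a spatial weighting, and a region shows up as a convergent flow. The Gaussian weights and the single-scale treatment are simplifying assumptions; the paper integrates over scale:

```python
import numpy as np

def force_field(img, sigma_g=2.0, sigma_s=5.0):
    # Pairwise attraction: each pixel is pulled toward pixels of similar
    # intensity (photometric scale sigma_g), weighted by spatial proximity
    # (geometric scale sigma_s). O(N^2) pairs, so keep the image tiny.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    P = np.c_[ys.ravel(), xs.ravel()].astype(float)
    I = img.ravel()
    F = np.zeros_like(P)
    for i in range(len(P)):
        d = P - P[i]
        r2 = (d ** 2).sum(axis=1)
        r2[i] = np.inf                      # no self-force
        wgt = (np.exp(-(I - I[i]) ** 2 / (2 * sigma_g ** 2))
               * np.exp(-r2 / (2 * sigma_s ** 2)))
        F[i] = (wgt[:, None] * d / np.sqrt(r2)[:, None]).sum(axis=0)
    return F.reshape(h, w, 2)               # components are (dy, dx)

img = np.zeros((16, 16))
img[4:12, 4:12] = 10.0                      # one bright square region
F = force_field(img)
```

Along the square's boundary the force points into the region (rightward on its left edge, leftward on its right edge, downward on its top edge), so the region is characterized by a convergent flow, as the abstract describes.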
A Stochastic Grammar of Images
Foundations and Trends in Computer Graphics and Vision, 2006
Cited by 117 (26 self)
Abstract:
This exploratory paper quests for a stochastic and context-sensitive grammar of images. The grammar should achieve the following four objectives and thus serve as a unified framework of representation, learning, and recognition for a large number of object categories. (i) The grammar represents both the hierarchical decompositions from scenes to objects, parts, primitives, and pixels by terminal and non-terminal nodes, and the contexts for spatial and functional relations by horizontal links between the nodes. It formulates each object category as the set of all possible valid configurations produced by the grammar. (ii) The grammar is embodied in a simple And-Or graph representation where each Or-node points to alternative sub-configurations and an And-node is decomposed into a number of components. This representation supports recursive top-down/bottom-up procedures for image parsing under the Bayesian framework and makes it convenient to scale ...
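The And-Or graph structure can be sketched as a small data structure: Or-nodes choose exactly one alternative, And-nodes compose all of their children, and an object category is the set of terminal configurations the grammar derives. The node names below are invented for illustration:

```python
from itertools import product

# A toy And-Or graph. Or-nodes pick one child; And-nodes compose all
# children; names absent from the table are terminal primitives.
GRAMMAR = {
    "face":  ("and", ["eyes", "mouth"]),
    "eyes":  ("or",  ["two_open", "two_closed"]),
    "mouth": ("or",  ["smile", "neutral"]),
}

def configurations(node):
    # All valid terminal configurations derivable from `node`.
    if node not in GRAMMAR:
        return [[node]]                          # terminal primitive
    kind, children = GRAMMAR[node]
    if kind == "or":                             # exactly one alternative
        return [c for child in children for c in configurations(child)]
    combos = product(*(configurations(child) for child in children))
    return [[t for part in combo for t in part] for combo in combos]

configs = configurations("face")                 # 2 x 2 = 4 configurations
```

The stochastic version attaches probabilities to the Or-branches, so a category becomes a distribution over these configurations rather than a flat set.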
Feature-Based Human Face Detection
Image and Vision Computing, 1996
Cited by 115 (3 self)
Abstract:
Human face detection has always been an important problem for face, expression, and gesture recognition. Though numerous attempts have been made to detect and localize faces, these approaches have made assumptions that restrict their extension to more general cases. We identify that the key factor in a generic and robust system is the use of a large amount of image evidence, related and reinforced by model knowledge through a probabilistic framework. In this paper, we propose a feature-based algorithm for detecting faces that is sufficiently generic and is also easily extensible to cope with more demanding variations of the imaging conditions. The algorithm detects feature points from the image using spatial filters and groups them into face candidates using geometric and gray-level constraints. A probabilistic framework is then used to reinforce probabilities and to evaluate the likelihood of the candidate as a face. We provide results to support the validity of the approach and demo...
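The grouping-and-scoring step can be sketched as a geometric consistency check on a candidate triple of feature points. The ratio constraints and Gaussian scoring below are invented stand-ins for the paper's learned model, shown only to illustrate evaluating a candidate's likelihood as a face:

```python
import numpy as np

def face_likelihood(left_eye, right_eye, mouth):
    # Score a candidate (left eye, right eye, mouth) triple against a toy
    # frontal-face model: the mouth should sit below and midway between
    # the eyes, about one eye-distance down. All constants are invented.
    le, re, m = map(np.asarray, (left_eye, right_eye, mouth))
    eye_dist = np.linalg.norm(re - le)
    mid = (le + re) / 2.0
    drop = m[1] - mid[1]                     # vertical eye-to-mouth distance
    if eye_dist == 0 or drop <= 0:
        return 0.0                           # mouth above the eyes: reject
    ratio = drop / eye_dist                  # ~1.0 in this toy model
    offset = abs(m[0] - mid[0]) / eye_dist   # lateral mouth offset
    return float(np.exp(-(ratio - 1.0) ** 2 / 0.08)
                 * np.exp(-offset ** 2 / 0.02))

good = face_likelihood((0, 0), (10, 0), (5, 10))    # plausible layout
bad = face_likelihood((0, 0), (10, 0), (14, 3))     # implausible layout
```

A full system would combine many such pairwise and triple constraints probabilistically, letting weak individual cues reinforce each other, which is the point the abstract makes about using a large amount of related evidence.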
Multidimensional indexing for recognizing visual shapes
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994
Cited by 87 (0 self)
Abstract:
This paper introduces an analytical framework for studying some properties of model acquisition and recognition techniques based on indexing. The goal is to demonstrate that several problems previously associated with the approach can be attributed to the low dimensionality of the invariants used. These include limited index selectivity, excessive accumulation of votes in the lookup table buckets, and excessive sensitivity to quantization parameters. Theoretical results demonstrate that using high-dimensional, highly descriptive global invariants produces better results in terms of accuracy, false positive suppression, and computation time. A practical example of high-dimensional global invariants is introduced and used to implement a 2-D shape acquisition/recognition system. The acquisition/recognition system is based on a two-step table lookup mechanism. First, local curve descriptors are obtained by correlating image contour information at short range. Then, seven-dimensional global invariants are computed by correlating triplets of local curve descriptors at longer range. This experimental system is meant to illustrate the behavior of a high-dimensional indexing scheme. Indeed, its performance shows good agreement with the analytical model with respect to database size, fault tolerance, and recognition speed. Model acquisition time is linear to cubic in the number of object features. Object recognition time is constant to linear in the number of models in the database and linear to cubic in the number of features in the image. The system has been tested extensively, with more than 250 arbitrary shapes in the database. Unsupervised shape and subpart acquisition is demonstrated.
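The table-lookup mechanism can be sketched in a few lines: quantize an invariant vector into a bucket, store models under their buckets at acquisition time, and retrieve bucket-mates at recognition time. The 3-D invariants, cell size, and model names below are illustrative assumptions, not the paper's seven-dimensional descriptors:

```python
import numpy as np

def quantize(inv, cell=0.25):
    # Bucket a d-dimensional invariant vector for table lookup; the cell
    # size trades quantization sensitivity against index selectivity.
    return tuple(np.floor(np.asarray(inv) / cell).astype(int))

class InvariantIndex:
    # Acquisition stores each model under the bucket of its invariant;
    # recognition votes for the models sharing a bucket with a query.
    def __init__(self):
        self.table = {}

    def add(self, model, inv):
        self.table.setdefault(quantize(inv), []).append(model)

    def query(self, inv):
        return self.table.get(quantize(inv), [])

idx = InvariantIndex()
idx.add("wrench", [0.1, 0.6, 0.3])
idx.add("hammer", [0.9, 0.1, 0.4])
hits = idx.query([0.12, 0.55, 0.27])   # lands in the wrench's bucket
```

The paper's argument is that as the invariant dimension grows, buckets become far more selective, so vote accumulation stays sparse and recognition time becomes nearly independent of database size.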