Results 1–10 of 193
Space-time Interest Points
In ICCV, 2003
Abstract

Cited by 474 (14 self)
Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we propose to extend the notion of spatial interest points into the spatiotemporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for its interpretation. To detect ...
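The idea behind this detector can be sketched as a Harris-style operator built from the 3x3 spatio-temporal structure tensor of a video volume. The sketch below is a toy version under illustrative assumptions (the scales sigma, tau and the constant k are our choices, not the paper's exact detector):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spacetime_interest(video, sigma=1.5, tau=1.5, k=0.005):
    """Harris-like response on a (T, H, W) volume from the 3x3 structure tensor."""
    L = gaussian_filter(video.astype(float), (tau, sigma, sigma))
    Lt, Ly, Lx = np.gradient(L)
    s = (2 * tau, 2 * sigma, 2 * sigma)  # integration (window) scales
    Mxx = gaussian_filter(Lx * Lx, s); Myy = gaussian_filter(Ly * Ly, s)
    Mtt = gaussian_filter(Lt * Lt, s); Mxy = gaussian_filter(Lx * Ly, s)
    Mxt = gaussian_filter(Lx * Lt, s); Myt = gaussian_filter(Ly * Lt, s)
    # det(M) - k * trace(M)^3, the 3-D analogue of the Harris measure.
    det = (Mxx * (Myy * Mtt - Myt ** 2)
           - Mxy * (Mxy * Mtt - Myt * Mxt)
           + Mxt * (Mxy * Myt - Myy * Mxt))
    trace = Mxx + Myy + Mtt
    return det - k * trace ** 3

# A square that appears at t = 8: a spatio-temporal "event".
video = np.zeros((16, 32, 32))
video[8:, 10:20, 10:20] = 1.0
H = spacetime_interest(video)
t, y, x = np.unravel_index(np.argmax(H), H.shape)  # strongest response
```

Interest points would then be local maxima of `H` over space, time and scale.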
Blobworld: Image segmentation using Expectation-Maximization and its application to image querying
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
Abstract

Cited by 360 (10 self)
Retrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation which provides a transformation from the raw pixel data to a small set of image regions which are coherent in color and texture. This "Blobworld" representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions whi...
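The clustering step can be sketched as EM for a Gaussian mixture over joint (color, position) pixel features. This is a minimal sketch, assuming isotropic components and a toy two-region image; the feature choice and K are illustrative, not the paper's:

```python
import numpy as np

def em_gmm(X, K, iters=40):
    """EM for an isotropic Gaussian mixture; farthest-point initialization."""
    n, d = X.shape
    mu = [X[0]]
    for _ in range(1, K):  # spread the initial means out
        d2 = ((X[:, None, :] - np.array(mu)[None]) ** 2).sum(-1).min(1)
        mu.append(X[np.argmax(d2)])
    mu = np.array(mu, dtype=float)
    var = np.full(K, X.var() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities under each isotropic Gaussian.
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = -0.5 * d2 / var - 0.5 * d * np.log(var) + np.log(pi)
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        Nk = r.sum(0) + 1e-9
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (d * Nk) + 1e-6
    return r.argmax(1)

# Two synthetic "blobs": a dark left half and a bright right half, clustered
# in a joint (color, y, x) feature space in the spirit of Blobworld.
h, w = 20, 20
yy, xx = np.mgrid[0:h, 0:w]
color = (xx >= w // 2).astype(float) * 4.0  # stand-in for a color feature
X = np.stack([color.ravel(), yy.ravel() / h, xx.ravel() / w], axis=1)
labels = em_gmm(X, K=2).reshape(h, w)
```

Each connected group of same-label pixels then plays the role of a "blob".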
The Computation of Optical Flow
1995
Abstract

Cited by 235 (10 self)
Two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus of expansion calculations, motion-compensated encoding and stereo disparity measurement. We investiga...
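One of the classical differential techniques this survey covers, Lucas-Kanade, estimates velocity by least squares on the brightness-constancy constraint. A minimal sketch, assuming a single global translation between two synthetic frames:

```python
import numpy as np

def lucas_kanade_global(I1, I2):
    """Least-squares solve of Ix*u + Iy*v + It = 0 over the whole image."""
    Iy, Ix = np.gradient(I1.astype(float))   # axis 0 is y, axis 1 is x
    It = I2.astype(float) - I1.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (u, v) in pixels per frame

# A smooth blob translated by (1, 0) pixels between the two frames.
yy, xx = np.mgrid[0:64, 0:64]
I1 = np.exp(-((xx - 30) ** 2 + (yy - 32) ** 2) / 50.0)
I2 = np.exp(-((xx - 31) ** 2 + (yy - 32) ** 2) / 50.0)
u, v = lucas_kanade_global(I1, I2)
```

The per-pixel (windowed) variant replaces the global sums with local ones, giving a dense but noisier flow field.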
Filterbank-based fingerprint matching
IEEE Transactions on Image Processing, 2000
Abstract

Cited by 160 (26 self)
With identity fraud in our society reaching unprecedented proportions and with an increasing emphasis on the emerging automatic personal identification applications, biometrics-based verification, especially fingerprint-based identification, is receiving a lot of attention. There are two major shortcomings of the traditional approaches to fingerprint representation. For a considerable fraction of the population, representations based on explicit detection of complete ridge structures in the fingerprint are difficult to extract automatically. The widely used minutiae-based representation does not utilize a significant component of the rich discriminatory information available in fingerprints. Local ridge structures cannot be completely characterized by minutiae. Further, minutiae-based matching has difficulty in quickly matching two fingerprint images containing different numbers of unregistered minutiae points. The proposed filter-based algorithm uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed-length FingerCode. Fingerprint matching is based on the Euclidean distance between the two corresponding FingerCodes and hence is extremely fast. We are able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature [1]. Our system performs better than a state-of-the-art minutiae-based system when the performance requirement of the application does not demand a very low false acceptance rate. Finally, we show that the matching performance can be improved by combining the decisions of matchers based on complementary (minutiae-based and filter-based) fingerprint information.
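The filterbank idea can be sketched as: filter the image with a small bank of Gabor filters, pool the response magnitude over a grid of cells into a fixed-length vector, and match by Euclidean distance. The filter parameters, the 4x4 grid and the rectangular (rather than the paper's sector-based) tessellation are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, lam=8.0, sigma=4.0, size=17):
    """Even (cosine) Gabor filter at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam)

def finger_code(img, n_orient=4, grid=4):
    """Fixed-length code: mean |response| per (orientation, grid cell)."""
    img = img.astype(float) - img.mean()
    feats = []
    for i in range(n_orient):
        k = gabor_kernel(np.pi * i / n_orient)
        resp = np.abs(fftconvolve(img, k, mode="same"))
        for rows in np.array_split(resp, grid, axis=0):
            for cell in np.array_split(rows, grid, axis=1):
                feats.append(cell.mean())
    return np.array(feats)

# Matching is just Euclidean distance between the fixed-length codes.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))
d_self = np.linalg.norm(finger_code(a) - finger_code(a))
d_other = np.linalg.norm(finger_code(a) - finger_code(b))
```

Because every image maps to the same-length vector, matching is a constant-time distance computation, which is what makes the scheme fast.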
Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods
International Journal of Computer Vision, 2005
Abstract

Cited by 157 (13 self)
Differential methods belong to the most widely used techniques for optic flow computation in image sequences. They can be classified into local methods, such as the Lucas–Kanade technique or Bigün’s structure tensor method, and into global methods, such as the Horn/Schunck approach and its extensions. Often local methods are more robust under noise, while global techniques yield dense flow fields. The goal of this paper is to contribute to a better understanding and the design of novel differential methods in four ways: (i) We juxtapose the role of smoothing/regularisation processes that are required in local and global differential methods for optic flow computation. (ii) This discussion motivates us to describe and evaluate a novel method that combines important advantages of local and global approaches: it yields dense flow fields that are robust against noise. (iii) Spatiotemporal and nonlinear extensions as well as multiresolution frameworks are presented for this hybrid method. (iv) We propose a simple confidence measure for optic flow methods that minimise energy functionals. It allows a dense flow field to be sparsified gradually, depending on the reliability required of the resulting flow. Comparisons with experiments from the literature demonstrate the favourable performance of the proposed methods and the confidence measure.
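The combination can be sketched as: a Lucas-Kanade structure tensor supplies the data term, a Horn-Schunck smoothness term couples neighbouring pixels, and a fixed-point iteration solves a 2x2 system per pixel. This is a toy version with illustrative parameters, not the paper's exact scheme:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def clg_flow(I1, I2, alpha=0.01, rho=3.0, iters=100):
    I1 = I1.astype(float); I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    # "Local" part: structure tensor entries smoothed at scale rho.
    J11 = gaussian_filter(Ix * Ix, rho)
    J22 = gaussian_filter(Iy * Iy, rho)
    J12 = gaussian_filter(Ix * Iy, rho)
    J13 = gaussian_filter(Ix * It, rho)
    J23 = gaussian_filter(Iy * It, rho)
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        # "Global" part: couple each pixel to its neighbourhood mean,
        # then solve the resulting 2x2 system exactly per pixel.
        ub = uniform_filter(u, 3)
        vb = uniform_filter(v, 3)
        det = (alpha + J11) * (alpha + J22) - J12 ** 2
        ru = alpha * ub - J13
        rv = alpha * vb - J23
        u = ((alpha + J22) * ru - J12 * rv) / det
        v = ((alpha + J11) * rv - J12 * ru) / det
    return u, v

# A smooth blob translated by (1, 0) pixels.
yy, xx = np.mgrid[0:48, 0:48]
I1 = np.exp(-((xx - 22) ** 2 + (yy - 24) ** 2) / 40.0)
I2 = np.exp(-((xx - 23) ** 2 + (yy - 24) ** 2) / 40.0)
u, v = clg_flow(I1, I2)
w = np.gradient(I1)[1] ** 2  # weight the estimate by horizontal gradient energy
u_est = (u * w).sum() / w.sum()
```

The smoothing of the tensor (rho) is the "local" robustness ingredient; the coupling weight alpha controls how strongly the "global" smoothness fills in flow where the data term is weak.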
Orientation Diffusions
IEEE Transactions on Image Processing, 1998
Abstract

Cited by 134 (0 self)
Diffusions are useful for image processing and computer vision because they provide a convenient way of smoothing noisy data, analyzing images at multiple scales, and enhancing discontinuities. A number of diffusions of image brightness have been defined and studied so far; they may be applied to scalar and vector-valued quantities that are naturally associated with intervals of either the real line or other flat manifolds. Some quantities of interest in computer vision, and in other areas of engineering that deal with images, are defined on curved manifolds; typical examples are orientation and hue, which are defined on the circle. Generalizing brightness diffusions to orientation is not straightforward, especially in the case where a discrete implementation is sought. An example of what may go wrong is presented. A method is proposed to define diffusions of orientation-like quantities. First a definition in the continuum is discussed, then a discrete orientation diffusion is proposed. The behavior of such diffusions is explored both analytically and experimentally. It is shown how such orientation diffusions contain a nonlinearity that is reminiscent of edge-process and anisotropic diffusion. A number of open questions are proposed at the end. Index Terms: Orientation analysis, texture analysis, diffusions, scale-space.
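The core difficulty, averaging across the angular wrap-around, can be illustrated with one common discretization (not necessarily the paper's exact scheme): embed each angle as a unit vector, diffuse the vector components, and read the angle back off, so that values just either side of pi and -pi average correctly instead of cancelling to zero:

```python
import numpy as np

def diffuse_orientation(theta, steps=50, dt=0.2):
    """Explicit heat diffusion of (cos, sin) with periodic boundaries."""
    c, s = np.cos(theta), np.sin(theta)
    for _ in range(steps):
        for a in (c, s):  # the two components diffuse independently
            lap = (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                   np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
            a += dt * lap  # in-place update (dt <= 0.25 for stability)
    return np.arctan2(s, c)

# Two half-planes with angles just either side of the pi / -pi wrap-around.
theta = np.full((16, 16), np.pi - 0.1)
theta[:, 8:] = -np.pi + 0.1
smoothed = diffuse_orientation(theta)
```

A naive diffusion of the raw angle values would interpolate through 0 at the boundary; the vector embedding keeps the smoothed orientations near +/-pi, which is the circularly correct average.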
A review of statistical approaches to level set segmentation: Integrating color, texture, motion and shape
International Journal of Computer Vision, 2007
Abstract

Cited by 93 (4 self)
Since their introduction as a means of front propagation and their first application to edge-based segmentation in the early 1990s, level set methods have become increasingly popular as a general framework for image segmentation. In this paper, we present a survey of a specific class of region-based level set segmentation methods and clarify how they can all be derived from a common statistical framework. Region-based segmentation schemes aim at partitioning the image domain by progressively fitting statistical models to the intensity, color, texture or motion in each of a set of regions. In contrast to edge-based schemes such as the classical Snakes, region-based methods tend to be less sensitive to noise. For typical images, the respective cost functionals tend to have fewer local minima, which makes them particularly well-suited for local optimization methods such as the level set method. We detail a general statistical formulation for level set segmentation. Subsequently, we clarify how the integration of various low-level criteria leads to a set of cost functionals and point out relations between the different segmentation schemes. In experimental results, we demonstrate how the level set function is driven to partition the image plane into domains of coherent color, texture, dynamic texture or motion. Moreover, the Bayesian formulation makes it possible to introduce prior shape knowledge into the level set method. We briefly review a number of advances in this domain.
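The simplest member of this class fits a constant intensity model to each side of the evolving interface (a Chan-Vese-style two-phase model). A minimal sketch, with the curvature regularizer omitted and all parameters illustrative:

```python
import numpy as np

def two_phase_segment(img, iters=100, dt=0.5):
    """Evolve a level set so each side fits a constant intensity model."""
    img = img.astype(float)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    # Initialize phi as the signed distance to a centered circle.
    phi = np.hypot(yy - img.shape[0] / 2,
                   xx - img.shape[1] / 2) - min(img.shape) / 3
    for _ in range(iters):
        inside = phi < 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        # Gradient descent on the piecewise-constant fitting energy:
        # a pixel moves toward the region whose mean fits it better.
        phi += dt * ((img - c1) ** 2 - (img - c2) ** 2)
    return phi < 0

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0  # a bright square on a dark background
mask = two_phase_segment(img)
```

The full region-based schemes the survey covers add a length (curvature) prior on the boundary and replace the constant means by richer statistical models of color, texture or motion.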
Direct Computation of Shape Cues Using Scale-Adapted Spatial Derivative Operators
International Journal of Computer Vision, 1996
Abstract

Cited by 59 (9 self)
This paper addresses the problem of computing cues to the three-dimensional structure of surfaces in the world directly from the local structure of the brightness pattern of either a single monocular image or a binocular image pair. It is shown that starting from Gaussian derivatives of order up to two at a range of scales in scale-space, local estimates of (i) surface orientation from monocular texture foreshortening, (ii) surface orientation from monocular texture gradients, and (iii) surface orientation from the binocular disparity gradient can be computed without iteration or search, and by using essentially the same basic mechanism. The methodology is based on a multiscale descriptor of image structure called the windowed second moment matrix, which is computed with adaptive selection of both scale levels and spatial positions. Notably, this descriptor comprises two scale parameters: a local scale parameter describing the amount of smoothing used in derivative computations, and a ...
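The two-scale descriptor can be sketched as follows: smooth at a local (derivative) scale t before differentiating, then window the gradient outer products with a Gaussian at an integration scale s. A toy stand-in with illustrative scale values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_matrix(img, t=1.0, s=4.0):
    """Windowed second moment matrix with local scale t, integration scale s."""
    L = gaussian_filter(img.astype(float), t)  # local scale before derivatives
    Ly, Lx = np.gradient(L)
    mu = np.empty(img.shape + (2, 2))
    # Integration scale: Gaussian window over the gradient outer products.
    mu[..., 0, 0] = gaussian_filter(Lx * Lx, s)
    mu[..., 0, 1] = mu[..., 1, 0] = gaussian_filter(Lx * Ly, s)
    mu[..., 1, 1] = gaussian_filter(Ly * Ly, s)
    return mu

# Vertical stripes: gradient energy concentrates in the x-x component.
xx = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
mu = second_moment_matrix(xx)
center = mu[32, 32]
```

The anisotropy of this matrix (ratio and orientation of its eigenvalues) is what carries the foreshortening and texture-gradient cues; the paper's contribution includes choosing t and s adaptively rather than fixing them as done here.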
Shape-adapted smoothing in estimation of 3D shape cues from affine distortions of local 2D brightness structure
2001
Abstract

Cited by 56 (3 self)
This article describes a method for reducing the shape distortions due to scale-space smoothing that arise in the computation of 3D shape cues using operators (derivatives) defined from scale-space representation. More precisely, we are concerned with a general class of methods for deriving 3D shape cues from 2D image data based on the estimation of locally linearized deformations of brightness patterns. This class ...
Motion competition: a variational approach to piecewise parametric motion segmentation
International Journal of Computer Vision, 2005
Abstract

Cited by 55 (8 self)
We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatiotemporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. We propose two different representations of this motion boundary: an explicit spline-based implementation, which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation, which allows for the segmentation of an arbitrary number of multiply connected moving objects. Numerical results both for simulated ground truth experiments and for real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion.