Results 11–20 of 1,870
Edge Detection and Ridge Detection with Automatic Scale Selection
CVPR '96, 1996
Abstract

Cited by 344 (24 self)
When extracting features from image data, the type of information that can be extracted may be strongly dependent on the scales at which the feature detectors are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important property of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analysed in detail. It is shown that by expressing these in terms of γ-normalized derivatives, an immediate consequence of this definition is that fine scales are selected for sharp edges (so as to reduce the shape distortions due to scale-space smoothing), whereas coarse scales are selected for diffuse edges, such that an edge model constitutes a valid abstraction of the intensity profile across the edge. With slight modifications, this idea can be used for formulating a ridge detector with automatic scale selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge.
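The γ-normalized scale-selection principle described above can be sketched numerically: smooth the image with Gaussians of increasing variance t, record the normalized gradient magnitude t^(γ/2)·|∇L| at every pixel, and select the scale at which this response is maximal. The function and parameter names below are illustrative, not from the paper; only the t^(γ/2) normalization of the first-order gradient follows the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_edge_scale(image, scales, gamma=0.5):
    """Per-pixel scale selection: pick the scale t maximizing the
    gamma-normalized gradient magnitude t**(gamma/2) * |grad L|."""
    responses = []
    for t in scales:
        L = gaussian_filter(image.astype(float), sigma=np.sqrt(t))
        Ly, Lx = np.gradient(L)
        responses.append(t ** (gamma / 2.0) * np.hypot(Lx, Ly))
    responses = np.stack(responses)            # (n_scales, H, W)
    best = np.argmax(responses, axis=0)        # index of the winning scale
    return np.take(np.asarray(scales, dtype=float), best)

x = np.linspace(-1.0, 1.0, 64)
sharp = np.tile((x > 0).astype(float), (64, 1))   # sharp step edge
diffuse = gaussian_filter(sharp, sigma=4.0)       # same edge, heavily blurred
scales = [1.0, 4.0, 16.0, 64.0]
s_sharp = select_edge_scale(sharp, scales)
s_diffuse = select_edge_scale(diffuse, scales)
```

On the sharp step the normalized response decays with t, so the finest scale wins; pre-blurring the step makes the response peak near the blur variance, so a coarser scale is selected, which is exactly the behaviour the abstract describes.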
A Riemannian Framework for Tensor Computing
International Journal of Computer Vision, 2006
Abstract

Cited by 282 (27 self)
Positive definite symmetric matrices (so-called tensors in this article) are nowadays a common source of geometric information. In this paper, we propose to provide the tensor space with an affine-invariant Riemannian metric. We demonstrate that it leads to strong theoretical properties: the cone of positive definite symmetric matrices is replaced by a regular manifold of constant curvature without boundaries (null eigenvalues are at infinity), the geodesic between two tensors and the mean of a set of tensors are uniquely defined, etc.
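The affine-invariant metric referred to above admits a closed-form geodesic distance between two SPD matrices, d(A, B) = ‖log(A^(−1/2) B A^(−1/2))‖_F. A minimal numerical sketch (the helper name is ours, not the paper's), including a check of the invariance d(WᵀAW, WᵀBW) = d(A, B):

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def affine_invariant_dist(A, B):
    """Geodesic distance in the affine-invariant metric on SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F (Frobenius norm)."""
    A_inv_sqrt = np.linalg.inv(np.real(sqrtm(A)))
    M = A_inv_sqrt @ B @ A_inv_sqrt          # SPD, so its log is real
    return np.linalg.norm(np.real(logm(M)), "fro")

A = np.diag([2.0, 3.0])
B = np.diag([1.0, 5.0])
d = affine_invariant_dist(A, B)

# Invariance under any congruence W^T (.) W with W invertible:
W = np.array([[2.0, 1.0], [0.0, 1.0]])
d_transformed = affine_invariant_dist(W.T @ A @ W, W.T @ B @ W)
```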
On the Unification of Line Processes, Outlier Rejection, and Robust Statistics with Applications in Early Vision
1996
Abstract

Cited by 273 (9 self)
The modeling of spatial discontinuities for problems such as surface recovery, segmentation, image reconstruction, and optical flow has been intensely studied in computer vision. While "line-process" models of discontinuities have received a great deal of attention, there has been recent interest in the use of robust statistical techniques to account for discontinuities. This paper unifies the two approaches. To achieve this we generalize the notion of a "line process" to that of an analog "outlier process" and show how a problem formulated in terms of outlier processes can be viewed in terms of robust statistics. We also characterize a class of robust statistical problems for which an equivalent outlier-process formulation exists and give a straightforward method for converting a robust estimation problem into an outlier-process formulation. We show how prior assumptions about the spatial structure of outliers can be expressed as constraints on the recovered analog outlier processes and how traditional continuation methods can be extended to the explicit outlier-process formulation. These results indicate that the outlier-process approach provides a general framework which subsumes the traditional line-process approaches as well as a wide class of robust estimation problems. Examples in surface reconstruction, image segmentation, and optical flow are presented to illustrate the use of outlier processes and to show how the relationship between outlier processes and robust statistics can be exploited. An appendix provides a catalog of common robust error norms and their equivalent outlier-process formulations.
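The norm-to-outlier-process conversion can be illustrated with the Geman–McClure norm ρ(x) = x²/(1 + x²) (taking σ = 1), whose equivalent outlier-process penalty is Ψ(z) = (√z − 1)²; minimizing z·x² + Ψ(z) over the analog outlier process z ∈ [0, 1] recovers ρ. A small numerical check (function names are ours):

```python
import numpy as np

def geman_mcclure(x):
    """Geman-McClure robust norm rho(x) = x^2 / (1 + x^2), sigma = 1."""
    return x**2 / (1.0 + x**2)

def outlier_process_objective(x, z):
    """Equivalent outlier-process form: z * x^2 + psi(z), where
    psi(z) = (sqrt(z) - 1)^2 penalizes declaring the datum an outlier."""
    return z * x**2 + (np.sqrt(z) - 1.0)**2

# Minimizing over the analog outlier process z recovers the robust norm:
xs = np.linspace(-5.0, 5.0, 101)
zs = np.linspace(1e-6, 1.0, 2001)
minimized = np.array([outlier_process_objective(x, zs).min() for x in xs])
```

Inliers (small x) get z* near 1 and are penalized quadratically; gross outliers drive z* toward 0, capping their influence, which is precisely the line-process intuition generalized to an analog variable.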
Gradient flows and geometric active contour models
In Proc. of the 5th International Conference on Computer Vision, 1995
Abstract

Cited by 239 (18 self)
In this paper, we analyze the geometric active contour models discussed in [6, 18] from a curve evolution point of view and propose some modifications based on gradient flows relative to certain new feature-based Riemannian metrics. This leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus the snake is attracted very naturally and efficiently to the desired feature. Moreover, we consider some 3-D active surface models based on these ideas.
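A common choice of feature-based conformal factor in such models is g = 1/(1 + |∇(G_σ ∗ I)|²), which is close to 1 in flat regions and small on edges, so the weighted metric places the "potential well" at the feature. The sketch below computes only this edge-stopping factor, not the full curve-evolution flow; names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_stopping(image, sigma=1.0):
    """Conformal factor g = 1 / (1 + |grad(G_sigma * I)|^2): near 1 in
    flat regions, small on edges, so evolving curves are drawn to and
    slowed at image features."""
    L = gaussian_filter(image.astype(float), sigma)   # pre-smooth
    Ly, Lx = np.gradient(L)
    return 1.0 / (1.0 + Lx**2 + Ly**2)

img = np.zeros((32, 32))
img[:, 16:] = 10.0          # vertical step edge between columns 15 and 16
g = edge_stopping(img)
```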
A geometrical framework for low level vision
IEEE Trans. on Image Processing, 1998
Abstract

Cited by 223 (35 self)
Abstract—We introduce a new geometrical framework based on which natural flows for image scale space and enhancement are presented. We consider intensity images as surfaces embedded in space: the image is thereby a two-dimensional (2-D) surface in three-dimensional (3-D) space for gray-level images, and a 2-D surface in five dimensions for color images. The new formulation unifies many classical schemes and algorithms via a simple scaling of the intensity contrast, and results in new and efficient schemes. Extensions to multidimensional signals become natural and lead to powerful denoising and scale-space algorithms. Index Terms—Color image processing, image enhancement, image smoothing, nonlinear image diffusion, scale-space.
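Treating a gray-level image as the surface (x, y, βI) induces the metric determinant g = 1 + β²(I_x² + I_y²), and the geometric flow moves I by the Laplace–Beltrami operator of that surface. The explicit step below assumes the standard grayscale form Δ_g I = [(1 + β²I_y²)I_xx − 2β²I_xI_yI_xy + (1 + β²I_x²)I_yy]/g²; it is a rough finite-difference sketch, not the authors' numerical scheme:

```python
import numpy as np

def beltrami_step(I, beta=1.0, dt=0.05):
    """One explicit step of a Beltrami-type flow for a grayscale image,
    using nested central differences for the second derivatives."""
    Iy, Ix = np.gradient(I)            # axis 0 = y, axis 1 = x
    Ixy, Ixx = np.gradient(Ix)         # d(Ix)/dy, d(Ix)/dx
    Iyy, _ = np.gradient(Iy)           # d(Iy)/dy
    g = 1.0 + beta**2 * (Ix**2 + Iy**2)
    lap_beltrami = ((1.0 + beta**2 * Iy**2) * Ixx
                    - 2.0 * beta**2 * Ix * Iy * Ixy
                    + (1.0 + beta**2 * Ix**2) * Iyy) / g**2
    return I + dt * lap_beltrami

rng = np.random.default_rng(0)
I0 = rng.normal(0.0, 1.0, (32, 32))    # pure-noise test image
I = I0.copy()
for _ in range(10):                    # a few denoising steps
    I = beltrami_step(I)
```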
Simultaneous Structure and Texture Image Inpainting
2003
Abstract

Cited by 220 (13 self)
An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded-variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled in with texture synthesis techniques. The original image is then reconstructed by adding back these two sub-images. The novel contribution of this paper is then in the combination of these three previously developed components, image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach.
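The pipeline in the abstract (decompose, fill each part, add back) can be caricatured in a few lines. Below, a Gaussian low-pass stands in for the bounded-variation component (the paper uses a variational decomposition) and repeated neighbor averaging stands in for the structure-inpainting step; the texture-synthesis step is omitted. All names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(I, sigma=2.0):
    """Rough stand-in for the BV + texture decomposition: a smooth
    'structure' part and a 'texture + noise' residual."""
    u = gaussian_filter(I, sigma)
    return u, I - u

def harmonic_inpaint(u, mask, iters=500):
    """Fill the masked region by repeated 4-neighbor averaging
    (discrete harmonic / Laplace inpainting); unmasked pixels are fixed."""
    u = u.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]
    return u

I = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # smooth ramp image
structure, texture = decompose(I)
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True                          # hole to fill
damaged = structure.copy()
damaged[12:20, 12:20] = 0.0
filled = harmonic_inpaint(damaged, mask)
```

On this purely smooth image the texture residual is essentially zero and harmonic filling reproduces the ramp inside the hole; real images would also need the texture-synthesis branch.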
An Image-Based Approach to Three-Dimensional Computer Graphics
1997
Abstract

Cited by 206 (6 self)
The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image’s contents. The utility of the image-based approach can be enhanced with a more general pinhole-camera
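The forward-warping idea, in which each reference pixel is mapped by its generalized disparity and conflicts between sources landing on one destination must be resolved, can be sketched on a single scanline. Note that McMillan's actual visibility algorithm uses an occlusion-compatible traversal order derived from the epipole rather than the explicit disparity sort used here; this is only an illustrative stand-in:

```python
import numpy as np

def forward_warp_1d(colors, disparity, baseline):
    """Warp one scanline to a laterally shifted view: pixel i maps to
    i + baseline * disparity[i].  Drawing far pixels (small disparity)
    first and near pixels last resolves conflicts in favor of the
    nearer surface; unwritten pixels remain NaN (disocclusion holes)."""
    n = len(colors)
    out = np.full(n, np.nan)
    order = np.argsort(disparity, kind="stable")   # far first, near last
    for i in order:
        x = int(round(i + baseline * disparity[i]))
        if 0 <= x < n:
            out[x] = colors[i]
    return out

colors = np.zeros(12)
colors[4:7] = 9.0            # a near object (value 9) on a far background
disparity = np.zeros(12)
disparity[4:7] = 2.0         # near surface has larger generalized disparity
warped = forward_warp_1d(colors, disparity, baseline=1.0)
```

The near object shifts by two pixels and correctly occludes the background at its new location, while a NaN hole opens where it used to be.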
Brook for GPUs: Stream Computing on Graphics Hardware
ACM Transactions on Graphics, 2004
Abstract

Cited by 204 (9 self)
In this paper, we present Brook for GPUs, a system for general-purpose computation on programmable graphics hardware. Brook extends C to include simple data-parallel constructs, enabling the use of the GPU as a streaming coprocessor. We present a compiler and runtime system that abstracts and virtualizes many aspects of graphics hardware. In addition, we present an analysis of the effectiveness of the GPU as a compute engine compared to the CPU, to determine when the GPU can outperform the CPU for a particular algorithm. We evaluate our system with five applications: the SAXPY and SGEMV BLAS operators, image segmentation, FFT, and ray tracing. For these applications, we demonstrate that our Brook implementations perform comparably to hand-written GPU code and up to seven times faster than their CPU counterparts.
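SAXPY (y ← αx + y), the first of the BLAS operators benchmarked above, is the canonical example of a data-parallel stream kernel: one pure function applied independently to every element. A Brook kernel cannot run here, so numpy's vectorized evaluation stands in for the per-element kernel:

```python
import numpy as np

def saxpy_kernel(a, x, y):
    """SAXPY as a data-parallel map: out[i] = a * x[i] + y[i] for all i,
    with no dependence between elements (the property a stream kernel needs)."""
    return a * x + y

x = np.arange(4, dtype=np.float32)   # [0, 1, 2, 3]
y = np.ones(4, dtype=np.float32)
z = saxpy_kernel(2.0, x, y)
```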
Mean Shift Analysis and Applications
1999
Abstract

Cited by 200 (8 self)
A nonparametric estimator of density gradient, the mean shift, is employed in the joint, spatial-range (value) domain of gray-level and color images for discontinuity-preserving filtering and image segmentation. Properties of the mean shift are reviewed and its convergence on lattices is proven. The proposed filtering method associates with each pixel in the image the closest local mode in the density distribution of the joint domain. Segmentation into a piecewise constant structure requires only one more step: fusion of the regions associated with nearby modes. The proposed technique has two parameters controlling the resolution in the spatial and range domains. Since convergence is guaranteed, the technique does not require the intervention of the user to stop the filtering at the desired image quality. Several examples, for gray and color images, show the versatility of the method and compare favorably with results described in the literature for the same images.
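A minimal 1-D version of the joint-domain filter conveys the idea: each sample iterates toward the mean of its neighbors within a spatial radius hs and a range radius hr, and is then replaced by the value of the mode it converges to. The flat kernel and fixed iteration count are simplifications (the paper proves convergence for the actual procedure), and the parameter names are ours:

```python
import numpy as np

def mean_shift_filter_1d(signal, hs=3.0, hr=0.3, iters=20):
    """Joint spatial-range mean shift filtering of a 1-D signal with a
    flat kernel: window = {j : |pos_j - p| <= hs and |signal_j - v| <= hr}."""
    n = len(signal)
    pos = np.arange(n, dtype=float)
    out = np.empty(n)
    for i in range(n):
        p, v = pos[i], signal[i]
        for _ in range(iters):       # iterate the mean-shift update
            w = (np.abs(pos - p) <= hs) & (np.abs(signal - v) <= hr)
            p, v = pos[w].mean(), signal[w].mean()
        out[i] = v                   # replace pixel by its mode's value
    return out

# A noisy step: the range window never straddles the discontinuity,
# so smoothing happens within each side and the edge is preserved.
s = np.where(np.arange(20) < 10, 0.0, 1.0) + 0.05 * np.cos(np.arange(20))
filtered = mean_shift_filter_1d(s)
```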
Object Bank: A High-Level Image Representation for Scene Classification & Semantic Feature Sparsification
Abstract

Cited by 198 (6 self)
Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meaning. For high-level visual tasks, such low-level image representations are potentially not enough. In this paper, we propose a high-level image representation, called the Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors, blind to the testing dataset or visual task. Leveraging the Object Bank representation, superior performance on high-level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. Sparsity algorithms make our representation more efficient and scalable for large scene datasets, and reveal semantically meaningful feature patterns.
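The representation can be sketched as follows: each pre-trained detector yields a response map over the image, and pooling those maps spatially (here, max pooling over a small 1×1 plus 2×2 grid) produces one feature vector for an off-the-shelf linear classifier. The grid sizes and names are illustrative, not the paper's exact configuration:

```python
import numpy as np

def object_bank_feature(response_maps, levels=(1, 2)):
    """Concatenate spatial max-pooled responses: for each detector's map,
    pool over a 1x1 grid and a 2x2 grid and append the cell maxima."""
    feats = []
    for rmap in response_maps:              # one map per object detector
        h, w = rmap.shape
        for g in levels:
            for i in range(g):
                for j in range(g):
                    cell = rmap[i * h // g:(i + 1) * h // g,
                                j * w // g:(j + 1) * w // g]
                    feats.append(cell.max())
    return np.array(feats)

# Two toy "detectors": one silent everywhere, one firing everywhere.
maps = [np.zeros((8, 8)), np.ones((8, 8))]
f = object_bank_feature(maps)               # 5 pooled values per detector
```

The resulting fixed-length vector is what a logistic-regression or linear-SVM classifier would consume; the sparsity step in the paper would then select informative detector/region entries.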