Results 1–10 of 175
Color indexing
 International Journal of Computer Vision
, 1991
Cited by 1324 (24 self)
Computer vision is embracing a new research focus in which the aim is to develop visual skills for robots that allow them to interact with a dynamic, realistic environment. To achieve this aim, new kinds of vision algorithms need to be developed which run in real time and subserve the robot's goals. Two fundamental goals are determining the identity of an object with a known location, and determining the location of a known object. Color can be successfully used for both tasks. This article demonstrates that color histograms of multicolored objects provide a robust, efficient cue for indexing into a large database of models. It shows that color histograms are stable object representations in the presence of occlusion and over change in view, and that they can differentiate among a large number of objects. For solving the identification problem, it introduces a technique called Histogram Intersection, which matches model and image histograms, and a fast incremental version of Histogram Intersection, which allows real-time indexing into a large database of stored models. For solving the location problem it introduces an algorithm called Histogram Backprojection, which performs this task efficiently in crowded scenes.
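The Histogram Intersection score described above is simple to state: the bin-wise overlap of the image and model histograms, normalized by the model's pixel count. A minimal sketch (the function name and toy histograms are illustrative, not taken from the paper):

```python
def histogram_intersection(image_hist, model_hist):
    """Histogram Intersection: sum of bin-wise minima, normalized by the
    total pixel count of the model histogram. 1.0 means a perfect match."""
    assert len(image_hist) == len(model_hist)
    overlap = sum(min(i, m) for i, m in zip(image_hist, model_hist))
    return overlap / sum(model_hist)

# Toy 4-bin color histograms (hypothetical counts):
image = [10, 0, 5, 5]
model = [8, 2, 5, 5]
print(histogram_intersection(image, model))  # min sums: 8+0+5+5 = 18; 18/20 = 0.9
```

Because the score only involves minima and a fixed normalization, it degrades gracefully under occlusion: occluded model pixels can only lower the overlap, never corrupt it, which is one reason the representation is robust.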
Three-dimensional object recognition from single two-dimensional images
 Artificial Intelligence
, 1987
Cited by 384 (7 self)
A computer vision system has been implemented that can recognize three-dimensional objects from unknown viewpoints in single gray-scale images. Unlike most other approaches, the recognition is accomplished without any attempt to reconstruct depth information bottom-up from the visual input. Instead, three other mechanisms are used that can bridge the gap between the two-dimensional image and knowledge of three-dimensional objects. First, a process of perceptual organization is used to form groupings and structures in the image that are likely to be invariant over a wide range of viewpoints. Second, a probabilistic ranking method is used to reduce the size of the search space during model-based matching. Finally, a process of spatial correspondence brings the projections of three-dimensional models into direct correspondence with the image by solving for unknown viewpoint and model parameters. A high level of robustness in the presence of occlusion and missing data can be achieved through full application of a viewpoint consistency constraint. It is argued that similar mechanisms and constraints form the basis for recognition in human vision. This paper has been published in Artificial Intelligence, 31, 3 (March 1987), pp. 355–395.
A Survey of Shape Analysis Techniques
 Pattern Recognition
, 1998
Cited by 200 (2 self)
This paper provides a review of shape analysis methods. Shape analysis methods play an important role in systems for object recognition, matching, registration, and analysis. Research in shape analysis has been motivated, in part, by studies of human visual form perception systems.
Stochastic Completion Fields: A Neural Model of Illusory Contour Shape and Salience
 Neural Computation
, 1995
Cited by 177 (14 self)
We describe an algorithm- and representation-level theory of illusory contour shape and salience. Unlike previous theories, our model is derived from a single assumption, namely that the prior probability distribution of boundary completion shape can be modeled by a random walk in a lattice whose points are positions and orientations in the image plane (i.e., the space which one can reasonably assume is represented by neurons of the mammalian visual cortex). Our model does not employ numerical relaxation or other explicit minimization, but instead relies on the fact that the probability that a particle following a random walk will pass through a given position and orientation on a path joining two boundary fragments can be computed directly as the product of two vector-field convolutions. We show that for the random walk we define, the maximum likelihood paths are curves of least energy, that is, on average, random walks follow paths commonly assumed to model the shape of illusory co...
Detecting Salient Blob-Like Image Structures with a Scale-Space Primal Sketch: A Method for Focus-of-Attention
 INT. J. COMP. VISION
, 1993
Cited by 151 (14 self)
This article presents: (i) a multi-scale representation of grey-level shape called the scale-space primal sketch, which makes explicit both features in scale-space and the relations between structures at different scales, (ii) a methodology for extracting significant blob-like image structures from this representation, and (iii) applications to edge detection, histogram analysis, and junction classification demonstrating how the proposed method can be used for guiding later-stage visual processes. The representation gives a qualitative description of image structure, which allows for detection of stable scales and associated regions of interest in a solely bottom-up, data-driven way. In other words, it generates coarse segmentation cues, and can hence be seen as preceding further processing, which can then be properly tuned. It is argued that once such information is available, many other processing tasks can become much simpler. Experiments on real imagery demonstrate that the proposed theory gives intuitive results.
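The scale-space machinery behind such blob detectors can be illustrated in one dimension: smooth the signal with Gaussians of increasing variance t, and look for strong extrema of the scale-normalized second derivative. A minimal pure-Python sketch, following the standard scale-space convention (the function names and the t·d²/dx² normalization are common practice, not this paper's exact formulation):

```python
import math

def gaussian_smooth(signal, t):
    """Smooth a 1-D signal with a sampled Gaussian of variance t,
    replicating the border samples."""
    radius = max(1, int(3 * math.sqrt(t)))
    kernel = [math.exp(-k * k / (2.0 * t)) for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(signal) - 1)  # replicate borders
            acc += kernel[k + radius] * signal[j]
        out.append(acc)
    return out

def blob_response(signal, t):
    """Scale-normalized second derivative t * d2/dx2 of the smoothed signal;
    strongly negative responses mark bright blob centres at scale t."""
    s = gaussian_smooth(signal, t)
    n = len(s)
    return [t * (s[max(i - 1, 0)] - 2 * s[i] + s[min(i + 1, n - 1)])
            for i in range(n)]
```

Scanning `blob_response` over a range of scales t and keeping (position, scale) pairs where the response is extremal in both variables is, in spirit, how stable scales and regions of interest are selected bottom-up.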
Determining the Similarity of Deformable Shapes
 Vision Research
, 1995
Cited by 105 (7 self)
We study how to measure the degree of similarity between two image contours. We propose an approach for comparing contours that takes into account deformations in object shape, the articulation of parts, and variations in the shape and size of portions of objects. Our method uses dynamic programming to compute the minimum cost of bringing one shape into the other via local deformations. Using this as a starting point, we investigate the properties that such a cost function should have to model human performance and to perform usefully in a computer vision system. We suggest novel conditions on this cost function that help capture the part-based nature of objects without requiring any explicit decomposition of shapes into their parts. We then suggest several possible cost functions based on different physical models of contours, and describe experiments with these costs.

1 Introduction

Detecting similarity is a key tool in interpreting images. In this paper we develop a measure of s...
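The dynamic-programming idea in this abstract — compute the minimum cost of deforming one contour into the other via local operations — can be sketched with an edit-distance-style recurrence over, say, sequences of turning angles. A hypothetical sketch (the angle representation, cost terms, and gap penalty are illustrative assumptions, not the authors' cost function):

```python
def contour_dp_cost(a, b, gap=1.0):
    """Minimum deformation cost between two contours, each given as a
    sequence of turning angles, via an edit-distance-style DP.
    Matching two samples costs their absolute angle difference;
    skipping a sample on either contour costs `gap`."""
    n, m = len(a), len(b)
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i * gap
    for j in range(1, m + 1):
        cost[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1][j - 1] + abs(a[i - 1] - b[j - 1])
            cost[i][j] = min(match,
                             cost[i - 1][j] + gap,   # skip a sample of a
                             cost[i][j - 1] + gap)   # skip a sample of b
    return cost[n][m]
```

The recurrence runs in O(nm) time; identical contours cost 0, and the gap penalty controls how cheaply the matching may stretch one contour relative to the other.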
A Computational Model for Visual Selection
 NEURAL COMPUTATION
, 1999
Cited by 95 (14 self)
We propose a computational model for detecting and localizing instances from an object class in static grey-level images. We divide detection into visual selection and final classification, concentrating on the former: drastically reducing the number of candidate regions which require further, usually more intensive, processing, but with a minimum of computation and missed detections. Bottom-up processing is based on local groupings of edge fragments constrained by loose geometrical relationships. They have no a priori semantic or geometric interpretation. The role of training is to select special groupings which are moderately likely at certain places on the object but rare in the background. We show that the statistics in both populations are stable. The candidate regions are those which contain global arrangements of several local groupings. Whereas our model was not conceived to explain brain functions, it does cohere with evidence about the functions of neurons in V1 and V2, such ...
Modelling and interpretation of architecture from several images
Cited by 83 (6 self)
The modelling of 3-dimensional (3D) environments has become a requirement for many applications in engineering design, virtual reality, visualisation and entertainment. However, the scale and complexity demanded from such models has risen to the point where the acquisition of 3D models can require a vast amount of specialist time and equipment. Because of this, much research has been undertaken in the computer vision community into automating all or part of the process of acquiring a 3D model from a sequence of images. This thesis focuses specifically on the automatic acquisition of architectural models from short image sequences. An architectural model is defined as a set of planes corresponding to walls which contain a variety of labelled primitives such as doors and windows. As well as a label defining its type, each primitive contains parameters defining its shape and texture. The key advantage of this representation is that the model defines not only geometry and texture, but also an interpretation of the scene. This is crucial as it enables reasoning about the scene; for instance, structure and texture can be inferred in areas of the model which are unseen in any
2D-shape analysis using conformal mapping
 Proc. IEEE Conf. Computer Vision and Pattern Recognition
, 2004
Cited by 61 (6 self)
The study of 2D shapes and their similarities is a central problem in the field of vision. It arises in particular from the task of classifying and recognizing objects from their observed silhouette. Defining natural distances between 2D shapes creates a metric space of shapes, whose mathematical structure is inherently relevant to the classification task. One intriguing metric space comes from using conformal mappings of 2D shapes into each other, via the theory of Teichmüller spaces. In this space every simple closed curve in the plane (a “shape”) is represented by a “fingerprint”, which is a diffeomorphism of the unit circle to itself (a differentiable and invertible, periodic function). More precisely, every shape defines a unique equivalence class of such diffeomorphisms up to right multiplication by a Möbius map. The fingerprint does not change if the shape is varied by translations and scaling, and any such equivalence class comes from some shape. This coset space, equipped with the infinitesimal Weil-Petersson (WP) Riemannian norm, is a metric space. In this space, it appears very likely to be true that the shortest path between any two shapes is unique, and is given by a geodesic connecting them. Their distance from each other is given by integrating the WP norm along that geodesic. In this paper we concentrate on solving the “welding” problem of “sewing” together conformally the interior and exterior of the unit circle, glued on the unit circle by a given diffeomorphism, to obtain the unique 2D shape associated with this diffeomorphism. This will allow us to go back and forth between 2D shapes and their representing diffeomorphisms in this “space of shapes”.
Superquadrics for Segmenting and Modeling Range Data
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Cited by 59 (4 self)
We present a novel approach to reliable and efficient recovery of part descriptions in terms of superquadric models from range data. We show that superquadrics can directly be recovered from unsegmented data, thus avoiding any pre-segmentation steps (e.g., in terms of surfaces). The approach is based on the recover-and-select paradigm [10]. We present several experiments on real and synthetic range images, where we demonstrate the stability of the results with respect to viewpoint and noise.
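Superquadrics are usually defined through an inside-outside function F: F = 1 on the surface, F < 1 inside, and F > 1 outside, and fitting procedures minimize a residual of F against the range points. A minimal sketch of the standard form (parameter names a1–a3 for the semi-axes and e1, e2 for the shape exponents follow common convention; this is not the paper's recovery code):

```python
def superquadric_f(x, y, z, a1, a2, a3, e1, e2):
    """Standard superquadric inside-outside function:
    F == 1 on the surface, F < 1 inside, F > 1 outside.
    a1..a3 are the semi-axis lengths; e1, e2 control squareness
    (e1 = e2 = 1 gives an ellipsoid)."""
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# With e1 = e2 = 1 and unit semi-axes, F reduces to x^2 + y^2 + z^2,
# i.e. the unit sphere: (1, 0, 0) lies exactly on the surface.
print(superquadric_f(1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0))  # 1.0
```

Recover-and-select style fitting seeds many small superquadrics in the raw range data, grows each by re-estimating its parameters against nearby points via this function, and keeps only the subset of models that best explains the data.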