Results 1–10 of 185
Distinctive Image Features from Scale-Invariant Keypoints
, 2003
Abstract

Cited by 5079 (20 self)
This paper presents a method for extracting distinctive invariant features from images, which can be used to perform reliable matching between different images of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, addition of noise, change in 3D viewpoint, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through a least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
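The matching step sketched in this abstract (fast nearest-neighbor search plus a distinctiveness check) can be illustrated with a minimal ratio-test matcher. The function name, the brute-force search, and the 0.8 threshold are our illustrative choices, not the paper's exact procedure, which uses an approximate best-bin-first search over a large database:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping only matches whose nearest/second-nearest distance ratio is
    below a threshold (a ratio test for distinctiveness)."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every candidate.
        dists = np.linalg.norm(desc_b - d, axis=1)
        nn, nn2 = np.argsort(dists)[:2]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[nn] < ratio * dists[nn2]:
            matches.append((i, int(nn)))
    return matches
```

A correspondence is kept only when the nearest descriptor is markedly closer than the second-nearest, which is what makes single-feature matches against a large database reliable.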
Determining the Epipolar Geometry and its Uncertainty: A Review
 International Journal of Computer Vision
, 1998
Abstract

Cited by 319 (7 self)
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.
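As a concrete sketch of what estimating the fundamental matrix involves, here is a minimal linear eight-point estimator with Hartley normalization and the rank-2 constraint enforced. This is our own illustrative implementation, not one of the specific techniques reviewed in the paper:

```python
import numpy as np

def fundamental_from_points(x1, x2):
    """Eight-point estimate of the fundamental matrix F from N >= 8
    correspondences satisfying x2^T F x1 = 0. x1, x2 are Nx2 arrays of
    image coordinates; normalization improves numerical conditioning."""
    def normalize(pts):
        # Translate to the centroid and scale to mean distance sqrt(2).
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return (T @ homog.T).T, T

    p1, T1 = normalize(np.asarray(x1, dtype=float))
    p2, T2 = normalize(np.asarray(x2, dtype=float))
    # Each correspondence gives one linear equation in the 9 entries of F.
    A = np.array([np.outer(b, a).ravel() for a, b in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 (singularity) constraint mentioned in the abstract.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization
```

With noise-free correspondences the recovered F satisfies the epipolar constraint to machine precision; the review's subject is precisely how such estimates degrade and how their uncertainty can be characterized under noise.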
View morphing
 In Computer Graphics (SIGGRAPH '96)
, 1996
Abstract

Cited by 232 (20 self)
Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called view morphing, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations.
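The three-stage pipeline described above (prewarp, interpolate, postwarp) can be sketched on point correspondences. The homographies are assumed to be given, and the interface is hypothetical; the paper's contribution is how to choose the prewarps so that the linear interpolation is a physically valid view:

```python
import numpy as np

def view_morph(pts0, pts1, H0, H1, Hs, s):
    """Sketch of the view-morphing pipeline on point correspondences:
    prewarp each view's points by its rectifying homography, linearly
    interpolate the prewarped positions (the morph step), then postwarp
    by Hs to the desired framing."""
    def apply_h(H, pts):
        ph = np.column_stack([pts, np.ones(len(pts))])  # homogeneous coords
        q = (H @ ph.T).T
        return q[:, :2] / q[:, 2:]
    warped0 = apply_h(H0, np.asarray(pts0, dtype=float))
    warped1 = apply_h(H1, np.asarray(pts1, dtype=float))
    mid = (1.0 - s) * warped0 + s * warped1  # morph between prewarped views
    return apply_h(Hs, mid)                  # postwarp
```

The same structure applies to whole images, with the interpolation carried out by a conventional image morph between the prewarped images.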
Canonic Representations for the Geometries of Multiple Projective Views
 Computer Vision and Image Understanding
, 1994
Abstract

Cited by 178 (8 self)
This work is in the context of motion and stereo analysis. It presents a new unified representation which will be useful when dealing with multiple views in the case of uncalibrated cameras. Several levels of description may be considered, depending on the information available. Among other things, an algebraic description of the epipolar geometry of N views is introduced, as well as a framework for camera self-calibration, calibration updating, and structure from motion in an image sequence taken by a camera which is zooming and moving at the same time. We show how a special decomposition of a set of two or three general projection matrices, called canonical, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and capture completely the properties of each level of description considered: Euclidean (in the context of calibration, and in the context of structure from motion, which we distinguish clearly), affine, and projective, which we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established. The theory is illustrated by tutorial examples with real images.
Wide Baseline Stereo Matching based on Local, Affinely Invariant Regions
 In Proc. BMVC
, 2000
Abstract

Cited by 167 (5 self)
'Invariant regions' are image patches that automatically deform with changing viewpoint so as to keep covering identical physical parts of a scene. Such regions are then described by a set of invariant features, which makes it relatively easy to match them between views and under changing illumination. In previous work, we have presented invariant regions that are based on a combination of corners and edges. The application discussed then was image database retrieval. Here, an alternative method for extracting (affinely) invariant regions is given, that does not depend on the presence of edges or corners in the image but is purely intensity-based. Also, we demonstrate the use of such regions for another application, which is wide baseline stereo matching. As a matter of fact, the goal is to build an opportunistic system that exploits several types of invariant regions as it sees fit. This yields more correspondences and a system that can deal with a wider range of images. To increase t...
On the geometry and algebra of the point and line correspondences between N images
, 1995
Abstract

Cited by 149 (6 self)
We explore the geometric and algebraic relations that exist between correspondences of points and lines in an arbitrary number of images. We propose to use the formalism of the Grassmann-Cayley algebra as the simplest way to make both geometric and algebraic statements in a very synthetic and effective way (i.e. allowing actual computation if needed). We have a fairly complete picture of the situation in the case of points: there are only three types of algebraic relations which are satisfied by the coordinates of the images of a 3D point: bilinear relations arising when we consider pairs of images among the N and which are the well-known epipolar constraints, trilinear relations arising when we consider triples of images among the N, and quadrilinear relations arising when we consider four-tuples of images among the N. In the case of lines, we show how the traditional perspective projection equation can be suitably generalized and that in the case of three images there exist two in...
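For concreteness, the bilinear relation referred to above is the familiar epipolar constraint; in our notation (not necessarily the paper's):

```latex
% Bilinear (epipolar) constraint between images $i$ and $j$ of the $N$ views:
% $x_i$, $x_j$ are the homogeneous image points of the same 3D point,
% and $F_{ij}$ is the fundamental matrix of the pair.
x_j^{\top} F_{ij}\, x_i = 0
```

Triples of views give the analogous trilinear relations and four-tuples the quadrilinear ones, as the abstract states.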
Robust parameter estimation in computer vision
 SIAM Review
, 1999
Abstract

Cited by 129 (10 self)
Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, these techniques should effectively ignore the outliers and the measurements from other populations, treating both as outliers, when estimating the parameters of a single population. Two frequently used techniques are least-median of
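The least-median-of-squares idea the abstract introduces can be sketched for line fitting: repeatedly fit a line to a random minimal sample and keep the model whose median squared residual over all points is smallest. The function name, sampling budget, and vertical-residual choice are our illustrative simplifications:

```python
import random

def lmeds_line(points, trials=200, seed=0):
    """Least-median-of-squares line fit (a sketch). Because the score is
    the median residual rather than the sum, the estimate tolerates up
    to roughly 50% outliers, unlike ordinary least squares."""
    rng = random.Random(seed)
    best, best_med = None, float('inf')
    for _ in range(trials):
        (x0, y0), (x1, y1) = rng.sample(points, 2)  # minimal sample
        if x0 == x1:
            continue  # skip vertical pairs in this simple y = a*x + b model
        a = (y1 - y0) / (x1 - x0)   # slope
        b = y0 - a * x0             # intercept
        r2 = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = r2[len(r2) // 2]      # median squared residual
        if med < best_med:
            best_med, best = med, (a, b)
    return best
```

On data with a minority of gross outliers, any sample consisting of two inliers scores a near-zero median and wins, so the gross errors have no influence on the returned line.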
The Visual Motion of Curves and Surfaces
, 1998
Abstract

Cited by 102 (16 self)
This paper addresses the problem of recovering the 3D shape and motion of curves and surfaces from image sequences of apparent contours. For known viewer motion the visible surfaces can then be reconstructed by exploiting a spatiotemporal parametrization of the apparent contours and contour generators under viewer motion. A natural parametrization exploits the contour generators and the epipolar geometry between successive viewpoints. The epipolar parametrization (Cipolla & Blake 1992) leads to simplified expressions for the recovery of depth and surface curvatures from image velocities and accelerations and known viewer motion. The parametrization is, however, degenerate when the apparent contour is singular since the ray is tangent to the contour generator (Koenderink & Van Doorn 1976) and at frontier points (Giblin & Weiss 1994) when the epipolar plane is a tangent plane to the surface. At these isolated points the epipolar parametrization can no longer be used to recover the local surface geometry. This paper reviews the epipolar parametrization and shows how the degenerate cases can be used to recover surface geometry and unknown viewer motion from apparent contours of curved surfaces. Practical implementations are outlined.
Self-Calibration of a Moving Camera From Point Correspondences and Fundamental Matrices
, 1997
Abstract

Cited by 99 (2 self)
We address the problem of estimating three-dimensional motion, and structure from motion, with an uncalibrated moving camera. We show that point correspondences between three images, and the fundamental matrices computed from these point correspondences, are sufficient to recover the internal orientation of the camera (its calibration), the motion parameters, and to compute coherent perspective projection matrices which enable us to reconstruct 3D structure up to a similarity. In contrast with other methods, no calibration object with a known 3D shape is needed, and no limitations are put upon the unknown motions to be performed or the parameters to be recovered, as long as they define a projective camera. The theory of the method, which is based on the constraint that the observed points are part of a static scene, thus allowing us to link the intrinsic parameters and the fundamental matrix via the absolute conic, is first detailed. Several algorithms are then presented, and their ...