Canonic Representations for the Geometries of Multiple Projective Views
 Computer Vision and Image Understanding
, 1994
Abstract

Cited by 178 (8 self)
This work is in the context of motion and stereo analysis. It presents a new unified representation that is useful when dealing with multiple views in the case of uncalibrated cameras. Several levels of information may be considered, depending on the availability of information. Among other things, an algebraic description of the epipolar geometry of N views is introduced, as well as a framework for camera self-calibration, calibration updating, and structure from motion in an image sequence taken by a camera that is zooming and moving at the same time. We show how a special decomposition of a set of two or three general projection matrices, called canonical, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and completely capture the properties of each level of description considered: Euclidean (in the context of calibration and in the context of structure from motion, which we distinguish clearly), affine, and projective, which we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established. The theory is illustrated by tutorial examples with real images.
Video Compass
 In Proc. ECCV
, 2002
Abstract

Cited by 82 (6 self)
Abstract. In this paper we describe a flexible approach for determining the relative orientation of the camera with respect to the scene. The main premise of the approach is the fact that in man-made environments the majority of lines are aligned with the principal orthogonal directions of the world coordinate frame. We exploit this observation for efficient detection and estimation of vanishing points, which provide strong constraints on camera parameters and on the relative orientation of the camera with respect to the scene. By combining efficient image processing techniques in the line detection and initialization stage, we demonstrate that simultaneous grouping and estimation of vanishing directions can be achieved in the absence of the internal parameters of the camera. Constraints between vanishing points are then used for partial calibration and relative rotation estimation. The algorithm has been tested in a variety of indoor and outdoor scenes, and its efficiency and automation make it amenable to implementation on robotic platforms. Key words: vanishing point estimation, relative orientation, calibration using vanishing points, vision-guided mobile and aerial robots.
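The basic geometric fact underlying vanishing-point estimation from image lines can be sketched in a few lines of code. Working in homogeneous coordinates, the line through two image points is their cross product, and the intersection of two lines is again a cross product; two image lines that project parallel scene lines intersect at the vanishing point. This is a minimal illustration of the geometry only, not the paper's grouping-and-estimation algorithm.

```python
def cross(a, b):
    """Cross product of two 3-vectors (homogeneous points or lines)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points p = (x, y), q = (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines; if the lines image parallel
    scene lines, this is their vanishing point. Returns (x, y)."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

# Two image lines, y = x and y = 0.5 x + 1, meet at (2, 2).
l1 = line_through((0, 0), (1, 1))
l2 = line_through((0, 1), (2, 2))
v = vanishing_point(l1, l2)
```

In practice many noisy line segments vote for each vanishing point, so a robust or least-squares estimate over all pairwise intersections replaces this single cross product.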
A new approach for vanishing point detection in architectural environments
 In Proc. 11th British Machine Vision Conference
, 2000
Abstract

Cited by 76 (1 self)
A man-made environment is characterized by many parallel lines and orthogonal edges. In this article, a new method for detecting the three mutually orthogonal directions of such an environment is presented. Since real-time performance is not necessary for architectural applications such as building reconstruction, a computationally more intensive approach was chosen. On the other hand, our approach is more rigorous than existing techniques, since the information given by the condition of three mutually orthogonal directions in the scene is identified and incorporated. Since knowledge about the camera geometry can be deduced from the vanishing points of three mutually orthogonal directions, we use this knowledge to reject falsely detected vanishing points. Results are presented from interpreting outdoor scenes of buildings. Key words: vanishing points, vanishing lines, geometric constraints, architecture, camera calibration.
Vanishing point calculation as a statistical inference on the unit sphere
 In Proc. ICCV
, 1990
Abstract

Cited by 66 (7 self)
In this paper vanishing point computation is characterized as a statistical estimation problem on the unit sphere; in particular, as the estimation of the polar axis of an equatorial distribution. This framework facilitates the construction of confidence regions for 3D line orientation.
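The sphere-based formulation can be sketched as follows, assuming each image line segment has already been mapped to the unit normal of its interpretation plane (the plane through the segment and the camera center). Normals of lines sharing a 3D direction lie on a great circle whose pole is that direction, and every pair of great-circle points yields the pole (up to sign) as a cross product. The pairwise-averaging estimator below is an illustrative least-squares-style stand-in, not necessarily the exact statistical procedure of the paper.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def polar_axis(normals):
    """Estimate the polar axis of an equatorial distribution on the unit
    sphere. Each pair of equatorial points gives the axis (up to sign) as
    their cross product; align signs with the first estimate and average."""
    ref = None
    acc = [0.0, 0.0, 0.0]
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            c = cross(normals[i], normals[j])
            if ref is None:
                ref = c
            if sum(a * b for a, b in zip(c, ref)) < 0:
                c = tuple(-x for x in c)
            acc = [a + b for a, b in zip(acc, c)]
    return normalize(acc)

# Three normals on the xy great circle; the recovered axis is +/- z.
axis = polar_axis([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.6, 0.8, 0.0)])
```

The confidence regions discussed in the abstract would then be built around this estimated axis from the spread of the normals about their great circle.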
Euclidean constraints for uncalibrated reconstruction
 In Proceedings Fourth International Conference on Computer Vision
, 1993
Abstract

Cited by 39 (4 self)
It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences between these images. But such a reconstruction can only be performed up to a projective transformation of the 3D space. Therefore, constraints have to be put on the reconstructed data in order to get the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. We discuss here the kinds of constraints that have to be added and show how they can be fed into a general framework. Experimental results on real data prove the feasibility, and experiments on simulated data address the accuracy of the results.
Planar Grouping for Automatic Detection of Vanishing Lines and Points
 Image and Vision Computing
, 2000
Abstract

Cited by 34 (1 self)
It is demonstrated that grouping together features which satisfy a geometric relationship can be used both for (automatic) detection and for estimation of vanishing points and lines. We describe the geometry of three commonly occurring types of geometric grouping and present efficient grouping algorithms which exploit these geometries. The three types of grouping are: (1) a family of equally spaced coplanar parallel lines; (2) a planar pattern obtained by repeating some element by translation in the plane; and (3) a set of elements arranged in a regular planar grid. Examples of automatically computed groupings, together with their vanishing points and lines, are given for a number of real images. Key words: grouping, vanishing point and line detection, repetition. 1 Introduction. Suppose a plane in the world is imaged by a perspective camera. Then the line at infinity of the plane is projected to a line in the image, the vanishing line. The objective of this paper is to automatically e...
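The core consensus test behind this kind of grouping can be sketched simply: an image line belongs to a group if it passes close to the group's candidate vanishing point. The representation (lines as homogeneous triples ax + by + c = 0) and the pixel tolerance below are illustrative assumptions, not the paper's actual grouping algorithm.

```python
import math

def point_line_distance(v, line):
    """Perpendicular distance (in pixels) from point v = (x, y) to the
    image line ax + by + c = 0."""
    a, b, c = line
    return abs(a * v[0] + b * v[1] + c) / math.hypot(a, b)

def group_by_vanishing_point(lines, v, tol=2.0):
    """Keep the lines consistent with candidate vanishing point v, i.e.
    those passing within tol pixels of it."""
    return [l for l in lines if point_line_distance(v, l) <= tol]

# Two lines through (2, 2) -- y = x and x - 2y + 2 = 0 -- form a group;
# the vertical line x = 10 does not.
lines = [(-1.0, 1.0, 0.0), (1.0, -2.0, 2.0), (1.0, 0.0, -10.0)]
group = group_by_vanishing_point(lines, (2.0, 2.0))
```

A full detector would generate candidate vanishing points (e.g. from pairwise line intersections), score each by the size of its consensus group, and re-estimate the point from its supporting lines.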
Accurate Projective Reconstruction
, 1993
Abstract

Cited by 26 (2 self)
It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences. But such a reconstruction can only be computed up to a projective transformation of the 3D space. Therefore, constraints have to be added to the reconstructed data in order to get the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. We first discuss the types of constraints that have to be added, then we show how they can be fed into a general framework. Experiments prove that the accuracy needed for industrial applications is reachable when measurements in the image have subpixel accuracy. Therefore, we show how a real camera can be mapped into an accurate projective camera and how accurate point detection improves the reconstruction results. 1 Introduction. One of the principal goals of research in computer vision is to enable machines to per...
A Survey of Motion-Parallax-Based 3D Reconstruction Algorithms
 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS
, 2004
Abstract

Cited by 15 (4 self)
The task of recovering three-dimensional (3D) geometry from two-dimensional views of a scene is called 3D reconstruction. It is an extremely active research area in computer vision. There is a large body of 3D reconstruction algorithms available in the literature. These algorithms are often designed to provide different trade-offs between speed, accuracy, and practicality. In addition, the outputs of the various algorithms can be quite different: for example, some algorithms produce only a sparse 3D reconstruction, while others are able to output a dense reconstruction. The selection of the appropriate 3D reconstruction algorithm relies heavily on the intended application as well as the available resources. The goal of this paper is to review some of the commonly used motion-parallax-based 3D reconstruction techniques and make clear the assumptions under which they are designed. To do so efficiently, we classify the reviewed reconstruction algorithms into two large categories depending on whether a prior calibration ...