Epipolar Geometry for Central Catadioptric Cameras (2002)

by T Svoboda, T Pajdla
Venue: IJCV
Results 1 - 10 of 85

Geometric Properties of Central Catadioptric Line Images and their Application in Calibration

by João P. Barreto, Helder Araujo - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Cited by 80 (9 self)
Abstract—In central catadioptric systems, lines in a scene are projected to conic curves in the image. This work studies the geometry of the central catadioptric projection of lines and its use in calibration. It is shown that the conic curves where the lines are mapped possess several projective invariant properties. From these properties, it follows that any central catadioptric system can be fully calibrated from an image of three or more lines. The image of the absolute conic, the relative pose between the camera and the mirror, and the shape of the reflective surface can be recovered using a geometric construction based on the conic loci where the lines are projected. This result is valid for any central catadioptric system and generalizes previous results for paracatadioptric sensors. Moreover, it is proven that systems with a hyperbolic/elliptical mirror can be calibrated from the image of two lines. If both the shape and the pose of the mirror are known, then two line images are enough to determine the image of the absolute conic encoding the camera’s intrinsic parameters. The sensitivity to errors is evaluated and the approach is used to calibrate a real camera. Index Terms—Catadioptric, omnidirectional vision, projective geometry, lines, calibration. 1

Citation Context

... combine two important features: a wide field of view and a single projection center. The nonlinear functions which map points in the 3D world to points in the central catadioptric image are found in [3]. This work also shows that the nonlinear mapping results in a line in the scene being projected into a conic curve. In [4], Geyer and Daniilidis propose a unifying theory for all central catadioptric...
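The unifying theory referenced in this excerpt models every central catadioptric camera as projection through a unit sphere. A minimal sketch of that idea follows; the numeric values are invented for illustration and this is not code from the cited papers:

```python
import math

def central_catadioptric_project(X, Y, Z, xi):
    """Unified sphere model: map a 3D point onto the unit sphere,
    then project perspectively from a point offset by xi along the
    optical axis. xi = 1 models a parabolic mirror, 0 < xi < 1 a
    hyperbolic/elliptical mirror, and xi = 0 a plain pinhole camera."""
    norm = math.sqrt(X * X + Y * Y + Z * Z)
    xs, ys, zs = X / norm, Y / norm, Z / norm  # point on the unit sphere
    return xs / (zs + xi), ys / (zs + xi)      # perspective step

# With xi = 0 the model reduces to ordinary perspective projection:
print(central_catadioptric_project(1.0, 2.0, 4.0, 0.0))  # ≈ (0.25, 0.5)
```

Scene lines map through this nonlinear model to conic curves in the image, which is the projective property the calibration method above exploits.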

Structure from Motion with Wide Circular Field of View Cameras

by Branislav Micusík, Tomás Pajdla - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
Cited by 65 (7 self)
Abstract—This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., the cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors. Index Terms—Omnidirectional vision, fish-eye lens, catadioptric camera, autocalibration. 1

Citation Context

...y those that are necessary to demonstrate our autocalibration method. 3.3 Epipolar Geometry The epipolar geometry can be formulated for central omnidirectional cameras, i.e., for catadioptric [43] and for dioptric (with fish-eye lenses) omnidirectional cameras. The epipolar constraint for vectors p″1 and p″2 reads as...
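The epipolar constraint mentioned in this excerpt applies to 3D ray direction vectors rather than planar image points: corresponding rays satisfy p2ᵀ E p1 = 0 with E = [t]× R. A self-contained sketch with toy values (not the paper's code):

```python
# Toy check of the epipolar constraint for ray-based (omnidirectional)
# cameras. Rotation R, translation t, and the world point X are made up.

def cross_matrix(t):
    """Skew-symmetric matrix [t]x such that [t]x v = t x v."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # relative rotation
t = [1.0, 0.0, 0.0]                                      # relative translation
E = matmul(cross_matrix(t), R)                           # essential matrix

X = [2.0, 1.0, 3.0]                   # a world point
p1 = X                                # ray direction in camera 1 (at origin)
p2 = [X[i] - t[i] for i in range(3)]  # ray direction in camera 2 (at t)

print(dot(p2, matvec(E, p1)))  # 0.0
```

The constraint holding for any X is what lets a RANSAC loop score tentative correspondences, as the abstract above describes.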

A flexible technique for accurate omnidirectional camera calibration and structure from motion

by Davide Scaramuzza, Agostino Martinelli - In Proc. of IEEE Intl. Conf. of Vision Systems, 2006
Cited by 60 (12 self)
In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor a specific model of the omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic camera having a field of view greater than 200° in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-camera model in a structure from motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible. 1.
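The Taylor-series projection model described in this abstract maps a pixel at radial distance rho from the image center to a 3D ray whose axial component is a polynomial g(rho). A toy back-projection sketch; the coefficients below are invented for illustration, not calibration results:

```python
import math

def pixel_to_ray(u, v, coeffs):
    """Back-project pixel (u, v) to the ray (u, v, g(rho)), where
    g(rho) = a0 + a1*rho + a2*rho^2 + ... with calibrated coefficients."""
    rho = math.hypot(u, v)
    g = sum(a * rho ** i for i, a in enumerate(coeffs))
    return (u, v, g)

# Hypothetical coefficients; a1 is typically constrained to zero.
coeffs = [-250.0, 0.0, 0.002]
print(pixel_to_ray(100.0, 0.0, coeffs))  # z component ≈ -230
```

Because g is generic, the same code serves fish-eye lenses and catadioptric sensors alike, which is the sensor-independence the paper claims.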

Estimation of omnidirectional camera model from epipolar geometry

by Branislav Micusik, Tomas Pajdla, 2003
Cited by 52 (9 self)
Abstract not found

A Toolbox for Easily Calibrating Omnidirectional Cameras

by Davide Scaramuzza, Agostino Martinelli - In Proc. of the IEEE International Conference on Intelligent Systems, IROS06, 2006
Cited by 44 (3 self)
Abstract—In this paper, we present a novel technique for calibrating central omnidirectional cameras. The proposed procedure is very fast and completely automatic, as the user is only asked to collect a few images of a checkerboard and click on its corner points. In contrast with previous approaches, this technique does not use any specific model of the omnidirectional sensor. It only assumes that the imaging function can be described by a Taylor series expansion whose coefficients are estimated by solving a four-step least-squares linear minimization problem, followed by a non-linear refinement based on the maximum likelihood criterion. To validate the proposed technique, and evaluate its performance, we apply the calibration on both simulated and real data. Moreover, we show the calibration accuracy by projecting the color information of a calibrated camera on real 3D points extracted by a 3D SICK laser range finder. Finally, we provide a Toolbox which implements the proposed calibration procedure. Index Terms—Catadioptric, omnidirectional, camera, calibration, toolbox.

Citation Context

... expressed in pixel coordinates. (b) and (c) are related by an affine transformation. Function f can have various forms related to the mirror or the lens construction. These functions can be found in [10, 11, 12]. Unlike using a specific model for the sensor in use, we choose to apply a generalized parametric model of f, which is suitable to different kinds of sensors. The reason for doing so is that we wan...

Extrinsic Self Calibration of a Camera and a 3D Laser Range Finder from Natural Scenes

by D. Scaramuzza, A. Harati, R. Siegwart, 2007
Cited by 37 (1 self)
In this paper, we describe a new approach for the extrinsic calibration of a camera with a 3D laser range finder that can be done on the fly. This approach does not require any calibration object. Only a few point correspondences (at least 4) are used, which are manually selected by the user from a scene viewed by the two sensors. The proposed method relies on a novel technique to visualize the range information obtained from a 3D laser scanner. This technique converts the visually ambiguous 3D range information into a 2D map where natural features of a scene are highlighted. These features represent depth discontinuities and direction changes in the range image. We show that by enhancing these features the user can easily match them against points in the camera image. Therefore, visually identifying laser-camera correspondences becomes as easy as image pairing. Once point correspondences are given, extrinsic calibration is done using the well-known PnP algorithm followed by a non-linear refinement process. We show the performance of our approach through experimental results. In these experiments, we use an omnidirectional camera. The implication of this method is important because it brings 3D computer vision systems out of the laboratory and into practical use.
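The highlighted-feature map described above marks depth discontinuities in the range data. The idea can be illustrated on a single scan line; the threshold and range readings below are invented:

```python
# Toy version of a depth-discontinuity map: real scans form a 2D grid,
# but one scan line is enough to show the principle.
def discontinuity_map(ranges, threshold=0.5):
    """Mark positions where the range jumps by more than `threshold`."""
    edges = [0.0] * len(ranges)
    for i in range(1, len(ranges)):
        jump = abs(ranges[i] - ranges[i - 1])
        if jump > threshold:
            edges[i] = jump
    return edges

scan = [2.0, 2.1, 2.1, 5.0, 5.1, 5.0, 2.2]   # near wall, gap, near wall
print([round(e, 1) for e in discontinuity_map(scan)])
# [0.0, 0.0, 0.0, 2.9, 0.0, 0.0, 2.8]
```

The two nonzero entries mark the edges of the gap, exactly the kind of feature a user would also see in the camera image when picking correspondences.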

Image-based visual servoing for nonholonomic mobile robots using epipolar geometry

by Gian Luca Mariottini, Domenico Prattichizzo - IEEE Transactions on Robotics, 2007
Cited by 36 (3 self)
Abstract—We present an image-based visual servoing strategy for driving a nonholonomic mobile robot equipped with a pinhole camera toward a desired configuration. The proposed approach, which exploits the epipolar geometry defined by the current and desired camera views, does not need any knowledge of the 3-D scene geometry. The control scheme is divided into two steps. In the first, using an approximate input–output linearizing feedback, the epipoles are zeroed so as to align the robot with the goal. Feature points are then used in the second translational step to reach the desired configuration. Asymptotic convergence to the desired configuration is proven, both in the calibrated and partially calibrated case. Simulation and experimental results show the effectiveness of the proposed control scheme. Index Terms—Epipolar geometry, image-based visual servoing (IBVS), input–output feedback linearization, nonholonomic mobile robots. I.

Citation Context

...mptotically stable control law. • No metrical knowledge of the 3-D scene geometry is necessary, because epipoles can be computed from corresponding feature points in the current and desired view [9], [25]. • The visibility constraint is automatically satisfied by the adoption of a central catadioptric camera as a vision sensor. The paper is organized as follows. Section II introduces the basics of cen...

Dynamosaics: Video mosaics with non-chronological time

by Alex Rav-acha, Yael Pritch, Dani Lischinski, Shmuel Peleg - In CVPR ’05, 2005
Cited by 29 (7 self)
With the limited field of view of human vision, our perception of most scenes is built over time while our eyes are scanning the scene. In the case of static scenes this process can be modeled by panoramic mosaicing: stitching together images into a panoramic view. Can a dynamic scene, scanned by a video camera, be represented with a dynamic panoramic video even though different regions were visible at different times? In this paper we explore time flow manipulation in video, such as the creation of new videos in which events that occurred at different times are displayed simultaneously. More general changes in the time flow are also possible, which enable re-scheduling the order of dynamic events in the video, for example. We generate dynamic mosaics by sweeping the aligned space-time volume of the input video by a time front surface and generating a sequence of time slices in the process. Various sweeping strategies and different time front evolutions manipulate the time flow in the video, enabling many unexplored and powerful effects, such as panoramic movies. 1
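The time-front sweep described above can be sketched on a tiny space-time volume. For brevity each frame is a single scanline, and a slope of one frame per pixel column is an arbitrary choice:

```python
# Sweep an aligned space-time volume video[t][x] with a slanted time
# front: column x of output frame k is taken from input time t = k + x,
# so different columns of one mosaic frame show different moments.
def sweep_time_front(video, slope=1):
    T, W = len(video), len(video[0])
    frames = []
    for k in range(T - slope * (W - 1)):
        frames.append([video[k + slope * x][x] for x in range(W)])
    return frames

# 5 frames of width 3; pixel value encodes (time, column) as 10*t + x.
video = [[10 * t + x for x in range(3)] for t in range(5)]
print(sweep_time_front(video))
# [[0, 11, 22], [10, 21, 32], [20, 31, 42]]
```

Changing `slope` (or replacing the linear front with a curved surface) is what re-schedules events in time, as the abstract describes.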

Citation Context

...ate a single mosaic image, we use mosaicing to generate a new video having a desired time manipulation. The creation of dynamic panoramic movies can alternatively be done with panoramic video cameras [8, 16] or with multiple video cameras covering the scene [17, 14]. An attempt to incorporate the panoramic view with the dynamic scene using a single video camera was proposed in [5]. The original video fra...

Multi-View Geometry of the Refractive Plane

by Visesh Chari, Peter Sturm
Cited by 18 (2 self)
Transparent refractive objects pose one of the main problems in geometric vision and have remained largely unexplored. The imaging and multi-view geometry of scenes with transparent or translucent objects with refractive properties is less well understood than for opaque objects. The main objective of our work is to analyze the underlying multi-view relationships between cameras when the scene being viewed contains a single refractive planar surface separating two different media. Such a situation might occur in scenarios like underwater photography. Our main result is to show the existence of geometric entities like the fundamental matrix and the homography matrix in such instances. In addition, under special circumstances, we also show how to compute the relative pose between two cameras immersed in one of the two media. 1
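The multi-view geometry above is built on Snell's law at the planar interface. A minimal sketch, using air and water indices as the underwater scenario suggests:

```python
import math

def refract(incident_angle, n1=1.0, n2=1.33):
    """Snell's law at a planar interface: n1*sin(t1) = n2*sin(t2).
    Angles are measured from the surface normal, in radians.
    Returns None on total internal reflection."""
    s = n1 * math.sin(incident_angle) / n2
    if abs(s) > 1.0:
        return None   # total internal reflection
    return math.asin(s)

# A ray entering water at 45 degrees bends toward the normal:
theta2 = refract(math.radians(45.0))
print(round(math.degrees(theta2), 2))  # ≈ 32.12
```

Because the bending depends nonlinearly on the incidence angle, rays from one camera no longer meet in a single effective viewpoint, which is why dedicated multi-view entities must be derived for the refractive plane.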

Citation Context

...ry resulting from opaque scenes is now well understood, for the case of perspective projection. To some extent, even the insertion of reflective elements has been studied in the area of catadioptrics [14, 13]. The phenomenon of refraction, however, has largely been left un-addressed in the vision community. The introduction of refractive elements into a scene changes the multi-view geometry that results f...

Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras

by Amit Agrawal, Yuichi Taguchi, Srikumar Ramalingam
Cited by 18 (8 self)
Abstract. We present a technique for modeling non-central catadioptric cameras consisting of a perspective camera and a rotationally symmetric conic reflector. While previous approaches use a central approximation and/or iterative methods for forward projection, we present an analytical solution. This allows computation of the optical path from a given 3D point to the given viewpoint by solving a 6th-degree forward projection equation for general conic mirrors. For a spherical mirror, the forward projection reduces to a 4th-degree equation, resulting in a closed-form solution. We also derive the forward projection equation for imaging through a refractive sphere (non-central dioptric camera) and show that it is a 10th-degree equation. While central catadioptric cameras lead to conic epipolar curves, we show the existence of a quartic epipolar curve for catadioptric systems using a spherical mirror. The analytical forward projection leads to accurate and fast 3D reconstruction via bundle adjustment. Simulations and real results on single image sparse 3D reconstruction are presented. We demonstrate ∼100× speedup using the analytical solution over iterative forward projection for 3D reconstruction using spherical mirrors. 1

Citation Context

...2‖. Spherical Mirror: Substituting A = 1, B = 0, C = r², where r is the mirror radius, results in a 4th-order forward projection equation u²(r²(d+y) − 2dy²)² − (r² − y²)(r²(d+v) − 2dvy)² = 0. (3) Thus, a closed-form solution for y can be obtained. Notice that for a spherical mirror, the pinhole location is not restricted. For any pinhole location, a new axis... (footnote 1: Matlab code and intermediate steps...)
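Equation (3) is quartic in y, so it can also be solved numerically. A bisection sketch with made-up parameters (r is the mirror radius, d the pinhole distance, (u, v) the point coordinates; the values are chosen only so that a sign change exists in (0, r)):

```python
# Numerically solve the spherical-mirror forward projection equation (3)
# for y by bisection. Parameter values here are illustrative only.
def f(y, r=1.0, d=2.0, u=0.5, v=1.5):
    return (u * u * (r * r * (d + y) - 2 * d * y * y) ** 2
            - (r * r - y * y) * (r * r * (d + v) - 2 * d * v * y) ** 2)

lo, hi = 0.0, 1.0            # f(0) < 0 < f(1), so a root lies in between
for _ in range(80):          # shrink the bracket far below double precision
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(abs(f(root)) < 1e-9)   # True
```

The closed-form quartic solution the paper derives serves the same purpose without iteration, which is where its reported speedup over iterative forward projection comes from.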


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University