Results 1 - 10 of 85
Geometric Properties of Central Catadioptric Line Images and their Application in Calibration
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
"... Abstract—In central catadioptric systems, lines in a scene are projected to conic curves in the image. This work studies the geometry of the central catadioptric projection of lines and its use in calibration. It is shown that the conic curves where the lines are mapped possess several projective in ..."
Cited by 80 (9 self)
Abstract—In central catadioptric systems, lines in a scene are projected to conic curves in the image. This work studies the geometry of the central catadioptric projection of lines and its use in calibration. It is shown that the conic curves onto which the lines are mapped possess several projective invariant properties. From these properties, it follows that any central catadioptric system can be fully calibrated from an image of three or more lines. The image of the absolute conic, the relative pose between the camera and the mirror, and the shape of the reflective surface can be recovered using a geometric construction based on the conic loci where the lines are projected. This result is valid for any central catadioptric system and generalizes previous results for paracatadioptric sensors. Moreover, it is proven that systems with a hyperbolic/elliptical mirror can be calibrated from the image of two lines. If both the shape and the pose of the mirror are known, then two line images are enough to determine the image of the absolute conic encoding the camera’s intrinsic parameters. The sensitivity to errors is evaluated and the approach is used to calibrate a real camera.
Index Terms—Catadioptric, omnidirectional vision, projective geometry, lines, calibration.
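The calibration rests on recovering the conic loci onto which scene lines project. As a minimal illustration of that underlying step (not the paper's full geometric construction), a conic through five or more image points can be fit as the null vector of a design matrix; the unit-circle test data below are purely synthetic:

```python
import numpy as np

def fit_conic(pts):
    """Fit a conic a x^2 + b x y + c y^2 + d x + e y + f = 0 to image
    points: the least-squares null vector of the design matrix."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The smallest right singular vector minimizes ||A w|| with ||w|| = 1.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Sample points on the unit circle x^2 + y^2 = 1 (a special conic).
t = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
w = fit_conic(pts)
# For the unit circle the fit should give a = c, b = d = e = 0, f = -a
# (up to the overall sign ambiguity of the null vector).
```

In the paper, conics fit this way to three or more line images feed the geometric construction that yields the image of the absolute conic.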
Structure from Motion with Wide Circular Field of View Cameras
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
"... Abstract—This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180 field of view and f ..."
Cited by 65 (7 self)
Abstract—This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with a wide circular field of view. We focus on cameras which have more than a 180° field of view and for which the standard perspective camera model is not sufficient, e.g., cameras equipped with circular fish-eye lenses such as the Nikon FC-E8 (183°) or Sigma 8mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by a central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that the epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to obtain correct point correspondences, which can then be used with accurate noncentral models in a bundle adjustment to obtain an accurate 3D scene reconstruction. Noncentral camera models are dealt with, and results are shown for catadioptric cameras with parabolic and spherical mirrors.
Index Terms—Omnidirectional vision, fish-eye lens, catadioptric camera, autocalibration.
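The central computational tool here is a polynomial eigenvalue problem (PEP). As a generic sketch (the authors' matrices, built from point correspondences, are not reproduced here), a quadratic PEP (A0 + λA1 + λ²A2)v = 0 can be reduced to a standard eigenproblem by companion linearization when A2 is invertible:

```python
import numpy as np

def quad_polyeig(A0, A1, A2):
    """Eigenvalues lam of (A0 + lam*A1 + lam^2*A2) v = 0 via companion
    linearization (A2 assumed invertible). The PEP solved in the paper is
    analogous, assembled from a handful of point correspondences."""
    n = A0.shape[0]
    inv = np.linalg.inv(A2)
    # Stacking [v; lam*v] turns the PEP into C z = lam z.
    C = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-inv @ A0, -inv @ A1]])
    return np.linalg.eigvals(C)

# Diagonal toy problem: each coordinate is the scalar quadratic
# lam^2 + a1*lam + a0 = 0, so the eigenvalues are known in closed form.
A0 = np.diag([2.0, 6.0])    # roots of lam^2 - 3 lam + 2: 1, 2
A1 = np.diag([-3.0, -5.0])  # roots of lam^2 - 5 lam + 6: 2, 3
A2 = np.eye(2)
lams = np.sort(np.real(quad_polyeig(A0, A1, A2)))
```

The linearization doubles the dimension but hands the problem to an ordinary eigenvalue solver, which is what makes per-sample RANSAC evaluation cheap.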
A flexible technique for accurate omnidirectional camera calibration and structure from motion
- In Proc. of IEEE Intl. Conf. of Vision Systems, 2006
"... In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowled ..."
Cited by 60 (12 self)
In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor is a specific model of the omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic camera having a field of view greater than 200° in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-camera model in a structure from motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible.
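The assumed model maps a centered pixel (u, v) to the viewing ray (u, v, f(ρ)) with ρ = √(u² + v²) and f a Taylor polynomial. A minimal back-projection sketch follows; the coefficients are illustrative placeholders, not values from the paper (the calibration is what estimates them):

```python
import numpy as np

# Hypothetical polynomial coefficients (a0, a2, a3, a4); the linear
# rho term is conventionally dropped in this family of models.
A = (-200.0, 1.5e-3, -2.0e-7, 4.0e-10)

def pixel_to_ray(u, v, a=A):
    """Back-project a centered pixel to a unit viewing ray under the
    Taylor model z = f(rho) = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4."""
    rho = np.hypot(u, v)
    z = a[0] + a[1] * rho**2 + a[2] * rho**3 + a[3] * rho**4
    ray = np.array([u, v, z])
    return ray / np.linalg.norm(ray)

r = pixel_to_ray(100.0, 0.0)
```

Because f is evaluated per pixel, the same few coefficients describe fish-eye lenses and catadioptric sensors alike, which is the sensor independence the abstract claims.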
A Toolbox for Easily Calibrating Omnidirectional Cameras
- In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS06, 2006
"... Abstract- In this paper, we present a novel technique for calibrating central omnidirectional cameras. The proposed procedure is very fast and completely automatic, as the user is only asked to collect a few images of a checker board, and click on its corner points. In contrast with previous approac ..."
Cited by 44 (3 self)
Abstract—In this paper, we present a novel technique for calibrating central omnidirectional cameras. The proposed procedure is very fast and completely automatic, as the user is only asked to collect a few images of a checkerboard and click on its corner points. In contrast with previous approaches, this technique does not use any specific model of the omnidirectional sensor. It only assumes that the imaging function can be described by a Taylor series expansion whose coefficients are estimated by solving a four-step least-squares linear minimization problem, followed by a non-linear refinement based on the maximum likelihood criterion. To validate the proposed technique and evaluate its performance, we apply the calibration to both simulated and real data. Moreover, we demonstrate the calibration accuracy by projecting the color information from a calibrated camera onto real 3D points extracted by a 3D SICK laser range finder. Finally, we provide a toolbox which implements the proposed calibration procedure.
Index Terms—Catadioptric, omnidirectional, camera, calibration, toolbox.
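The linear part of the estimation exploits the fact that the Taylor model is linear in its coefficients, so each step reduces to an ordinary least-squares solve. A toy sketch on synthetic, noise-free data with a normalized image radius (the real four-step procedure also recovers the extrinsics of each checkerboard view; the coefficients here are made up):

```python
import numpy as np

# Synthetic "ground truth" coefficients (a0, a2, a3), illustrative only.
a_true = np.array([-1.8, 0.9, -0.3])

# Normalized image radii (rho / rho_max) keep the system well conditioned.
rho = np.linspace(0.05, 1.0, 50)
z = a_true[0] + a_true[1] * rho**2 + a_true[2] * rho**3

# The model is linear in its coefficients, so a Vandermonde-style
# least-squares system recovers them directly.
V = np.column_stack([np.ones_like(rho), rho**2, rho**3])
a_est, *_ = np.linalg.lstsq(V, z, rcond=None)
```

With noisy corner detections this least-squares estimate is only an initialization; the abstract's maximum-likelihood refinement then polishes it non-linearly.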
Extrinsic Self Calibration of a Camera and a 3D Laser Range Finder from Natural Scenes
2007
"... In this paper, we describe a new approach for the extrinsic calibration of a camera with a 3D laser range finder, that can be done on the fly. This approach does not require any calibration object. Only few point correspondences (at least 4) are used, which are manually selected by the user from a ..."
Cited by 37 (1 self)
In this paper, we describe a new approach for the extrinsic calibration of a camera with a 3D laser range finder that can be done on the fly. This approach does not require any calibration object. Only a few point correspondences (at least four) are used, which are manually selected by the user from a scene viewed by the two sensors. The proposed method relies on a novel technique to visualize the range information obtained from a 3D laser scanner. This technique converts the visually ambiguous 3D range information into a 2D map in which natural features of a scene are highlighted. These features represent depth discontinuities and direction changes in the range image. We show that by enhancing these features the user can easily find the points corresponding to the camera image points; visually identifying laser-camera correspondences thus becomes as easy as image pairing. Once point correspondences are given, extrinsic calibration is done using the well-known PnP algorithm followed by a non-linear refinement process. We show the performance of our approach through experimental results, in which we use an omnidirectional camera. The implication of this method is important because it brings 3D computer vision systems out of the laboratory and into practical use.
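The refinement stage minimizes a standard reprojection error over the extrinsics. A self-contained sketch of such a cost for a pinhole model, with a Rodrigues parameterization of the rotation; the intrinsics, pose, and points below are synthetic (the paper's camera is omnidirectional, so its actual projection function differs):

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector to rotation matrix: a common way to
    parameterize the rotation during non-linear refinement."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def reprojection_cost(w, t, laser_pts, image_pts, K):
    """Sum of squared pixel residuals between projected laser points
    and the hand-picked image correspondences."""
    R = rodrigues(w)
    cam = (R @ laser_pts.T).T + t      # laser frame -> camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division
    return np.sum((proj - image_pts) ** 2)

# Synthetic check: with the true extrinsics the cost is (near) zero.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
w_true = np.array([0.1, -0.2, 0.05])
t_true = np.array([0.2, 0.0, 0.1])
laser = np.array([[0, 0, 2.0], [1, 0, 3.0], [0, 1, 2.5], [1, 1, 4.0]])
cam = (rodrigues(w_true) @ laser.T).T + t_true
img = (K @ cam.T).T
img = img[:, :2] / img[:, 2:3]
c = reprojection_cost(w_true, t_true, laser, img, K)
```

A PnP solution from the four manual correspondences would initialize (w, t); any generic non-linear least-squares routine can then drive this cost down.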
Image-based visual servoing for nonholonomic mobile robots using epipolar geometry
- IEEE Transactions on Robotics, 2007
"... Abstract—We present an image-based visual servoing strategy for driving a nonholonomic mobile robot equipped with a pinhole camera toward a desired configuration. The proposed approach, which exploits the epipolar geometry defined by the current and desired camera views, does not need any knowledge ..."
Cited by 36 (3 self)
Abstract—We present an image-based visual servoing strategy for driving a nonholonomic mobile robot equipped with a pinhole camera toward a desired configuration. The proposed approach, which exploits the epipolar geometry defined by the current and desired camera views, does not need any knowledge of the 3-D scene geometry. The control scheme is divided into two steps. In the first, using an approximate input–output linearizing feedback, the epipoles are zeroed so as to align the robot with the goal. Feature points are then used in the second, translational step to reach the desired configuration. Asymptotic convergence to the desired configuration is proven in both the calibrated and partially calibrated cases. Simulation and experimental results show the effectiveness of the proposed control scheme.
Index Terms—Epipolar geometry, image-based visual servoing (IBVS), input–output feedback linearization, nonholonomic mobile robots.
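As a crude stand-in for the first step (not the authors' exact linearizing law), zeroing the epipoles amounts to driving a bearing error to zero with rotational feedback; a minimal simulation of that idea for a unicycle's heading:

```python
import numpy as np

def simulate_alignment(theta0, k=2.0, dt=0.01, steps=500):
    """Rotation-only proportional feedback on the bearing error theta:
    a toy analogue of the epipole-zeroing step, since the epipole's
    image abscissa vanishes exactly when the robot faces the goal."""
    theta = theta0
    for _ in range(steps):
        omega = -k * theta      # angular-velocity command
        theta += omega * dt     # unicycle heading integration
    return theta

residual = simulate_alignment(0.8)  # 0.8 rad initial misalignment
```

In the paper the feedback acts on the measured epipoles themselves, which is what removes the need for 3-D scene knowledge.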
Dynamosaics: Video mosaics with non-chronological time
- In CVPR ’05, 2005
"... With the limited field of view of human vision, our perception of most scenes is built over time while our eyes are scanning the scene. In the case of static scenes this process can be modeled by panoramic mosaicing: stitching together images into a panoramic view. Can a dynamic scene, scanned by a ..."
Cited by 29 (7 self)
With the limited field of view of human vision, our perception of most scenes is built over time while our eyes scan the scene. In the case of static scenes this process can be modeled by panoramic mosaicing: stitching images together into a panoramic view. Can a dynamic scene, scanned by a video camera, be represented with a dynamic panoramic video even though different regions were visible at different times? In this paper we explore time-flow manipulation in video, such as the creation of new videos in which events that occurred at different times are displayed simultaneously. More general changes in the time flow are also possible, enabling, for example, re-scheduling the order of dynamic events in the video. We generate dynamic mosaics by sweeping the aligned space-time volume of the input video with a time-front surface, generating a sequence of time slices in the process. Various sweeping strategies and different time-front evolutions manipulate the time flow in the video, enabling many unexplored and powerful effects, such as panoramic movies.
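The core operation — sampling one output frame where a time-front surface cuts the aligned space-time volume — can be sketched in a few lines; the toy volume below simply encodes the frame index in every pixel so the slice is easy to inspect:

```python
import numpy as np

def time_slice(volume, front):
    """Sample one output frame from an aligned space-time volume.
    volume: (T, H, W) array; front: (W,) array giving, per column,
    which input frame the time-front surface passes through."""
    T, H, W = volume.shape
    t = np.clip(np.round(front).astype(int), 0, T - 1)
    # Column x of the output comes from frame t[x] of the input.
    return volume[t, :, np.arange(W)].T   # (H, W) slice along the front

# Toy volume where every pixel value equals its frame index.
T, H, W = 8, 4, 6
vol = np.broadcast_to(np.arange(T)[:, None, None], (T, H, W)).astype(float)
# A slanted front: later columns sample later times, so one output frame
# mixes moments that occurred at different times (non-chronological).
front = np.linspace(0, T - 1, W)
frame = time_slice(vol, front)
```

Sweeping the front through the volume (shifting or reshaping it per output frame) yields the dynamic mosaic sequences described above.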
Multi-View Geometry of the Refractive Plane
"... Transparent refractive objects are one of the main problems in geometric vision that have been largely unexplored. The imaging and multi-view geometry of scenes with transparent or translucent objects with refractive properties is relatively less well understood than for opaque objects. The main obj ..."
Cited by 18 (2 self)
Transparent refractive objects remain one of the largely unexplored problems in geometric vision. The imaging and multi-view geometry of scenes with transparent or translucent objects with refractive properties is much less well understood than for opaque objects. The main objective of our work is to analyze the underlying multi-view relationships between cameras when the scene being viewed contains a single refractive planar surface separating two different media. Such a situation occurs, for example, in underwater photography. Our main result is to show the existence of geometric entities like the fundamental matrix and the homography matrix in such instances. In addition, under special circumstances we also show how to compute the relative pose between two cameras immersed in one of the two media.
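The multi-view relations here all flow from Snell's law at the planar interface. A minimal vector-form refraction routine (the air-to-water indices and the 45° ray are just example inputs):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n
    (pointing toward the incoming ray): Snell's law in vector form,
    t = (n1/n2) d + ((n1/n2) cos_i - cos_t) n."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n

# Air-to-water ray hitting the plane z = 0 from above at 45 degrees.
d = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
n = np.array([0.0, 0.0, 1.0])
t = refract(d, n, 1.0, 1.33)
```

Chaining this bending through the flat interface for every pixel ray is what produces the modified fundamental and homography matrices the abstract establishes.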
Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras
"... Abstract. Wepresentatechniqueformodelingnon-centralcatadioptric cameras consisting of a perspective camera and a rotationally symmetric conic reflector. While previous approaches use a central approximation and/or iterative methods for forward projection, we present an analytical solution. This allo ..."
Cited by 18 (8 self)
Abstract. We present a technique for modeling non-central catadioptric cameras consisting of a perspective camera and a rotationally symmetric conic reflector. While previous approaches use a central approximation and/or iterative methods for forward projection, we present an analytical solution. This allows computation of the optical path from a given 3D point to the given viewpoint by solving a 6th-degree forward projection equation for general conic mirrors. For a spherical mirror, the forward projection reduces to a 4th-degree equation, resulting in a closed-form solution. We also derive the forward projection equation for imaging through a refractive sphere (a non-central dioptric camera) and show that it is a 10th-degree equation. While central catadioptric cameras lead to conic epipolar curves, we show the existence of a quartic epipolar curve for catadioptric systems using a spherical mirror. The analytical forward projection leads to accurate and fast 3D reconstruction via bundle adjustment. Simulations and real results on single-image sparse 3D reconstruction are presented. We demonstrate a ∼100× speed-up using the analytical solution over iterative forward projection for 3D reconstruction with spherical mirrors.
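Rather than reproducing the paper's quartic, the same forward-projection constraint for a spherical mirror — the mirror normal must bisect the directions to the viewpoint and the scene point — can be checked numerically in a 2D axial cross-section; the geometry below is a synthetic, symmetric configuration whose solution is known to be θ = π/4:

```python
import numpy as np

def reflection_residual(theta, cam, pt, R=1.0):
    """Residual of the mirror-reflection constraint at the candidate
    mirror point m(theta) on a circle of radius R (2D axial slice):
    the normal must make equal angles with the directions to the
    camera and to the scene point."""
    m = R * np.array([np.sin(theta), np.cos(theta)])
    n = m / R                                  # outward sphere normal
    to_cam = (cam - m) / np.linalg.norm(cam - m)
    to_pt = (pt - m) / np.linalg.norm(pt - m)
    return np.dot(n, to_cam) - np.dot(n, to_pt)

def forward_project(cam, pt, R=1.0, lo=0.0, hi=np.pi / 2):
    """Bisection on the reflection constraint; the paper instead solves
    this root (one of at most four) from a closed-form quartic."""
    f_lo = reflection_residual(lo, cam, pt, R)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f_lo * reflection_residual(mid, cam, pt, R) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, reflection_residual(mid, cam, pt, R)
    return 0.5 * (lo + hi)

cam = np.array([0.0, 3.0])   # perspective camera on the mirror axis
pt = np.array([3.0, 0.0])    # scene point, placed symmetrically to cam
theta = forward_project(cam, pt)
```

The paper's contribution is precisely that this root-finding collapses to solving one quartic per point, which is what makes analytic Jacobians and fast bundle adjustment possible.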