Results 1-10 of 57
Mosaicing New Views: The Crossed-Slits Projection
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Abstract

Cited by 79 (6 self)
Abstract—We introduce a new kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of an X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. Index Terms—Nonstationary mosaicing, crossed-slits projection, pushbroom camera, virtual walkthrough, image-based rendering.
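The projection rule in the abstract (the ray through a 3D point that meets both slits) can be computed as the intersection of two planes: the plane spanned by slit 1 and the point, and the plane spanned by slit 2 and the point. A minimal sketch, with illustrative names not taken from the paper, assuming the slits are skew lines given as (point, direction) pairs:

```python
# Sketch of the X-Slits projection ray: the unique line through a 3D
# point P that intersects two given (skew) slit lines.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def xslits_ray(P, slit1, slit2):
    """Each slit is (point_on_line, direction). Returns (P, ray_direction),
    assuming the slits are skew and P lies on neither slit."""
    a1, d1 = slit1
    a2, d2 = slit2
    n1 = cross(d1, sub(P, a1))   # normal of the plane containing slit 1 and P
    n2 = cross(d2, sub(P, a2))   # normal of the plane containing slit 2 and P
    return P, cross(n1, n2)      # ray = intersection of the two planes
```

If the slits intersect, or P lies on a slit, the two plane normals become parallel and the cross product degenerates to the zero vector, so a real implementation would need to guard that case.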
Multi-view geometry for general camera models
PROCEEDINGS OF THE 2005 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR’05), VOLUME 1, 2005
Abstract

Cited by 34 (4 self)
We consider calibration and structure from motion tasks for a previously introduced, highly general imaging model, where cameras are modeled as possibly unconstrained sets of projection rays. This allows us to describe most existing camera types (at least those operating in the visible domain), including pinhole cameras, sensors with radial or more general distortions, catadioptric cameras (central or non-central), etc. Generic algorithms for calibration and structure from motion tasks (pose and motion estimation and 3D point triangulation) are outlined. The foundation for a multi-view geometry of non-central cameras is given, leading to the formulation of multi-view matching tensors, analogous to the fundamental matrices, trifocal and quadrifocal tensors of perspective cameras. Besides this, we also introduce a natural hierarchy of camera models: the most general model has unconstrained projection rays, whereas the most constrained model dealt with here is the central model, where all rays pass through a single point.
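One of the generic structure-from-motion tasks mentioned above, 3D point triangulation, reduces in a ray-based model to finding the point closest to all observed projection rays. A hedged sketch under that reading (function names are illustrative, not from the paper; ray directions are assumed to be unit length):

```python
# Least-squares triangulation for a generic ray-based camera model:
# recover X minimizing the summed squared distances to the rays, by
# solving (sum_i (I - d_i d_i^T)) X = sum_i (I - d_i d_i^T) a_i.

def triangulate(rays):
    """rays: list of (origin, unit_direction) tuples. Returns [x, y, z]."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for (a, d) in rays:
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * a[j]
    # Gaussian elimination with partial pivoting on the 3x3 normal system
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for j in range(c, 3):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    X = [0.0] * 3
    for r in (2, 1, 0):
        X[r] = (b[r] - sum(A[r][j] * X[j] for j in range(r + 1, 3))) / A[r][r]
    return X
```

With two or more non-parallel rays the normal matrix is full rank; a degenerate bundle (all rays parallel) would need separate handling.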
Non-Single Viewpoint Catadioptric Cameras: Geometry and Analysis
INTERNATIONAL JOURNAL OF COMPUTER VISION, 2001
On the Epipolar Geometry of the Crossed-Slits Projection
2003
Abstract

Cited by 28 (3 self)
The Crossed-Slits (X-Slits) camera is defined by two nonintersecting slits, which replace the pinhole in the common perspective camera. Each point in space is projected to the image plane by a ray which passes through the point and the two slits. The X-Slits projection model includes the pushbroom camera as a special case. In addition, it describes a certain class of panoramic images, which are generated from sequences obtained by translating pinhole cameras. In this paper ...
Towards complete generic camera calibration
In CVPR, 2005
Abstract

Cited by 28 (7 self)
We consider the problem of calibrating a highly generic imaging model that consists of a non-parametric association of a projection ray in 3D to every pixel in an image. Previous calibration approaches for this model do not seem to be directly applicable to cameras with large fields of view and non-central cameras. In this paper, we describe a complete calibration approach that should in principle be able to handle any camera that can be described by the generic imaging model. Initial calibration is performed using multiple images of overlapping calibration grids simultaneously. This is then improved using pose estimation and bundle adjustment-type algorithms. The approach has been applied to a wide variety of central and non-central cameras including fisheye lenses, catadioptric cameras with spherical and hyperbolic mirrors, and multi-camera setups. We also consider the question of whether non-central models are more appropriate for certain cameras than central models.
Stereo Reconstruction from Multiperspective Panoramas
2004
Abstract

Cited by 26 (2 self)
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and thus avoids the problems that arise in conventional multi-baseline stereo. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multi-baseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparably high-quality depth maps, which can be used for applications such as view interpolation.
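The approximately horizontal epipolar geometry described above is what lets a simple 1D search stand in for general stereo matching. As an illustration only (not the paper's multi-baseline algorithm; names and parameters are invented), a brute-force sum-of-absolute-differences disparity search along one panorama row might look like:

```python
# 1D disparity search along a horizontal epipolar line: for a pixel in
# the left row, find the horizontal shift minimizing the SAD cost over
# a small window. Rows are plain lists of intensities.

def best_disparity(row_left, row_right, x, window=2, max_disp=8):
    """Return the integer disparity d in [0, max_disp] that minimizes
    sum_k |row_left[x+k] - row_right[x+k-d]| over the window."""
    best, best_d = float("inf"), 0
    for d in range(max_disp + 1):
        cost = 0.0
        for k in range(-window, window + 1):
            xl, xr = x + k, x + k - d
            if 0 <= xl < len(row_left) and 0 <= xr < len(row_right):
                cost += abs(row_left[xl] - row_right[xr])
            else:
                cost += 255.0        # penalize out-of-bounds samples
        if cost < best:
            best, best_d = cost, d
    return best_d
```

A full system would add subpixel refinement and, as in the paper, aggregate over many panoramas rather than one pair.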
Geometry of Two-Slit Camera
2002
Abstract

Cited by 21 (0 self)
We analyze the geometry of the two-slit camera and come to two conclusions. First, we show that the definition given in [9] makes sense only if the two slits are not intersecting. Secondly, we prove that the complete image from a two-slit camera cannot be obtained as an intersection of the rays of the two-slit camera with a plane in space. Motivated by the quest for a unified representation of various cameras by simple geometrical objects, we give a new definition of linear oblique cameras as those which comprise all real lines incident with some non-real line, and show that it is equivalent to the definition we gave earlier. We also show that no single line, in either the real projective space or its complexification, can be used to define a two-slit camera analogously.
Multiperspective stereo matching and volumetric reconstruction
In ICCV, 2009
Abstract

Cited by 18 (1 self)
Stereo matching and volumetric reconstruction are the most explored 3D scene recovery techniques in computer vision. Many existing approaches assume perspective input images and use the epipolar constraint to reduce the search space and improve the accuracy. In this paper we present a novel framework that uses multiperspective cameras for stereo matching and volumetric reconstruction. Our approach first decomposes a multiperspective camera into piecewise primitive General Linear Cameras, or GLCs [32]. A pair of GLCs in general does not satisfy the epipolar constraint; however, the two still form a nearly stereo pair. We develop a new graph-cut-based algorithm to account for the slight vertical parallax using the GLC ray geometry. We show that the recovered pseudo-disparity map conveys important depth cues analogous to perspective stereo matching. To more accurately reconstruct a 3D scene, we develop a new multiperspective volumetric reconstruction method. We discretize the scene into voxels and apply the GLC back-projections to map each voxel onto each input multiperspective camera. Finally, we apply the graph-cut algorithm to optimize the 3D embedded voxel graph. We demonstrate our algorithms on both synthetic and real multiperspective cameras. Experimental results show that our methods are robust and reliable.
Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras
Abstract

Cited by 18 (8 self)
Abstract. We present a technique for modeling non-central catadioptric cameras consisting of a perspective camera and a rotationally symmetric conic reflector. While previous approaches use a central approximation and/or iterative methods for forward projection, we present an analytical solution. This allows computation of the optical path from a given 3D point to the given viewpoint by solving a 6th-degree forward projection equation for general conic mirrors. For a spherical mirror, the forward projection reduces to a 4th-degree equation, resulting in a closed-form solution. We also derive the forward projection equation for imaging through a refractive sphere (a non-central dioptric camera) and show that it is a 10th-degree equation. While central catadioptric cameras lead to conic epipolar curves, we show the existence of a quartic epipolar curve for catadioptric systems using a spherical mirror. The analytical forward projection leads to accurate and fast 3D reconstruction via bundle adjustment. Simulations and real results on single-image sparse 3D reconstruction are presented. We demonstrate an approximately 100-times speedup using the analytical solution over iterative forward projection for 3D reconstruction using spherical mirrors.
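The closed-form quartic for the spherical mirror is the paper's contribution; as a point of comparison, the same forward-projection problem can be solved numerically. The sketch below (all names are illustrative, not the paper's) works in a 2D cross-section and bisects on the law-of-reflection residual to find the mirror point:

```python
import math

# Numeric stand-in for spherical-mirror forward projection, in a 2D
# cross-section: find the angle of the point on a circle (mirror) at
# which a ray from the scene point reflects toward the camera.

def mirror_point(camera, point, radius=1.0, steps=80):
    """camera, point: 2D tuples; circle of given radius at the origin.
    Returns the angular position of the reflection point."""
    def residual(theta):
        m = (radius * math.cos(theta), radius * math.sin(theta))
        n = (math.cos(theta), math.sin(theta))          # outward normal
        def toward(p):
            v = (p[0] - m[0], p[1] - m[1])
            s = math.hypot(*v)
            return (v[0] / s, v[1] / s)
        u_cam, u_pt = toward(camera), toward(point)
        # law of reflection: equal angles to the normal on either side
        return n[0] * (u_cam[0] - u_pt[0]) + n[1] * (u_cam[1] - u_pt[1])
    lo = math.atan2(point[1], point[0])
    hi = math.atan2(camera[1], camera[0])
    for _ in range(steps):                 # bisection on the residual
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

This assumes a single reflection point between the two angular positions; the analytical quartic in the paper instead yields all candidate solutions at once, which is what makes it both faster and more robust inside bundle adjustment.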
Polydioptric Camera Design and 3D Motion Estimation
In Proc. IEEE Conference on Computer Vision and Pattern Recognition, Volume II, 2003
Abstract

Cited by 17 (5 self)
Most cameras used in computer vision applications are still based on the pinhole principle inspired by our own eyes. It has been found, though, that this is not necessarily the optimal image formation principle for processing visual information using a machine. In this paper we describe how to find the optimal camera for 3D motion estimation by analyzing the structure of the space formed by the light rays passing through a volume of space. Every camera corresponds to a sampling pattern in light-ray space; thus the question of camera design can be rephrased as finding the optimal sampling pattern with regard to a given task. This framework suggests that large field-of-view multiperspective (polydioptric) cameras are the optimal image sensors for 3D motion estimation. We conclude by proposing design principles for polydioptric cameras and describe an algorithm for such a camera that estimates its 3D motion in a scene-independent and robust manner.
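The light-ray-space view above needs a concrete coordinate system before sampling patterns can be compared; a common choice (used here purely as an illustration, not as the paper's own formulation) is the two-plane parameterization of rays:

```python
# Two-plane parameterization of light rays: a ray is recorded by its
# intersections (u, v) with the plane z = 0 and (s, t) with z = 1.
# A camera is then a set of such 4D samples.

def ray_to_two_plane(origin, direction):
    """Map a ray (origin, direction), with direction[2] != 0, to
    two-plane coordinates (u, v, s, t)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t0 = -oz / dz             # ray parameter where it hits z = 0
    t1 = (1.0 - oz) / dz      # ray parameter where it hits z = 1
    return (ox + t0 * dx, oy + t0 * dy,    # (u, v) on z = 0
            ox + t1 * dx, oy + t1 * dy)    # (s, t) on z = 1

def pinhole_samples(center, pixel_dirs):
    """A pinhole camera's sampling pattern: one ray per pixel direction,
    all sharing the same center of projection."""
    return [ray_to_two_plane(center, d) for d in pixel_dirs]
```

In these coordinates a pinhole camera occupies only a 2D slice of the 4D ray space, whereas the polydioptric cameras argued for above sample a full 4D neighborhood, which is what makes scene-independent motion estimation possible.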