Results 1–10 of 85
Spacetime stereo: A unifying framework for depth from triangulation
 In CVPR
, 2003
Cited by 130 (5 self)
Abstract—Depth from triangulation has traditionally been investigated in a number of independent threads of research, with methods such as stereo, laser scanning, and coded structured light considered separately. In this paper, we propose a common framework called spacetime stereo that unifies and generalizes many of these previous methods. To show the practical utility of the framework, we develop two new algorithms for depth estimation: depth from unstructured illumination change and depth estimation in dynamic scenes. Based on our analysis, we show that methods derived from the spacetime stereo framework can be used to recover depth in situations in which existing methods perform poorly. Index Terms—Depth from triangulation, stereo, spacetime stereo.
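The space-time matching idea admits a compact illustrative sketch: instead of comparing purely spatial windows, the matching cost sums squared differences over a spatiotemporal window, so temporal intensity variation (e.g. unstructured illumination change) disambiguates matches. This is a minimal sketch under assumptions of my own, not the paper's implementation: rectified video volumes in `(T, H, W)` layout, fixed window sizes, and a brute-force disparity search.

```python
import numpy as np

def spacetime_ssd(left, right, x, y, d, wx=3, wy=3, wt=3):
    """Sum of squared differences over a spatiotemporal window.

    left, right: rectified video volumes of shape (T, H, W).
    (x, y): pixel in the left view; d: candidate disparity.
    The window spans wt frames and a wy-by-wx spatial patch, so temporal
    appearance changes help disambiguate the match.
    """
    t0 = left.shape[0] // 2
    L = left[t0 - wt // 2 : t0 + wt // 2 + 1,
             y - wy // 2 : y + wy // 2 + 1,
             x - wx // 2 : x + wx // 2 + 1]
    R = right[t0 - wt // 2 : t0 + wt // 2 + 1,
              y - wy // 2 : y + wy // 2 + 1,
              x - d - wx // 2 : x - d + wx // 2 + 1]
    return float(np.sum((L - R) ** 2))

def best_disparity(left, right, x, y, d_max):
    """Brute-force winner-take-all search over candidate disparities."""
    costs = [spacetime_ssd(left, right, x, y, d) for d in range(d_max + 1)]
    return int(np.argmin(costs))
```

With a temporally varying random pattern (a stand-in for unstructured illumination), the spacetime window pins down the correct disparity even though any single frame is locally ambiguous.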
Mosaicing New Views: The CrossedSlits Projection
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2003
Cited by 79 (6 self)
Abstract—We introduce a new kind of mosaicing, where the position of the sampling strip varies as a function of the input camera location. The new images that are generated this way correspond to a new projection model defined by two slits, termed here the Crossed-Slits (X-Slits) projection. In this projection model, every 3D point is projected by a ray defined as the line that passes through that point and intersects the two slits. The intersection of the projection rays with the imaging surface defines the image. X-Slits mosaicing provides two benefits. First, the generated mosaics are closer to perspective images than traditional pushbroom mosaics. Second, by simple manipulations of the strip sampling function, we can change the location of one of the virtual slits, providing a virtual walkthrough of an X-Slits camera; all this can be done without recovering any 3D geometry and without calibration. A number of examples where we translate the virtual camera and change its orientation are given; the examples demonstrate realistic changes in parallax, reflections, and occlusions. Index Terms—Non-stationary mosaicing, crossed-slits projection, pushbroom camera, virtual walkthrough, image-based rendering.
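The X-Slits projection can be written down directly: the ray through a scene point that meets both slits lies in the plane spanned by the point and slit 1 and in the plane spanned by the point and slit 2, so its direction is the cross product of the two plane normals. A hedged sketch, with slits represented as point-plus-direction lines and the image taken on the plane z = z_img (both representation choices are mine, not the paper's):

```python
import numpy as np

def xslits_project(P, q1, s1, q2, s2, z_img=0.0):
    """Project 3D point P through a Crossed-Slits camera.

    Each slit is a line q_i + t * s_i. The projection ray is the unique
    line through P that meets both slits: it lies in plane(slit1, P) and
    in plane(slit2, P), so its direction is the cross product of the two
    plane normals. The image is taken on the plane z = z_img.
    """
    P, q1, s1, q2, s2 = map(np.asarray, (P, q1, s1, q2, s2))
    n1 = np.cross(s1, P - q1)   # normal of the plane through slit 1 and P
    n2 = np.cross(s2, P - q2)   # normal of the plane through slit 2 and P
    d = np.cross(n1, n2)        # ray direction through P
    t = (z_img - P[2]) / d[2]   # parameter where the ray meets the image plane
    return P + t * d
```

For example, with slit 1 the x-axis at height z = 1 and slit 2 the y-axis at z = 2, the ray through a point provably crosses both slits before hitting the image plane.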
Stereo with Oblique Cameras
, 2001
Cited by 58 (3 self)
Mosaics acquired by pushbroom cameras, stereo panoramas, omnivergent mosaics, and spherical mosaics can be viewed as images taken by non-central cameras, i.e. cameras that project along rays that do not all intersect at one point. It has been shown that in order to reduce the correspondence search in mosaics to a one-parametric search along curves, the rays of the non-central cameras have to lie in double ruled epipolar surfaces. In this work, we introduce the oblique stereo geometry, which has non-intersecting double ruled epipolar surfaces. We analyze the configurations of mutually oblique rays that see every point in space. We call such configurations oblique cameras. We argue that oblique cameras are important because they are the most non-central cameras among all cameras. We show that oblique cameras, and the corresponding oblique stereo geometry, exist and give an example of a physically realizable oblique stereo geometry.
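The defining property of mutually oblique rays, being pairwise skew, has a simple algebraic test: two lines intersect or are parallel exactly when the signed volume (p2 − p1) · (d1 × d2) vanishes. A small sketch of this check (the point-plus-direction line representation and the tolerance are assumptions for illustration):

```python
import numpy as np

def are_oblique(p1, d1, p2, d2, eps=1e-9):
    """True iff the rays p1 + t*d1 and p2 + t*d2 are mutually oblique
    (skew): they neither intersect nor are parallel. The test is whether
    the signed volume (p2 - p1) . (d1 x d2) is nonzero; it vanishes both
    for coplanar (intersecting) lines and for parallel lines (d1 x d2 = 0).
    """
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    return abs(np.dot(p2 - p1, np.cross(d1, d2))) > eps
```

For instance, the x-axis and a y-parallel line lifted to z = 1 are oblique, while two lines through the origin, or two parallel lines, are not.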
Understanding camera trade-offs through a Bayesian analysis of light field projections
 MIT CSAIL TR
, 2008
Cited by 33 (6 self)
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the trade-offs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the trade-offs of each camera type and analyze their limitations.
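With Gaussian priors and noise, the Bayesian inference step reduces to the standard linear-Gaussian posterior mean. The sketch below shows that estimator for a discretized light field; the function name and the plain Gaussian prior covariance `Sigma_x` are illustrative assumptions, not the paper's actual light-field prior, which is a richer signal model.

```python
import numpy as np

def mmse_light_field(A, y, Sigma_x, sigma_n):
    """Posterior-mean (MMSE) reconstruction of a discretized light field x
    from noisy linear sensor measurements y = A @ x + noise.

    Each row of A is one sensor element's inner product with the light
    field; Sigma_x is a Gaussian prior covariance on x and sigma_n the
    noise standard deviation. Returns
        x_hat = Sigma_x A^T (A Sigma_x A^T + sigma_n^2 I)^{-1} y.
    """
    S = A @ Sigma_x @ A.T + sigma_n**2 * np.eye(A.shape[0])
    return Sigma_x @ A.T @ np.linalg.solve(S, y)
```

As a sanity check: with an identity projection and vanishing noise the estimate recovers the measurement, while strong noise shrinks it toward the (zero-mean) prior.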
A perspective on distortions
 Conference on Computer Vision and Pattern Recognition
, 2003
Cited by 32 (2 self)
A framework for analyzing distortions in non-single viewpoint imaging systems is presented. Such systems possess loci of viewpoints called caustics. In general, perspective (or undistorted) views cannot be computed from images acquired with such systems without knowing scene structure. Views computed without scene structure will exhibit distortions which we call caustic distortions. We first introduce a taxonomy of distortions based on the geometry of imaging systems. Then, we derive a metric to quantify caustic distortions. We present an algorithm to compute minimally distorted views using simple priors on scene structure. These priors are defined as parameterized primitives such as spheres, planes and cylinders with simple uncertainty models for the parameters. ...
Non-Single Viewpoint Catadioptric Cameras: Geometry and Analysis
 INTERNATIONAL JOURNAL OF COMPUTER VISION
, 2001
On the Epipolar Geometry of the Crossed-Slits Projection
, 2003
Cited by 28 (3 self)
The Crossed-Slits (X-Slits) camera is defined by two non-intersecting slits, which replace the pinhole in the common perspective camera. Each point in space is projected to the image plane by a ray which passes through the point and the two slits. The X-Slits projection model includes the pushbroom camera as a special case. In addition, it describes a certain class of panoramic images, which are generated from sequences obtained by translating pinhole cameras. In this paper ...
Stereo Reconstruction from Multiperspective Panoramas
, 2004
Cited by 26 (2 self)
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and thus avoids the problems inherent in conventional multi-baseline stereo. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multi-baseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparably high-quality depth maps, which can be used for applications such as view interpolation.
The Raxel Imaging Model and Ray-Based Calibration
, 2005
Cited by 26 (3 self)
An imaging model provides a mathematical description of correspondence between points in a scene and in an image. The dominant imaging model, perspective projection, has long been used to describe traditional cameras as well as the human eye. We propose an imaging model which is flexible enough to represent an arbitrary imaging system. For example, using this model we can describe systems using fisheye lenses or compound insect eyes, which violate the assumptions of perspective projection. By relaxing the requirements of perspective projection, we give imaging system designers greater freedom to explore systems which meet other requirements such as compact size and wide field of view. We formulate our model by noting that all imaging systems perform a mapping from incoming scene rays to photosensitive elements on the image detector. This mapping can be conveniently described using a set of virtual sensing elements called raxels. Raxels include geometric, radiometric and optical properties. We present a novel ray-based calibration method that uses structured light patterns to extract the raxel parameters of an arbitrary imaging system. Experimental results for perspective as well as non-perspective imaging systems are included.
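The raxel idea, pairing every pixel with the scene ray it actually integrates, can be sketched as a small data structure, with the geometric calibration step recovering each ray from the two 3D points the pixel observes on a near and a far structured-light plane. The class layout and function names below are illustrative assumptions; the actual method also recovers radiometric and optical raxel parameters, omitted in this sketch.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Raxel:
    """One virtual sensing element: a detector pixel together with the
    scene ray it integrates (geometric part of the raxel model only)."""
    pixel: tuple           # (u, v) detector coordinates
    origin: np.ndarray     # a point on the incoming scene ray
    direction: np.ndarray  # unit ray direction

def calibrate_from_planes(pixel, p_near, p_far):
    """Ray-based calibration for one pixel: given the 3D points this pixel
    observes on two structured-light planes (near and far), the raxel's
    ray is the line through them."""
    p_near = np.asarray(p_near, float)
    p_far = np.asarray(p_far, float)
    d = p_far - p_near
    return Raxel(pixel, p_near, d / np.linalg.norm(d))
```

Repeating this per pixel yields a lookup table from detector coordinates to rays, which is exactly the generality needed for fisheye or catadioptric systems where no single center of projection exists.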