Results 1–10 of 26
Plenoptic sampling
 In SIGGRAPH
, 2000
Cited by 249 (15 self)
Abstract
This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve antialiased light field rendering. Plenoptic sampling goes beyond the minimum number of images needed for antialiased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.
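As a rough illustration of the sampling analysis this abstract describes (a sketch under assumed symbols and a simplified replica-packing bound, not the paper's exact derivation), the snippet below computes the optimal constant rendering depth and the maximum camera spacing from assumed values of the scene depth range [z_min, z_max], focal length f, and pixel spacing dv; all variable names are hypothetical.

```python
def optimal_depth(z_min, z_max):
    # Optimal constant rendering depth: the harmonic mean of the depth
    # bounds, i.e. 1/z_opt = (1/z_min + 1/z_max) / 2.
    return 2.0 / (1.0 / z_min + 1.0 / z_max)

def max_camera_spacing(z_min, z_max, f, dv):
    # Non-overlap of the spectral replicas (spaced 2*pi/dt apart) against a
    # support of width f * (pi/dv) * (1/z_min - 1/z_max) gives
    # dt_max = 2*dv / (f * (1/z_min - 1/z_max)).
    return 2.0 * dv / (f * (1.0 / z_min - 1.0 / z_max))

z_opt = optimal_depth(4.0, 8.0)
dt = max_camera_spacing(4.0, 8.0, 1.0, 0.01)
```

Whatever the exact constant factors, the qualitative point survives: the spacing bound depends on the scene only through 1/z_min − 1/z_max, so a shallow depth range permits sparse camera sampling.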
Mosaic-based 3D scene representation and rendering
 In the IEEE Eleventh International Conference on Image Processing
, 2005
Cited by 6 (2 self)
Abstract
In this paper we address the problem of fusing images from many video cameras or a moving video camera. The captured images have obvious motion parallax, but they will be aligned and integrated into a few mosaics with a large field of view (FOV) that preserve 3D information. We have developed a geometric representation that can reorganize the original perspective images into a set of parallel projections with different oblique viewing angles. In addition to providing a wide field of view, mosaics with various oblique views well represent occlusion regions that cannot be seen in a usual nadir view. Stereo pair(s) can be formed from a pair of mosaics with different oblique viewing angles, and thus image-based 3D viewing can be achieved. This representation can be used as both an advanced video interface and a preprocessing step for 3D reconstruction.
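A minimal sketch of how parallel-projection mosaics can be assembled from a laterally translating video camera, assuming one fixed slit column is taken per frame (the column index and frame layout here are hypothetical, not the paper's actual pipeline):

```python
import numpy as np

def slit_mosaic(frames, col):
    # Each perspective frame contributes one fixed column ("slit");
    # stacking the slits over a laterally translating camera yields a
    # parallel-perspective mosaic: parallel projection in the motion
    # direction, perspective in the orthogonal direction.
    return np.stack([f[:, col] for f in frames], axis=1)

# Different col choices give different oblique viewing angles; two mosaics
# built from columns left and right of center form a stereo pair.
frames = [np.full((4, 5), i, dtype=np.uint8) for i in range(3)]
m = slit_mosaic(frames, 2)  # one 4-pixel column per frame, 3 frames
```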
Dynamic 3D urban scene modeling using multiple pushbroom mosaics
 In the Third International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2006), University of North Carolina
, 2006
Cited by 6 (3 self)
Abstract
In this paper, a unified, segmentation-based approach is proposed to deal with both stereo reconstruction and moving object detection problems using multiple stereo mosaics. Each set of parallel-perspective (pushbroom) stereo mosaics is generated from a video sequence captured by a single video camera. First, a color-segmentation approach is used to extract so-called natural matching primitives from a reference view of a pair of stereo mosaics, to facilitate 3D reconstruction of both textureless urban scenes and man-made moving targets (e.g. vehicles). Multiple pairs of stereo mosaics are used to improve the accuracy and robustness of 3D recovery and occlusion handling. Moving targets are detected by inspecting their 3D anomalies: either violating the epipolar geometry of the pushbroom stereo or exhibiting abnormal 3D structure. Experimental results on both simulated and real video sequences are provided to show the effectiveness of our approach.
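A hedged sketch of the 3D-anomaly idea (a hypothetical interface, not the paper's implementation): if the same scene point yields inconsistent depth estimates across independent stereo-mosaic pairs, it violates the static-scene pushbroom geometry and can be flagged as a moving-target candidate.

```python
import numpy as np

def flag_moving(depth_estimates, tol=0.5):
    # depth_estimates: array of shape (k, h, w) holding depths of the same
    # scene recovered from k independent stereo-mosaic pairs. A static
    # point should get consistent depths; a moving target violates the
    # pushbroom epipolar geometry, so its per-pair "depths" disagree.
    d = np.asarray(depth_estimates, dtype=float)
    spread = d.max(axis=0) - d.min(axis=0)
    return spread > tol  # boolean (h, w) mask of moving-target candidates
```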
Omnidirectional Stereo Vision
 Proc. of ICAR’01
, 2001
Cited by 6 (0 self)
Abstract
This paper discusses several interesting configurations of omnidirectional stereo (omnistereo): binocular omnistereo, N-ocular omnistereo, circular projection omnistereo, and dynamic omnistereo. An omnidirectional image can be either obtained by an omnidirectional camera or generated by image mosaicing. Usually an omnidirectional image has a 360-degree view around a viewpoint and, in its most common form, can be presented on a cylindrical surface around the viewpoint. This paper shows that an omnidirectional image for stereo vision can have either a single viewpoint or multiple viewpoints, and can be either viewer-centered or object-centered. With these generalizations, omnidirectional stereo vision can be extended from a viewer-centered binocular / N-ocular omnistereo with a few fixed viewpoints to more interesting omnistereo configurations – circular projection omnistereo with many viewpoints in a small region, dynamic omnistereo with a few reconfigurable viewpoints in a large region, and object-centered omnistereo with many viewpoints distributed in a large region. Important issues in omnidirectional stereo imaging, epipolar geometry, and depth accuracy are discussed and compared.
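For the binocular case, depth recovery reduces to plane triangulation between two panoramic viewpoints. The following is a minimal sketch (assumed conventions: both azimuths measured from the direction of the baseline; names hypothetical), not taken from the paper:

```python
import math

def depth_from_angles(baseline, theta1, theta2):
    # 2-D triangulation for a binocular omnistereo pair: two panoramic
    # viewpoints a distance `baseline` apart observe the same point at
    # azimuths theta1 (first viewpoint) and theta2 (second viewpoint),
    # both measured from the baseline direction. By the law of sines,
    # the range from the first viewpoint is
    #   r1 = baseline * sin(theta2) / sin(theta2 - theta1).
    return baseline * math.sin(theta2) / math.sin(theta2 - theta1)
```

Note the denominator sin(theta2 − theta1): as the angular disparity shrinks (distant points), the estimate becomes increasingly sensitive to angular error, which is the depth-accuracy issue the paper compares across configurations.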
Error Characteristics of Parallel-Perspective Stereo Mosaics
, 2001
Cited by 5 (3 self)
Abstract
This paper analyzes different aspects of the error characteristics of parallel-perspective stereo mosaics generated from an airborne video camera moving through a complex three-dimensional scene. First, we show that theoretically a stereo pair of parallel-perspective mosaics is a good representation for an extended scene, and the adaptive baseline inherent to the geometry permits depth accuracy independent of absolute depth. Second, in practice, we have proposed a 3D mosaicing technique, PRISM (parallel-ray interpolation for stereo mosaicing), that uses inter-frame matching to interpolate the camera position between the original exposure centers of video frames taken at discrete spatial steps. By analyzing the errors introduced by a 2D mosaicing method, we explain why the "3D mosaicing" solution is important to the problem of generating smooth and accurate mosaics while preserving stereoscopic information. We further examine whether this ray interpolation step introduces extra errors in depth recovery from stereo mosaics by comparing with the typical perspective stereo formulation. Third, the error characteristics of parallel stereo mosaics from cameras with different configurations of focal lengths and image resolutions are analyzed. Results for mosaic construction from aerial video data of real scenes and for 3D reconstruction from these mosaics are given. We conclude that (1) stereo mosaics generated with the PRISM method have significantly smaller errors in 3D recovery (even if not depth independent) due to the adaptive baseline geometry; and (2) a longer focal length is better, since stereo matching becomes more accurate.
Plenoptic Video Geometry
 The Visual Computer
, 2003
Cited by 5 (4 self)
Abstract
More and more processing of visual information is nowadays done by computers, but the images captured by conventional cameras are still based on the pinhole principle inspired by our own eyes. This principle, though, is not necessarily the optimal image formation principle for automated processing of visual information. Each camera samples the space of light rays according to some pattern. If we understand the structure of the space formed by the light rays passing through a volume of space, we can determine the camera, or in other words the sampling pattern of light rays, that is optimal with regard to a given task. In this work we analyze the differential structure of the space of time-varying light rays described by the plenoptic function and use this analysis to relate the rigid motion of an imaging device to the derivatives of the plenoptic function. The results can be used to define a hierarchy of camera models with respect to the structure-from-motion problem and to formulate a linear, scene-independent estimation problem for the rigid motion of the sensor purely in terms of the captured images.
3D measurements in cargo inspection with a gamma-ray linear pushbroom stereo system
 In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)
, 2005
3D and Moving Target Extraction from Dynamic Pushbroom Stereo Mosaics
 IEEE Workshop on Advanced 3D Imaging for Safety and Security (with CVPR’05), June 25, 2005
Cited by 4 (3 self)
Abstract
Our goal is to acquire panoramic mosaic maps with motion tracking information for 3D (moving) ...
Real-time video mosaicing with adaptive parameterized warping
 In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Demo Program)
, 2001
Cited by 3 (0 self)
Abstract
This paper briefly describes a system for real-time video mosaic construction using intensity-based registration. One novel aspect of this algorithm is the dynamic selection of warping models, including similarity transforms, affine models, projective models, and quadratic models, based on the complexity of the incoming imagery. A second result is a new method for subsampling the images based on the sensitivity of image pixels in template matching. By exploiting the latter, it is possible to greatly accelerate the calculation. Mosaicing results for real image sequences under the different parametric models are shown in this paper; all are processed in real time.
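The model-selection idea can be sketched like this (a simplified, hypothetical version: only an affine fit on matched points is shown, with fallback to a richer model; the actual system's registration and criteria are not specified in the abstract):

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares 2-D affine fit: dst ~= [src, 1] @ sol, with sol a 3x2
    # matrix stacking the linear part and the translation.
    M = np.hstack([src, np.ones((len(src), 1))])
    sol, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return sol

def rms_residual(src, dst, sol):
    M = np.hstack([src, np.ones((len(src), 1))])
    return float(np.sqrt(np.mean(np.sum((M @ sol - dst) ** 2, axis=1))))

def select_model(src, dst, tol=0.5):
    # Adaptive selection: accept the simpler (affine) warp when its RMS
    # alignment residual is below tol; otherwise escalate to a richer
    # model (projective, quadratic, ...), not implemented in this sketch.
    sol = fit_affine(src, dst)
    if rms_residual(src, dst, sol) <= tol:
        return "affine", sol
    return "projective", None
```

The design point is the escalation order: simpler warps have fewer degrees of freedom and are cheaper and more stable to estimate, so a richer model is used only when the simpler one fails to explain the observed motion.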
COMPUTER VISION IN THE SPACE OF LIGHT RAYS: PLENOPTIC VIDEOGEOMETRY AND POLYDIOPTRIC CAMERA DESIGN
, 2004
Cited by 2 (0 self)
Abstract
Most of the cameras used in computer vision, computer graphics, and image processing applications are designed to capture images that are similar to the images we see with our eyes. This enables an easy interpretation of the visual information by a human observer. Nowadays, though, more and more processing of visual information is done by computers. Thus, it is worth questioning whether these human-inspired “eyes” are the optimal choice for processing visual information using a machine. In this thesis I will describe how one can study problems in computer vision without reference to a specific camera model by studying the geometry and statistics of the space of light rays that surrounds us. The study of the geometry will allow us to determine all the possible constraints that exist in the visual input and could be utilized if we had a perfect sensor. Since no perfect sensor exists, we use signal processing techniques to examine how well the constraints between different sets of light rays can be exploited given a specific camera model. A camera is modeled as a spatio-temporal filter in the space of light rays, which lets us express the image formation process in a function approximation framework. This framework then allows us to relate the geometry of the ...