Results 1 - 5 of 5
Compression and Interpolation of 3D-Stereoscopic and Multi-View Video, 1997
Abstract - Cited by 11 (2 self)
Compression and interpolation each require, given part of an image, or part of a collection or stream of images, being able to predict other parts. Compression is achieved by transmitting part of the imagery along with instructions for predicting the rest of it; of course, the instructions are usually much shorter than the unsent data. Interpolation is just a matter of predicting part of the way between two extreme images; however, whereas in compression the original image is known at the encoder, and thus the residual can be calculated, compressed, and transmitted, in interpolation the actual intermediate image is not known, so it is not possible to improve the final image quality by adding back the residual image. Practical 3D-video compression methods typically use a system with four modules: (1) coding one of the streams (the main stream) using a conventional method (e.g., MPEG), (2) calculating the disparity map(s) between corresponding points in the main stream and the auxiliary stream(s), (3) coding the disparity maps, and (4) coding the residuals. It is natural and usually advantageous to integrate motion compensation with the disparity calculation and coding. The efficient coding and transmission of the residuals is usually the only practical way to handle occlusions, and the ultimate performance of beginning-to-end systems is usually dominated by the cost of this coding. In this paper we summarize the background principles, explain the innovative features of our implementation steps, and provide quantitative measures of component and system performance.
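The disparity-compensated prediction with residual coding that this abstract describes can be sketched as follows. This is an illustrative toy in NumPy, not the paper's implementation: the `disparity_predict` helper, the constant disparity map, and the random images are all assumptions for demonstration.

```python
import numpy as np

def disparity_predict(main, disparity):
    """Predict the auxiliary view by shifting each row of the main
    view horizontally by its per-pixel disparity (nearest-pixel warp)."""
    h, w = main.shape
    cols = np.arange(w)
    pred = np.empty_like(main)
    for y in range(h):
        src = np.clip(cols - disparity[y], 0, w - 1)
        pred[y] = main[y, src]
    return pred

rng = np.random.default_rng(0)
main = rng.integers(0, 256, (4, 8)).astype(np.int16)   # transmitted main stream
aux = rng.integers(0, 256, (4, 8)).astype(np.int16)    # actual auxiliary view
disparity = np.full((4, 8), 2, dtype=int)              # toy disparity map

# Encoder: the auxiliary view is known here, so the residual between
# the actual view and its disparity-based prediction can be computed.
pred = disparity_predict(main, disparity)
residual = aux - pred

# Decoder: main stream + disparity map + residual reconstruct the
# auxiliary view exactly (in practice the residual is lossily coded).
recon = disparity_predict(main, disparity) + residual
assert np.array_equal(recon, aux)
```

The same warp without the residual add-back is the interpolation case the abstract contrasts: the prediction alone is the output, since no true intermediate image exists to form a residual against.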
Synthesis of a high-resolution 3D-stereoscopic image pair from a high-resolution monoscopic image and a low-resolution depth map, 1998
Abstract - Cited by 4 (0 self)
We describe a new low-level scheme to achieve high definition 3D-stereoscopy within the bandwidth of the monoscopic HDTV infrastructure. Our method uses a studio quality monoscopic high resolution color camera to generate a transmitted "main stream" view, and a flanking 3D-stereoscopic pair of low cost, low resolution monochrome camera "outriggers" to generate a depth map of the scene. The depth map is deeply compressed and transmitted as a low bandwidth "auxiliary stream". The two streams are recombined at the receiver to generate a 3D-stereoscopic pair of high resolution color views from the perspectives of the original outriggers. Alternatively, views from two arbitrary perspectives between (and, to a limited extent, beyond) the low resolution monoscopic camera positions can be synthesized to accommodate individual viewer preferences. We describe our algorithms, and the design and outcome of initial experiments. The experiments begin with three NTSC color images, degrade the outer p...
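The recombination step described above — warping a high-resolution color view through an upsampled low-resolution depth map to synthesize a nearby perspective — can be sketched like this. It is a minimal sketch, not the authors' algorithm: the nearest-neighbour upsampling, the inverse-depth disparity model, and the `synthesize_view` helper are assumptions.

```python
import numpy as np

def synthesize_view(color, depth_lowres, scale, baseline):
    """Warp a high-resolution color image toward a nearby viewpoint,
    using a low-resolution depth map upsampled by nearest neighbour;
    disparity is modelled as proportional to inverse depth."""
    h, w = color.shape[:2]
    depth = np.repeat(np.repeat(depth_lowres, scale, axis=0),
                      scale, axis=1)[:h, :w]
    disparity = (baseline / depth).astype(int)
    out = np.zeros_like(color)
    cols = np.arange(w)
    for y in range(h):
        dst = np.clip(cols + disparity[y], 0, w - 1)   # forward warp
        out[y, dst] = color[y, cols]
    return out

rng = np.random.default_rng(1)
color = rng.integers(0, 256, (8, 8, 3), dtype=np.uint8)  # high-res main view
depth = np.full((4, 4), 2.0)                             # low-res depth map
view = synthesize_view(color, depth, scale=2, baseline=4.0)
assert view.shape == color.shape
```

Varying `baseline` shifts the synthesized perspective between (or slightly beyond) the outrigger positions, which is the viewer-preference adjustment the abstract mentions; a real system would also fill the disocclusion holes this forward warp leaves.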
STEREOSCOPIC IMAGE SEQUENCE COMPRESSION USING MULTIRESOLUTION AND QUADTREE DECOMPOSITION BASED DISPARITY- AND MOTION-ADAPTIVE SEGMENTATION, 1996
Abstract
Stereoscopic image display offers a simple and compact means of portraying on 2D screens the relative depth information in a real world scene. In addition to serving as a disambiguating cue, the perception of depth considerably enhances the viewing experience. Typically, more than two views would have to be transmitted either to provide the correct perspective to each viewer in a multi-viewer scenario or to provide a single viewer with the feel of “look-around”. This results in a multi-fold increase in bandwidth over the existing monoscopic channel bandwidths. Achieving a significant reduction in the excess bandwidth needed for coding stereoscopic video, over the bandwidth required for independent coding of these multiple views, is the primary objective of this thesis. To this end, we present a framework for stereoscopic image sequence compression that brings together several computationally efficient algorithms in a unified fashion to address critical issues such as (1) tailoring the excess bandwidth to be commensurate with the demand for stereoscopic video, (2) compatible coding (in terms of quality and technology), (3) scalability of coding efficiency and computational complexity with multiple views, and (4) synthesis of intermediate views to provide motion parallax perception to the viewer.
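The quadtree decomposition named in this title can be sketched as follows: a block of the disparity (or motion) field is recursively split into quadrants until each leaf is homogeneous enough to be coded with a single value. This is an illustrative sketch, not the thesis's coder; the value-range homogeneity criterion and the `quadtree_leaves` helper are assumptions.

```python
import numpy as np

def quadtree_leaves(block, thresh, x=0, y=0):
    """Recursively split a square block until the value spread inside
    each leaf is at most `thresh`; returns leaves as (x, y, size)."""
    size = block.shape[0]
    if size == 1 or block.max() - block.min() <= thresh:
        return [(x, y, size)]
    h = size // 2
    return (quadtree_leaves(block[:h, :h], thresh, x, y)
            + quadtree_leaves(block[:h, h:], thresh, x + h, y)
            + quadtree_leaves(block[h:, :h], thresh, x, y + h)
            + quadtree_leaves(block[h:, h:], thresh, x + h, y + h))

# A disparity map that is flat except for one corner: only the
# inhomogeneous quadrant is subdivided further.
dmap = np.zeros((4, 4))
dmap[0, 0] = 10
leaves = quadtree_leaves(dmap, thresh=0)
assert len(leaves) == 7   # 4 pixels in the split quadrant + 3 whole quadrants
```

The adaptivity is the point: smooth regions of the disparity field collapse into a few large leaves, while depth discontinuities receive fine subdivision, which is how the segmentation keeps the excess bandwidth small.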