Results 1 - 10 of 123
3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes
- ACM Transactions on Graphics
, 2004
Abstract - Cited by 172 (7 self)
Three-dimensional TV is expected to be the next revolution in the history of television. We implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes. We developed a distributed, scalable architecture to manage the high computation and bandwidth demands. Our system consists of an array of cameras, clusters of network-connected PCs, and a multi-projector 3D display. Multiple video streams are individually encoded and sent over a broadband network to the display. The 3D display shows high-resolution (1024 × 768) stereoscopic color images for multiple viewpoints without special glasses. We implemented systems with rear-projection and front-projection lenticular screens. In this paper, we provide a detailed overview of our 3D TV system, including an examination of design choices and tradeoffs. We present the calibration and image alignment procedures that are necessary to achieve good image quality. We present qualitative results and some early user feedback. We believe this is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience.
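The multiview routing at the heart of such a display can be illustrated with a toy sketch (Python; `interleave_views` is a hypothetical name, and the real system uses lenticular optics and projector arrays rather than simple column interleaving):

```python
def interleave_views(views):
    """Column-interleave N per-view images for a multiview screen: output
    column x is taken from view (x mod N), so neighboring screen columns
    carry neighboring viewpoints. Images are equal-sized 2D lists."""
    n = len(views)
    height, width = len(views[0]), len(views[0][0])
    return [[views[x % n][y][x] for x in range(width)] for y in range(height)]
```

A lenticular sheet in front of such an interleaved image sends each column group toward a different viewing direction, which is what lets the display serve multiple viewpoints without glasses.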
TELE-IMMERSIVE ENVIRONMENTS
, 2007
Abstract - Cited by 71 (1 self)
Urbana, Illinois. ... representing the user interest in the application layer. The design of the management framework revolves around the concept of view-aware multi-stream coordination, which leverages the central role of view semantics in 3D free-viewpoint video systems. Second, in the stream differentiation layer we present the design of view-to-stream mapping, where a subset of relevant streams is selected based on the relative importance of each stream to the current view. Conventional streaming controllers focus on a fixed set of streams specified by the application; in our management framework, by contrast, the application layer specifies only the view information while the underlying controller dynamically determines the set of streams to be managed. Third, in the stream coordination layer we present two designs applicable in different situations. In the case of end-to-end 3DTI communication, an embedded learning-based controller provides bandwidth allocation for the relevant streams. In the case of multi-party 3DTI communication, we propose a novel ViewCast protocol to coordinate multi-stream content dissemination over an end-system overlay network. Finally, we embed 3DTI session management in the framework, which facilitates ...
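The view-to-stream mapping described above can be sketched roughly as ranking streams by how well each capture camera's axis aligns with the requested view direction (a simplification; the thesis's importance measure and controller are richer than this, and the function names are hypothetical):

```python
import math

def stream_importance(cam_dir, view_dir):
    """Cosine similarity between a stream's camera axis and the view axis:
    streams whose cameras face the way the viewer is looking score higher."""
    dot = sum(a * b for a, b in zip(cam_dir, view_dir))
    na = math.sqrt(sum(a * a for a in cam_dir))
    nb = math.sqrt(sum(b * b for b in view_dir))
    return dot / (na * nb)

def select_streams(cam_dirs, view_dir, k):
    """View-to-stream mapping sketch: keep only the k streams most relevant
    to the current view; everything else need not be transmitted."""
    ranked = sorted(range(len(cam_dirs)),
                    key=lambda i: stream_importance(cam_dirs[i], view_dir),
                    reverse=True)
    return sorted(ranked[:k])
```

This captures the key inversion the thesis argues for: the application names a view, and the controller, not the application, decides which streams to manage.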
Going Beyond the Display: A Surface Technology with an Electronically Switchable Diffuser
Abstract - Cited by 51 (3 self)
Figure 1: We present a new rear projection-vision surface technology that augments the typical interactions afforded by multi-touch and tangible tabletops with the ability to project and sense both through and beyond the display. In this example, an image is projected so it appears on the main surface (far left). A second image is projected through the display onto a sheet of projection film placed on the surface (middle left). This image is maintained on the film as it is lifted off the main surface (middle right). Finally, our technology allows both projections to appear simultaneously, one displayed on the surface and the other on the film above, with neither image contaminating the other (far right). We introduce a new type of interactive surface technology based on a switchable projection screen which can be made diffuse or clear under electronic control. The screen can be continuously toggled between these two states so quickly that the switching is imperceptible to the human eye. It is then possible to rear-project what is perceived as a stable image onto the display surface, when the screen is in fact transparent for half the time. The clear periods may be used ...
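The time multiplexing described above, diffuse half the time for display and clear half the time for through-screen projection and sensing, can be sketched as a slot schedule (a hypothetical helper; the actual switching rate and synchronization hardware are not specified here):

```python
def diffuser_schedule(slots):
    """Alternate the switchable screen between states fast enough that the
    rear-projected image looks continuous: even slots are diffuse
    (display), odd slots are clear (project or sense through the screen)."""
    return [("diffuse", "display") if i % 2 == 0 else ("clear", "sense")
            for i in range(slots)]
```

Because the two states strictly alternate, the display duty cycle is exactly 50%, matching the paper's observation that the screen is transparent for half the time.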
Embedding Imperceptible Patterns into Projected Images for Simultaneous Acquisition and Display
, 2004
Abstract - Cited by 51 (6 self)
We introduce a method to imperceptibly embed arbitrary binary patterns into ordinary color images displayed by unmodified off-the-shelf Digital Light Processing (DLP) projectors. The encoded images are visible only to cameras synchronized with the projectors and exposed for a short interval, while the original images appear only minimally degraded to the human eye. To achieve this goal, we analyze and exploit the micro-mirror modulation pattern used by the projection technology to generate intensity levels for each pixel and color channel. Our real-time embedding process maps the user’s original color image values to the nearest values whose camera-perceived intensities are the ones desired by the binary image to be embedded. The color differences caused by this mapping process are compensated by error-diffusion dithering. The non-intrusive nature of our novel approach allows simultaneous (immersive) display and acquisition under controlled lighting conditions, as defined on a pixel level by the binary patterns. We thereby introduce structured light techniques into human-inhabited mixed and augmented reality environments, where they were previously often too intrusive.
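A much-simplified 1-D sketch of the embedding idea, using intensity parity as a hypothetical stand-in for the camera-perceived DLP intensity classes and forward error diffusion in place of full 2-D dithering:

```python
def embed_bits(row, bits):
    """Snap each 0-255 intensity to the nearest level whose parity encodes
    the hidden bit (even -> 0, odd -> 1; a toy substitute for the paper's
    mirror-timing classes), diffusing the rounding error into the next
    pixel so the image looks unchanged on average."""
    out, err = [], 0.0
    for value, bit in zip(row, bits):
        target = value + err
        q = max(0, min(255, round(target)))
        if q % 2 != bit:  # wrong class: move to the nearest valid level
            candidates = [c for c in (q - 1, q + 1) if 0 <= c <= 255]
            q = min(candidates, key=lambda c: abs(target - c))
        err = target - q          # carry the residual forward
        out.append(q)
    return out
```

A synchronized camera would read the hidden bit back from each pixel's class, while the ±1 perturbations stay near the human visibility threshold.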
Camera-Based Calibration Techniques for Seamless Multiprojector Displays
- IEEE Trans. on Visualization and Computer Graphics
, 2005
Abstract - Cited by 38 (3 self)
Abstract — Multi-projector, large-scale displays are used in scientific visualization, virtual reality, and other visually intensive applications. In recent years, a number of camera-based computer vision techniques have been proposed to register the geometry and color of tiled projection-based displays. These automated techniques use cameras to “calibrate” display geometry and photometry, computing per-projector corrective warps and intensity corrections that are necessary to produce seamless imagery across projector mosaics. These techniques replace the traditional labor-intensive manual alignment and maintenance steps, making such displays cost-effective, flexible, and accessible. In this paper, we present a survey of the camera-based geometric and photometric registration techniques reported in the literature to date. We discuss several techniques that have been proposed and demonstrated, each addressing particular display configurations and modes of operation. We overview each of these approaches and discuss their advantages and disadvantages. We examine techniques that address registration on both planar (video walls) and arbitrary display surfaces, as well as photometric correction for different kinds of display surfaces. We conclude with a discussion of the remaining challenges and research opportunities for multi-projector displays.
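For planar screens, the per-projector corrective warps and intensity corrections surveyed above typically reduce to a homography per projector plus an edge-blend ramp in the overlap region; a minimal sketch (hypothetical helper names):

```python
def warp(H, x, y):
    """Apply a 3x3 corrective homography (row-major nested lists) to map a
    projector pixel into the common screen coordinate frame."""
    xn = H[0][0] * x + H[0][1] * y + H[0][2]
    yn = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xn / w, yn / w

def blend_weight(x, width, overlap):
    """Linear intensity ramp for edge blending: two adjacent projectors that
    share `overlap` pixels sum to full brightness across the seam."""
    if x < overlap:
        return x / overlap
    if x > width - overlap:
        return (width - x) / overlap
    return 1.0
```

The camera's role in the surveyed techniques is to estimate H and the photometric response for each projector automatically, replacing manual alignment.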
Seeing people in different light: Joint shape, motion and reflectance capture
- IEEE TVCG
Abstract - Cited by 37 (8 self)
Abstract—By means of passive optical motion capture, real people can be authentically animated and photo-realistically textured. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We describe a video-based modeling approach that captures human shape and motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying surface reflectance properties of clothes from multiview video footage. The resulting model description enables us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g., for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware. Index Terms—3D video, dynamic reflectometry, real-time rendering, relighting.
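As a toy illustration of recovering reflectance from multiple observations of the same surface point, a least-squares Lambertian albedo fit (the paper fits full spatially varying BRDF models from multiview video, not this simplification; names are hypothetical):

```python
def estimate_albedo(samples):
    """Each sample is (observed_intensity, n_dot_l), where n_dot_l is the
    cosine between surface normal and light direction. Least-squares fit
    of I = albedo * max(n.l, 0) over all lit observations."""
    num = sum(i * ndl for i, ndl in samples if ndl > 0)
    den = sum(ndl * ndl for _, ndl in samples if ndl > 0)
    return num / den if den else 0.0
```

With the albedo (or a richer fitted BRDF) known per point, the actor can be re-rendered under novel lighting, which is exactly the relighting application the abstract describes.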
Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras
- In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR)
, 2011
Abstract - Cited by 31 (1 self)
A virtual object (circuit board) is incorporated into a live 3D capture session and appropriately occludes real objects. This paper introduces a proof-of-concept telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect™ color-plus-depth cameras. Novel contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 100 million triangles/sec on a single PC and graphics board. Also presented is a Kinect-based markerless tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Our system is affordable and reproducible, offering the opportunity to easily deliver 3D telepresence beyond the researcher’s lab.
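The merging step rests on lifting each depth pixel into a shared world frame so that clouds from several cameras can be combined; a minimal pinhole-model sketch (hypothetical names, illustrative camera parameters):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: lift a depth pixel (u, v) with metric depth
    to a 3D point in the camera's own coordinate frame."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def to_world(p, R, t):
    """Rigid transform into the shared world frame (R row-major 3x3, t a
    3-vector) so points from several depth cameras land in one cloud."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
```

In the real system this lifting runs on the GPU for every pixel of every Kinect each frame, followed by the merging, hole-filling, and surface-generation stages the abstract lists.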
A Flexible Projector-Camera System for Multi-Planar Displays
- In Proceedings of Computer Vision and Pattern Recognition
, 2003
Abstract - Cited by 30 (4 self)
We present a novel multi-planar display system based on an uncalibrated projector-camera pair. Our system exploits the juxtaposition of planar surfaces in a room to create ad-hoc visualization and display capabilities. In an office setting, for example, a desk pushed against a wall provides two perpendicular surfaces that can simultaneously display elevation and plan views of an architectural model. In contrast to previous room-level projector-camera systems, our method is based on a flexible, minimalist calibration procedure which is tailored to the geometry of the multi-planar surface scenario. Our procedure makes it possible to quickly auto-calibrate the display with a minimum of effort on the part of the user. A number of display configurations can be created on any available planar surfaces using a single commodity projector and camera. The key to our calibration approach is an efficient technique for simultaneously localizing multiple planes and a robust planar metric rectification method which can tolerate a restricted camera field-of-view and requires no special calibration objects. We demonstrate the robustness of our calibration method using real and synthetic images and present several applications of our display system.
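Once the planes are localized, display content must be routed to whichever surface a given point lies on; a minimal point-to-plane assignment sketch (a hypothetical helper, not the paper's calibration procedure):

```python
def assign_plane(point, planes):
    """Assign a 3D point to the closest of several planes. Each plane is
    (n, d) with unit normal n and plane equation n.x + d = 0.
    Returns (plane index, signed distance to that plane)."""
    def signed_dist(i):
        n, d = planes[i]
        return sum(a * b for a, b in zip(n, point)) + d
    best = min(range(len(planes)), key=lambda i: abs(signed_dist(i)))
    return best, signed_dist(best)
```

For the desk-against-a-wall example, this is the step that decides whether a projected pixel belongs to the elevation view (wall) or the plan view (desk).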
Shield fields: modeling and capturing 3D occluders
- ACM TRANS. GRAPH
Abstract - Cited by 29 (9 self)
We describe a unified representation of occluders in light transport and photography using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. Our key theoretical result is that shield fields can be used to decouple the effects of occluders and incident illumination. We first describe the properties of shield fields in the frequency domain and briefly analyze the “forward” problem of efficiently computing cast shadows. Afterwards, we apply the shield field signal-processing framework to make several new observations regarding the “inverse” problem of reconstructing 3D occluders from cast shadows, extending previous work on shape-from-silhouette and visual hull methods. From this analysis we develop the first single-camera, single-shot approach to capture visual hulls without requiring moving or programmable illumination. We analyze several competing camera designs, ultimately leading to the development of a new large-format, mask-based light field camera that exploits optimal tiled-broadband codes for light-efficient shield field capture. We conclude by presenting a detailed experimental analysis of shield field capture and 3D occluder reconstruction.
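The core shield-field relation, outgoing light field equals incident light field times a per-ray attenuation function, and its inverse can be sketched in 2D (rows = position, columns = angle; the full formulation is 4D, and the names here are hypothetical):

```python
def attenuate(light, shield):
    """Forward model: outgoing light field = incident light field times the
    shield field's per-ray attenuation (elementwise product)."""
    return [[l * s for l, s in zip(lr, sr)] for lr, sr in zip(light, shield)]

def recover_shield(observed, incident, eps=1e-9):
    """Inverse problem sketch: divide out the incident illumination to
    decouple the occluder's attenuation from the lighting."""
    return [[o / i if i > eps else 0.0 for o, i in zip(orow, irow)]
            for orow, irow in zip(observed, incident)]
```

This decoupling is the paper's key theoretical result: because attenuation is multiplicative per ray, the occluder can be recovered from a single shot given knowledge of the incident illumination.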