
The accumulation buffer: hardware support for high-quality rendering (1990)

by P. Haeberli, K. Akeley
Venue: ACM SIGGRAPH Computer Graphics
Results 1 - 10 of 179

Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments

by Peter-pike Sloan, Jan Kautz, John Snyder - ACM Transactions on Graphics , 2002
"... We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of ..."
Abstract - Cited by 470 (28 self) - Add to MetaCart
We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of arbitrary, low-frequency incident lighting into transferred radiance which includes global effects like shadows and interreflections from the object onto itself. At run-time, these transfer functions are applied to actual incident lighting. Dynamic, local lighting is handled by sampling it close to the object every frame; the object can also be rigidly rotated with respect to the lighting and vice versa. Lighting and transfer functions are represented using low-order spherical harmonics. This avoids aliasing and evaluates efficiently on graphics hardware by reducing the shading integral to a dot product of 9 to 25 element vectors for diffuse receivers. Glossy objects are handled using matrices rather than vectors. We further introduce functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space. These allow soft shadows and caustics from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We demonstrate real-time global lighting effects with this approach.
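For diffuse receivers, the run-time evaluation described above reduces to a dot product between the spherical-harmonic projection of the incident lighting and the precomputed per-point transfer vector. The C sketch below illustrates only that step, assuming order-3 SH (9 coefficients); the function and array names are hypothetical placeholders, not the authors' code.

    /* Illustrative evaluation of diffuse precomputed radiance transfer:
     * exit radiance = dot(lighting SH coefficients, precomputed transfer vector).
     * Assumes 9 SH coefficients; the paper uses 9 to 25 for diffuse receivers.
     * light_sh and transfer_sh are placeholder names, not from the paper. */
    float prt_diffuse_radiance(const float light_sh[9], const float transfer_sh[9])
    {
        float radiance = 0.0f;
        for (int i = 0; i < 9; ++i)
            radiance += light_sh[i] * transfer_sh[i];
        return radiance;
    }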

Citation Context

...l methods for integrating over large-scale lighting environments [8], including Monte Carlo ray tracing [7][21][25], radiosity [6], or multi-pass rendering that sums over multiple point light sources [17][27][36], are impractical for real-time rendering. Real-time, realistic global illumination encounters three difficulties – it must model the complex, spatially-varying BRDFs of real materials (BRDF c...

Instant Radiosity

by Alexander Keller , 1997
"... We present a fundamental procedure for instant rendering from the radiance equation. Operating directly on the textured scene description, the very efficient and simple algorithm produces photorealistic images without any finite element kernel or solution discretization of the underlying integral eq ..."
Abstract - Cited by 232 (4 self) - Add to MetaCart
We present a fundamental procedure for instant rendering from the radiance equation. Operating directly on the textured scene description, the very efficient and simple algorithm produces photorealistic images without any finite element kernel or solution discretization of the underlying integral equation. Rendering rates of a few seconds are obtained by exploiting graphics hardware, the deterministic technique of the quasi-random walk for the solution of the global illumination problem, and the new method of jittered low discrepancy sampling.
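The core idea summarized above is to turn indirect illumination into a sum over virtual point lights (VPLs) deposited by a quasi-random walk, rendering one hardware-lit, shadowed pass per VPL and averaging the passes. The CPU sketch below shows only the summation at a single diffuse surface point; the Vpl struct is hypothetical and visibility tests are omitted, so it is a reading aid rather than the paper's algorithm.

    #include <math.h>

    /* Hypothetical VPL record: position and RGB flux assigned by the random walk. */
    typedef struct { float pos[3]; float flux[3]; } Vpl;

    /* Diffuse reflection at point p (normal n, albedo rho) from n_vpls virtual
     * point lights, ignoring visibility; the paper resolves visibility with one
     * hardware-shadowed pass per VPL and averages the resulting images. */
    void shade_with_vpls(const float p[3], const float n[3], const float rho[3],
                         const Vpl *vpls, int n_vpls, float out_rgb[3])
    {
        out_rgb[0] = out_rgb[1] = out_rgb[2] = 0.0f;
        for (int i = 0; i < n_vpls; ++i) {
            float d[3] = { vpls[i].pos[0] - p[0],
                           vpls[i].pos[1] - p[1],
                           vpls[i].pos[2] - p[2] };
            float r2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
            if (r2 <= 0.0f) continue;
            float cos_r = (d[0]*n[0] + d[1]*n[1] + d[2]*n[2]) / sqrtf(r2);
            if (cos_r <= 0.0f) continue;               /* VPL behind the surface */
            for (int c = 0; c < 3; ++c)                /* point-light falloff * cosine */
                out_rgb[c] += rho[c] * vpls[i].flux[c] * cos_r / r2;
        }
        for (int c = 0; c < 3; ++c)
            out_rgb[c] /= (float)n_vpls;               /* average the VPL contributions */
    }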

A non-photorealistic lighting model for automatic technical illustration

by Amy Gooch, Bruce Gooch, Peter Shirley, Elaine Cohen - SIGGRAPH , 1998
"... Phong-shaded 3D imagery does not provide geometric information of the same richness as human-drawn technical illustrations. A non-photorealistic lighting model is presented that attempts to narrow this gap. The model is based on practice in traditional technical illustration, where the lighting mode ..."
Abstract - Cited by 207 (12 self) - Add to MetaCart
Phong-shaded 3D imagery does not provide geometric information of the same richness as human-drawn technical illustrations. A non-photorealistic lighting model is presented that attempts to narrow this gap. The model is based on practice in traditional technical illustration, where the lighting model uses both luminance and changes in hue to indicate surface orientation, reserving extreme lights and darks for edge lines and highlights. The lighting model allows shading to occur only in mid-tones so that edge lines and highlights remain visually prominent. In addition, we show how this lighting model is modified when portraying models of metal objects. These illustration methods give a clearer picture of shape, structure, and material composition than traditional computer graphics methods.
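The lighting model summarized above blends a cool tone and a warm tone across the surface instead of varying luminance alone, keeping shading in the mid-tones. The C sketch below shows one way to write such a cool-to-warm blend; the tone colors, the alpha/beta weights, and the blend direction here are illustrative choices, so consult the paper for its exact equation and constants.

    /* Illustrative cool-to-warm tone shading in the spirit of the model above.
     * n and l are unit vectors (surface normal, direction to light); k_d is the
     * object's diffuse color. The blue/yellow tones and alpha/beta weights are
     * example values, not necessarily those used in the paper. */
    typedef struct { float r, g, b; } Color;

    Color tone_shade(const float n[3], const float l[3], Color k_d)
    {
        const float alpha = 0.2f, beta = 0.6f;           /* strength of object color */
        const Color blue   = { 0.0f, 0.0f, 0.4f };
        const Color yellow = { 0.4f, 0.4f, 0.0f };
        float ndotl = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
        float t = (1.0f + ndotl) * 0.5f;                 /* 1 facing the light, 0 facing away */
        Color cool = { blue.r + alpha*k_d.r,  blue.g + alpha*k_d.g,  blue.b + alpha*k_d.b };
        Color warm = { yellow.r + beta*k_d.r, yellow.g + beta*k_d.g, yellow.b + beta*k_d.b };
        Color out  = { t*warm.r + (1.0f - t)*cool.r,     /* warm toward the light, cool away */
                       t*warm.g + (1.0f - t)*cool.g,
                       t*warm.b + (1.0f - t)*cool.b };
        return out;
    }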

Citation Context

...This assumes the object color is set to white. We turn off the Phong highlight because the negative blue light causes jarring artifacts. Highlights could be added on systems with accumulation buffers [11]. This approximation is shown compared to traditional Phong shading and the exact model in Figure 11. Like Walter et al., we need different light colors for each object. We could avoid these artifacts...

The VolumePro Real-Time Ray-Casting System

by Hanspeter Pfister, Jan Hardenbergh, Jim Knittel, Hugh Lauer, Larry Seiler , 1999
"... This paper describes VolumePro, the world's first single-chip realtime volume rendering system for consumer PCs. VolumePro implements ray-casting with parallel slice-by-slice processing. Our discussion of the architecture focuses mainly on the rendering pipeline and the memory organization. Vol ..."
Abstract - Cited by 194 (10 self) - Add to MetaCart
This paper describes VolumePro, the world's first single-chip realtime volume rendering system for consumer PCs. VolumePro implements ray-casting with parallel slice-by-slice processing. Our discussion of the architecture focuses mainly on the rendering pipeline and the memory organization. VolumePro has hardware for gradient estimation, classification, and per-sample Phong illumination. The system does not perform any pre-processing and makes parameter adjustments and changes to the volume data immediately visible. We describe several advanced features of VolumePro, such as gradient magnitude modulation of opacity and illumination, supersampling, cropping and cut planes. The system renders 500 million interpolated, Phong illuminated, composited samples per second. This is sufficient to render volumes with up to 16 million voxels (e.g., 256³) at 30 frames per second. CR Categories: B.4.2 [Hardware]: Input/Output and Data Communications---Input/Output Devices--Image display; C.3 [Com...

Citation Context

...in any dimension), subvolumes, cropping and cut planes. We are not aware of any previous implementation of these features in special-purpose volume rendering hardware. 4.1 Supersampling Supersampling [8] improves the quality of the rendered image by sampling the volume data set at a higher frequency than the voxel spacing. In the case of supersampling in the x and y directions, this would result ...

Dynamically Reparameterized Light Fields

by Aaron Isaksen, Leonard McMillan, Steven J. Gortler , 1999
"... An exciting new area in computer graphics is the synthesis of novel images with photographic effect from an initial database of reference images. This is the primary theme of imagebased rendering algorithms. This research extends the light field and lumigraph image-based rendering methods and greatl ..."
Abstract - Cited by 187 (9 self) - Add to MetaCart
An exciting new area in computer graphics is the synthesis of novel images with photographic effect from an initial database of reference images. This is the primary theme of imagebased rendering algorithms. This research extends the light field and lumigraph image-based rendering methods and greatly extends their utility, especially in scenes with much depth variation. First, we have added the ability to vary the apparent focus within a light field using intuitive camera-like controls such as a variable aperture and focus ring. As with lumigraphs, we allow for more general and flexible focal surfaces than a typical focal plane. However, this parameterization works independently of scene geometry; we do not need to recover actual or approximate geometry of the scene for focusing. In addition, we present a method for using multiple focal surfaces in a single image rendering process.

Fast Calculation of Soft Shadow Textures Using Convolution

by Cyril Soler, François X. Sillion , 1998
"... The calculation of detailed shadows remains one of the most difficult challenges in computer graphics, especially in the case of extended (linear or area) light sources. This paper introduces a new tool for the calculation of shadows cast by extended light sources. Exact shadows are computed in some ..."
Abstract - Cited by 126 (8 self) - Add to MetaCart
The calculation of detailed shadows remains one of the most difficult challenges in computer graphics, especially in the case of extended (linear or area) light sources. This paper introduces a new tool for the calculation of shadows cast by extended light sources. Exact shadows are computed in some constrained configurations by using a convolution technique, yielding a fast and accurate solution. Approximate shadows can be computed for general configurations by applying the convolution to a representative "ideal" configuration. We analyze the various sources of approximation in the process and derive a hierarchical, error-driven algorithm for fast shadow calculation in arbitrary configurations using a hierarchy of object clusters. The convolution is performed on images rendered in an offscreen buffer and produces a shadow map used as a texture to modulate the unoccluded illumination. Light sources can have any 3D shape as well as arbitrary emission characteristics, while shadow maps can be applied to groups of objects at once. The method can be employed in a hierarchical radiosity system, or directly as a shadowing technique. We demonstrate results for various scenes, showing that soft shadows can be generated at interactive rates for dynamic environments.
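The key operation described above is a 2D convolution: an image of the blocker is convolved with a normalized image of the extended light source, and the result modulates the unoccluded illumination as a shadow texture. The C sketch below spells out that convolution on the CPU under simplifying assumptions (a single blocker coverage image, a source kernel normalized to sum to 1); the paper performs it on images rendered in an offscreen hardware buffer.

    #include <stddef.h>

    /* Illustrative soft-shadow (attenuation) map: convolve a blocker coverage
     * image with a normalized source image. attenuation[i] == 1 means fully lit.
     * Row-major layout and brute-force loops are for clarity only. */
    void soft_shadow_map(const float *blocker, size_t w, size_t h,
                         const float *source, size_t kw, size_t kh,
                         float *attenuation)
    {
        for (size_t y = 0; y < h; ++y)
            for (size_t x = 0; x < w; ++x) {
                float occlusion = 0.0f;
                for (size_t ky = 0; ky < kh; ++ky)
                    for (size_t kx = 0; kx < kw; ++kx) {
                        /* center the source kernel on (x, y) */
                        long sx = (long)x + (long)kx - (long)(kw / 2);
                        long sy = (long)y + (long)ky - (long)(kh / 2);
                        if (sx < 0 || sy < 0 || sx >= (long)w || sy >= (long)h)
                            continue;
                        occlusion += blocker[(size_t)sy * w + (size_t)sx]
                                   * source[ky * kw + kx];
                    }
                attenuation[y * w + x] = 1.0f - occlusion;
            }
    }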

The Information Mural: A technique for displaying and navigating large information spaces

by Dean F. Jerding, John T. Stasko - In Proceedings of the IEEE Visualization `95 Symposium on Information Visualization , 1995
"... Information visualizations must allow users to browse information spaces and focus quickly on items of interest. Being able to see some representation of the entire information space provides an initial gestalt overview and gives context to support browsing and search tasks. However, the limited num ..."
Abstract - Cited by 122 (4 self) - Add to MetaCart
Information visualizations must allow users to browse information spaces and focus quickly on items of interest. Being able to see some representation of the entire information space provides an initial gestalt overview and gives context to support browsing and search tasks. However, the limited number of pixels on the screen constrains the information bandwidth and makes it difficult to completely display large information spaces. The Information Mural is a two-dimensional, reduced representation of an entire information space that fits entirely within a display window or screen. The mural creates a miniature version of the information space using visual attributes such as grayscale shading, intensity, color, and pixel size, along with anti-aliased compression techniques. Information Murals can be used as stand-alone visualizations or in global navigational views. We have built several prototypes to demonstrate the use of Information Murals in visualization applications; subject matter for these views includes computer software, scientific data, text documents, and geographic information.

Citation Context

...e a Mural widget to implement a resizeable global overview. This feature emphasizes the value of the Mural widget versus an application using rendering hardware such as the OpenGL accumulation buffer [11] to do anti-aliasing of a scene as it is drawn. (Vz is a proprietary cross-platform visualization framework developed by Bell Laboratories, Naperville, IL.) Several other parameters of the Mural wi...

Interactive Multi-Pass Programmable Shading

by Mark S. Peercy, et al.
"... Programmable shading is a common technique for production animation, but interactive programmable shading is not yet widely available. We support interactive programmable shading on virtually any 3D graphics hardware using a scene graph library on top of OpenGL. We treat the OpenGL architecture as a ..."
Abstract - Cited by 103 (4 self) - Add to MetaCart
Programmable shading is a common technique for production animation, but interactive programmable shading is not yet widely available. We support interactive programmable shading on virtually any 3D graphics hardware using a scene graph library on top of OpenGL. We treat the OpenGL architecture as a general SIMD computer, and translate the high-level shading description into OpenGL rendering passes. While our system uses OpenGL, the techniques described are applicable to any retained mode interface with appropriate extension mechanisms and hardware API with provisions for recirculating data through the graphics pipeline. We present two demonstrations of the method. The first is a constrained shading language that runs on graphics hardware supporting OpenGL 1.2 with a subset of the ARB imaging extensions. We remove the shading language constraints by minimally extending OpenGL. The key extensions are color range (supporting extended range and precision data types) and pixel texture (using framebuffer values as indices into texture maps). Our second demonstration is a renderer supporting the RenderMan Interface and RenderMan Shading Language on a software implementation of this extended OpenGL. For both languages, our compiler technology can take advantage of extensions and performance characteristics unique to any particular graphics hardware.

Citation Context

...r also does not currently support high quality pixel antialiasing, motion blur, and depth of field. One could implement all of these through the accumulation buffer as has been demonstrated elsewhere [13]. 5 DISCUSSION We measured the performance of several of our ISL and RenderMan shaders (Table 1). The performance numbers for millions of pixels filled are conservative estimates since we counted all ...
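The accumulation-buffer effects mentioned in this context (antialiasing, motion blur, depth of field) all follow the same multi-pass pattern from the cited paper: render the scene several times with a small perturbation per pass and average the passes. A minimal legacy-OpenGL sketch of the antialiasing case follows; draw_scene and set_jittered_projection are hypothetical application callbacks, while glAccum with GL_ACCUM and GL_RETURN is the standard accumulation-buffer interface.

    #include <GL/gl.h>

    /* Accumulation-buffer antialiasing: average `passes` renderings, each with a
     * different sub-pixel jitter of the projection. Motion blur or depth of field
     * use the same loop, jittering time or the lens position instead. */
    void render_antialiased(int passes,
                            void (*set_jittered_projection)(int pass, int total),
                            void (*draw_scene)(void))
    {
        glClear(GL_ACCUM_BUFFER_BIT);
        for (int i = 0; i < passes; ++i) {
            set_jittered_projection(i, passes);        /* hypothetical jitter setup */
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            draw_scene();                              /* hypothetical scene callback */
            glAccum(GL_ACCUM, 1.0f / (float)passes);   /* add this pass, pre-weighted */
        }
        glAccum(GL_RETURN, 1.0f);                      /* write the average back */
    }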

Efficient Image-Based Methods for Rendering Soft Shadows

by Maneesh Agrawala, Ravi Ramamoorthi, Alan Heirich, Laurent Moll , 2000
"... We present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. Our methods are related to shadow maps and provide the associated benefits. The computation time and memory requirements for adding soft shadows to an image depend on ima ..."
Abstract - Cited by 87 (6 self) - Add to MetaCart
We present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. Our methods are related to shadow maps and provide the associated benefits. The computation time and memory requirements for adding soft shadows to an image depend on image size and the number of lights, not geometric scene complexity. We also show that because area light sources are localized in space, soft shadow computations are particularly well suited to image-based rendering techniques. Our first approach---layered attenuation maps--- achieves interactive rendering rates, but limits sampling flexibility, while our second method---coherence-based raytracing of depth images---is not interactive, but removes the limitations on sampling and yields high quality images at a fraction of the cost of conventional raytracers. Combining the two algorithms allows for rapid previewing followed by efficient high-quality rendering.

Pipeline Rendering: Interaction And Realism Through Hardware-Based Multi-Pass Rendering

by Paul Joseph Diefenbach , 1996
"... ..."
Abstract - Cited by 69 (1 self) - Add to MetaCart
Abstract not found