Results 11–20 of 37
Sample-based visibility for soft shadows using alias-free shadow maps
Proceedings of the Eurographics Symposium on Rendering, 2008
Cited by 5 (1 self)
This paper introduces an accurate real-time soft shadow algorithm that uses sample-based visibility. Initially, we present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to using depth peeling and one pass per layer. For closed objects, we also suppress the need for a bias. The method is extended to soft shadow sampling for an arbitrarily shaped area/volumetric light source using 128–1024 light samples per screen pixel. The alias-free shadow map guarantees that the visibility is accurately sampled per screen-space pixel, even for arbitrarily shaped (e.g. non-planar) surfaces or solid objects. Another contribution is a smooth coherent shading model to avoid common light leakage near shadow borders due to normal interpolation.
Real-time Rendering of Dynamic Objects in Dynamic, Low-frequency Lighting Environments
Proc. of Computer Animation and Social Agents '04, 2004
Cited by 2 (1 self)
This paper presents a precomputation-based method for real-time global illumination of dynamic objects. Each frame of animation is rendered using spherical harmonics lighting basis functions. The precomputed radiance transfer (PRT) associated with each object’s surface is unfolded to a rectangular light map. A sequence of light maps is compressed using a high dynamic range video compression technique, and decompressed for real-time rendering. During rendering, we fetch the light map corresponding to each frame, and compose a light map corresponding to any arbitrary, low-frequency lighting condition. The computed surface light map can be applied to the object using the texture mapping facility of a graphics pipeline. The primary contribution of this paper lies in its precomputation-based real-time global illumination rendering of dynamic objects. Spherical harmonics light maps (SHLM) are used to represent the precomputation results, and the animation can be viewed from arbitrary viewpoints and in arbitrary low-frequency environment lighting in real time. The consequence is an algorithm that is capable of high-quality rendering of animated characters in real time.
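The per-frame shading step this abstract describes reduces to a per-texel dot product between the SHLM transfer vector and the SH projection of the environment lighting. A minimal sketch of that composition step (the array layout and function name are our own, not the paper's):

```python
import numpy as np

def shade_lightmap(transfer, env_coeffs):
    """Compose a surface light map for a given low-frequency environment.

    transfer   : (H, W, K) precomputed radiance-transfer SH coefficients
                 per light-map texel (one frame's SHLM).
    env_coeffs : (K,) SH projection of the environment lighting.
    Returns an (H, W) scalar light map (one color channel).
    """
    # PRT shading: per-texel dot product of the transfer vector with
    # the lighting coefficients.
    return np.einsum('hwk,k->hw', transfer, env_coeffs)
```

Because the transfer is linear in the lighting, the same SHLM can be relit under any low-frequency environment at runtime.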
Efficient Shadows for Sampled Environment Maps
Cited by 2 (2 self)
This paper addresses the problem of efficiently calculating shadows from environment maps in the context of ray tracing. Since accurate rendering of shadows from environment maps requires hundreds of lights, the expensive computation is determining visibility from each pixel to each light direction. We show that coherence in both spatial and angular domains can be used to reduce the number of shadow rays that need to be traced. Specifically, we use a coarse-to-fine evaluation of the image, predicting visibility by reusing visibility calculations from 4 nearby pixels that have already been evaluated. This simple method allows us to explicitly mark regions of uncertainty in the prediction. By only tracing rays in these and neighboring directions, we are able to reduce the number of shadow rays traced by up to a factor of 20 while maintaining error rates below 0.01%. For many scenes, our algorithm can add shadowing from hundreds of lights at only twice the cost of rendering without shadows. Sample source code is available online.
Figure 1: A scene illuminated by a sampled environment map. The left image is rendered in POV-Ray using shadow-ray tracing to determine light-source visibility for the 400 lights in the scene, as sampled from the environment according to [Agarwal et al. 2003] (60 million shadow rays). The center image uses our Coherence-Based Sampling to render the same scene with a 90% reduction in shadow rays traced (6 million shadow rays, approximately equal quality). The right image is again traced in POV-Ray, but with a reduced sampling of the environment map (50 lights, again using [Agarwal et al. 2003]) to approximate the number of shadow rays traced using our method (7 million shadow rays, approximately equal work). Note that the lower sampling of the environment map in the right image does not faithfully reproduce the soft shadows.
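The prediction step in the abstract — reuse visibility from four already-evaluated neighbors and mark disagreeing lights as uncertain — can be sketched as follows (a simplified stand-in for the paper's method; names and array layout are ours):

```python
import numpy as np

def predict_visibility(neigh_vis):
    """Predict per-light visibility at a pixel from 4 shaded neighbors.

    neigh_vis : (4, L) boolean array; visibility of each of L lights
                at the 4 neighboring pixels.
    Returns (predicted, uncertain): two (L,) boolean arrays.  Lights on
    which all neighbors agree inherit the agreed value; disagreeing
    lights are marked uncertain and must be resolved by tracing shadow
    rays (plus their angular neighbors, per the paper).
    """
    agree_visible = neigh_vis.all(axis=0)      # every neighbor sees it
    agree_blocked = (~neigh_vis).all(axis=0)   # no neighbor sees it
    uncertain = ~(agree_visible | agree_blocked)
    predicted = agree_visible
    return predicted, uncertain
```

Only the uncertain subset costs shadow rays, which is where the reported factor-of-20 reduction comes from when visibility is spatially coherent.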
SURE-based optimization for adaptive sampling and reconstruction
ACM Transactions on Graphics (SIGGRAPH Asia), 2011
Cited by 2 (0 self)
Figure 1: Comparisons between greedy error minimization (GEM) [Rousselle et al. 2011] and our SURE-based filtering. With SURE, we are able to use kernels (cross bilateral filters in this case) that are more effective than GEM’s isotropic Gaussians. Thus, our approach better adapts to anisotropic features (such as the motion blur pattern due to the motion of the airplane) and preserves scene details (such as the textures on the floor and curtains). The kernels of both methods are visualized for comparison.
We apply Stein’s Unbiased Risk Estimator (SURE) to adaptive sampling and reconstruction to reduce noise in Monte Carlo rendering. SURE is a general unbiased estimator for mean squared error (MSE) in statistics. With SURE, we are able to estimate error for an arbitrary reconstruction kernel, enabling us to use more effective kernels rather than being restricted to the symmetric ones used in previous work. It also allows us to allocate more samples to areas with higher estimated MSE. Adaptive sampling and reconstruction can therefore be processed within an optimization framework. We also propose an efficient and memory-friendly approach to reduce the impact of noisy geometry features where there is depth of field or motion blur. Experiments show that our method produces images with less noise and crisper details than previous methods.
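The per-pixel SURE estimate takes the form SURE_i = (F(y)_i − y_i)² − σ_i² + 2σ_i² ∂F_i/∂y_i, where y is the noisy image and F the reconstruction. For a linear filter the divergence term ∂F_i/∂y_i is simply the kernel's center weight, which makes a small sketch possible (the paper optimizes over cross bilateral filters; this box-filter version and its function name are our own simplification):

```python
import numpy as np

def sure_box_filter(y, sigma2, r=1):
    """Per-pixel SURE estimate of the MSE of a (2r+1)^2 box filter.

    y      : (H, W) noisy Monte Carlo image (one channel)
    sigma2 : (H, W) per-pixel variance of the noise in y
    For a linear filter, dF_i/dy_i is the kernel's center weight,
    here 1/(2r+1)^2 (ignoring border effects in this sketch).
    """
    k = 2 * r + 1
    pad = np.pad(y, r, mode='edge')
    # box filter via summed shifts (clear rather than fast)
    F = sum(pad[i:i + y.shape[0], j:j + y.shape[1]]
            for i in range(k) for j in range(k)) / k ** 2
    dFdy = 1.0 / k ** 2
    return (F - y) ** 2 - sigma2 + 2.0 * sigma2 * dFdy
```

Individual SURE values can be negative; only their expectation equals the MSE, which is why the paper averages the estimate over a neighborhood before using it to drive sampling.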
Real-Time Rendering of Textures with Feature Curves
Cited by 1 (0 self)
The standard bilinear interpolation on normal maps results in visual artifacts along sharp features, which are common for surfaces with creases, wrinkles, and dents. In many cases, spatially varying features, like the normals near discontinuity curves, are best represented as functions of the distance to the curve and the position along the curve. For high-quality interactive rendering at arbitrary magnifications, one needs to interpolate the distance field preserving discontinuity curves exactly. We present a real-time, GPU-based method for distance function and distance gradient interpolation which preserves discontinuity feature curves. The feature curves are represented by a set of quadratic Bézier curves, with minimal restrictions on their intersections. We demonstrate how this technique can be used for real-time rendering of complex feature patterns and blending normal maps with procedurally defined profiles near normal discontinuities.
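The distance field in question is the distance to a set of quadratic Bézier segments. The paper's GPU interpolation scheme is more involved, but a brute-force reference for one segment makes the quantity concrete (function names and the dense-sampling approximation are ours):

```python
import numpy as np

def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter values t."""
    t = np.asarray(t)[..., None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def distance_to_curve(q, p0, p1, p2, n=256):
    """Approximate the unsigned distance from point q to the segment by
    dense parameter sampling (a closed-form or Newton refinement would
    be used in a real-time implementation)."""
    pts = bezier2(np.asarray(p0, float), np.asarray(p1, float),
                  np.asarray(p2, float), np.linspace(0.0, 1.0, n))
    return np.sqrt(((pts - q) ** 2).sum(axis=-1)).min()
```

Interpolating this distance (and its gradient) per texel, rather than the normals themselves, is what lets the features stay sharp at arbitrary magnification.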
Efficient Physically-Based Shadow Algorithms
2006
Cited by 1 (0 self)
This research focuses on developing efficient algorithms for computing shadows in computer-generated images. A distinctive feature of the shadow algorithms presented in this thesis is that they produce correct, physically-based results, instead of giving approximations whose quality is often hard to ensure or evaluate.
Light sources that are modeled as points without any spatial extent produce hard shadows with sharp boundaries. Shadow mapping is a traditional method for rendering such shadows. A shadow map is a depth buffer computed from the scene, using a point light source as the viewpoint. The finite resolution of the shadow map requires that its contents are resampled when determining the shadows on visible surfaces. This causes various artifacts such as incorrect self-shadowing and jagged shadow boundaries. A novel method is presented that avoids the resampling step, and provides exact shadows for every point visible in the image.
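The alias-free idea above — store the exact light-space positions of the points visible in the image instead of a regular depth grid, then test occluders against those positions — can be sketched as a CPU reference (this is our own simplified illustration, not the thesis implementation):

```python
import numpy as np

def alias_free_shadow_test(samples, tris):
    """Exact hard-shadow test against irregular light-space samples.

    samples : (N, 3) light-space positions (x, y, depth) of the points
              visible in the camera image.
    tris    : (T, 3, 3) occluder triangles in the same light space.
    Returns (N,) boolean array, True where the sample is in shadow.
    """
    in_shadow = np.zeros(len(samples), dtype=bool)
    for a, b, c in tris:
        # 2D barycentric coordinates of every sample w.r.t. (a, b, c)
        v0, v1 = (b - a)[:2], (c - a)[:2]
        v2 = samples[:, :2] - a[:2]
        den = v0[0] * v1[1] - v1[0] * v0[1]
        if abs(den) < 1e-12:
            continue  # degenerate triangle seen edge-on
        u = (v2[:, 0] * v1[1] - v1[0] * v2[:, 1]) / den
        v = (v0[0] * v2[:, 1] - v2[:, 0] * v0[1]) / den
        covered = (u >= 0) & (v >= 0) & (u + v <= 1)
        # occluder depth interpolated at the exact sample position
        z = a[2] + u * (b[2] - a[2]) + v * (c[2] - a[2])
        in_shadow |= covered & (z < samples[:, 2])
    return in_shadow
```

Because each visible point is tested at its exact projected position, there is no resampling and hence no shadow-map aliasing; the GPU version organizes the irregular samples into a spatial structure so occluders can be rasterized against them efficiently.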
The shadow volume algorithm is another commonly used algorithm for real-time rendering of hard shadows. This algorithm gives exact results and does not suffer from any resampling problems, but it tends to consume a lot of fill rate, which leads to performance problems. This thesis presents a new technique for locally choosing between two previous shadow volume algorithms with different performance characteristics. A simple criterion for making the local choices is shown to yield better performance than using either of the algorithms alone.
Light sources with nonzero spatial extent give rise to soft shadows with smooth boundaries. A novel method is presented that transposes the classical processing order for soft shadow computation in offline rendering. Instead of casting shadow rays, the algorithm first conceptually collects every ray that would need to be cast, and then processes the shadow-casting primitives one by one, hierarchically finding the rays that are blocked.
Another new soft shadow algorithm takes a different point of view on computing the shadows. Only the silhouettes of the shadow casters are used for determining the shadows, and an unintrusive execution model makes the algorithm practical for production use in offline rendering.
The proposed techniques accelerate the computation of physically-based shadows in real-time and offline rendering. These improvements make it possible to use correct, physically-based shadows in a broad range of scenes that previous methods cannot handle efficiently enough.
This thesis consists of an overview and of the following 5 publications:
1. T. Aila and S. Laine. Alias-Free Shadow Maps. In Rendering Techniques 2004 (Eurographics Symposium on Rendering), pages 161–166. Eurographics Association, 2004.
2. S. Laine and T. Aila. Hierarchical Penumbra Casting. Computer Graphics Forum, 24(3):313–322, 2005.
3. S. Laine. Split-Plane Shadow Volumes. In Graphics Hardware 2005 (Eurographics Symposium Proceedings), pages 23–32. Eurographics Association, 2005.
4. S. Laine, T. Aila, U. Assarsson, J. Lehtinen and T. Akenine-Möller. Soft Shadow Volumes for Ray Tracing. ACM Transactions on Graphics, 24(3):1156–1165, 2005.
5. J. Lehtinen, S. Laine and T. Aila. An Improved Physically-Based Soft Shadow Volume Algorithm. Computer Graphics Forum, 25(3):303–312, 2006.
Adaptive records for irradiance caching
Comput. Graph. Forum, 2011
Cited by 1 (1 self)
Irradiance Caching is one of the most widely used algorithms to speed up global illumination. In this paper, we propose an algorithm based on the Irradiance Caching scheme that allows us (1) to adjust the density of cached records according to illumination changes and (2) to efficiently render the high-frequency illumination changes. To achieve this, a new record footprint is presented. Although the original method uses records having circular footprints depending only on geometrical features, our record footprints have a more complex shape which accounts for both geometry and irradiance variations. Irradiance values are computed using a classical Monte Carlo ray tracing method that simplifies the determination of nearby objects and the precomputation of the shape of the influence zone of the current record. By gathering irradiance due to all the incident rays, illumination changes are evaluated to adjust the records’ footprints. As a consequence, the record footprints are smaller where illumination gradients are high. With this technique, the record density depends on the irradiance variations. Strong variations of irradiance (due to direct contributions for example) can be handled and evaluated accurately. Caching direct illumination is of high importance, especially in the case of scenes having many light sources with complex geometry as well as surfaces exposed to daylight. Recomputing direct illumination for the whole image can be very time-consuming, especially for walkthrough animation rendering or for high-resolution
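The effect described above — record footprints that shrink where irradiance varies quickly — can be illustrated with the classic Ward-style cache weight plus a gradient-dependent radius (a simplified stand-in for the paper's full footprint shape; the field names and the `grad_scale` knob are our own):

```python
import numpy as np

def record_weight(p, n, rec, grad_scale=1.0):
    """Ward-style irradiance-cache weight with a gradient-shrunk radius.

    rec : dict with record position 'p', normal 'n', harmonic mean
          distance 'R' to nearby geometry, and local irradiance
          gradient magnitude 'g'.
    Returns a weight; the record contributes where the weight > 0.
    """
    # Shrink the geometric radius where the irradiance gradient is
    # large, so record density follows illumination changes.
    R = rec['R'] / (1.0 + grad_scale * rec['g'])
    d = np.linalg.norm(p - rec['p'])
    err = d / R + np.sqrt(max(0.0, 1.0 - float(np.dot(n, rec['n']))))
    return 1.0 / err - 1.0 if err > 0 else np.inf
```

With `g = 0` this reduces to the original geometry-only footprint; increasing `g` tightens the zone of influence, forcing new records to be created in high-gradient regions.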
Efficient Shadows from Sampled Environment Maps
This paper addresses the problem of efficiently calculating shadows from environment maps. Since accurate rendering of shadows from environment maps requires hundreds of lights, the expensive computation is determining visibility from each pixel to each light direction, such as by ray tracing. We show that coherence in both spatial and angular domains can be used to reduce the number of shadow rays that need to be traced. Specifically, we use a coarse-to-fine evaluation of the image, predicting visibility by reusing visibility calculations from four nearby pixels that have already been evaluated. This simple method allows us to explicitly mark regions of uncertainty in the prediction. By only tracing rays in these and neighboring directions, we are able to reduce the number of shadow rays traced by up to a factor of 20 while maintaining error rates below 0.01%. For many scenes, our algorithm can add shadowing from hundreds of lights at twice the cost of rendering without shadows.
Geometry-Aware Framebuffer Level of Detail
This paper introduces a framebuffer level of detail algorithm for controlling the pixel workload in an interactive rendering application. Our basic strategy is to evaluate the shading in a low resolution buffer and, in a second rendering pass, resample this buffer at the desired screen resolution. The size of the lower resolution buffer provides a tradeoff between rendering time and the level of detail in the final shading. In order to reduce approximation error we use a feature-preserving reconstruction technique that more faithfully approximates the shading near depth and normal discontinuities. We also demonstrate how intermediate components of the shading can be selectively resized to provide finer-grained control over resource allocation. Finally, we introduce a simple control mechanism that continuously adjusts the amount of resizing necessary to maintain a target frame rate. These techniques do not require any preprocessing, are straightforward to implement on modern GPUs, and are shown to provide significant performance gains for several pixel-bound scenes. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation
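The feature-preserving resampling step can be sketched as a depth-weighted bilinear lookup into the low-resolution shading buffer, so shading does not bleed across depth discontinuities (a simplification of the reconstruction described above; the paper also weights by normals, and the function name is ours):

```python
import numpy as np

def geometry_aware_upsample(shade_lo, depth_lo, depth_hi, sigma_z=0.1):
    """Resample a low-resolution shading buffer to full resolution.

    Each high-res pixel blends its four nearest low-res texels with
    bilinear weights multiplied by a Gaussian depth-similarity weight.
    shade_lo, depth_lo : (h, w) low-resolution buffers
    depth_hi           : (H, W) full-resolution depth buffer
    """
    H, W = depth_hi.shape
    h, w = shade_lo.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            # continuous position of this pixel in the low-res buffer
            fy = (y + 0.5) * h / H - 0.5
            fx = (x + 0.5) * w / W - 0.5
            y0, x0 = int(np.floor(fy)), int(np.floor(fx))
            acc = wsum = 0.0
            for dy in (0, 1):
                for dx in (0, 1):
                    yy = min(max(y0 + dy, 0), h - 1)
                    xx = min(max(x0 + dx, 0), w - 1)
                    wb = (1 - abs(fy - (y0 + dy))) * (1 - abs(fx - (x0 + dx)))
                    wz = np.exp(-(depth_hi[y, x] - depth_lo[yy, xx]) ** 2
                                / (2 * sigma_z ** 2))
                    acc += wb * wz * shade_lo[yy, xx]
                    wsum += wb * wz
            # fall back to the nearest texel if all weights vanish
            out[y, x] = acc / wsum if wsum > 1e-12 else \
                shade_lo[min(max(y0, 0), h - 1), min(max(x0, 0), w - 1)]
    return out
```

Plain bilinear upsampling would average across the silhouette; the depth weight instead keeps each high-res pixel on its own surface.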
Tensor Clustering for Rendering Many-Light Animations
Images from animations rendered with our algorithm, with environment illumination and multiple-bounce indirect lighting converted into 65,536 lights. By sparsely sampling the light-surface interactions and amortizing over time, we can render each frame in a few seconds, using only 300–500 GPU shadow map evaluations per frame.
Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering of animation sequences becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights to efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames, while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination and arbitrary shading and show that a surprisingly small number of representative lights is sufficient for high quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animation.
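A toy version of the sample-and-cluster idea: given a few sparsely sampled rows of the transfer tensor, cluster the light columns by direction and keep one rescaled representative per cluster. This is our own single-frame simplification; the paper clusters across frames and explicitly minimizes temporal flicker.

```python
import numpy as np

def cluster_lights(reduced_cols, n_clusters, iters=8):
    """Pick representative lights from sampled rows of the transfer tensor.

    reduced_cols : (R, L) array; contributions of L point lights to R
                   sparsely sampled receiver pixels.
    Clusters lights by the direction of their reduced columns, then keeps
    the strongest light of each cluster, rescaled so the cluster's total
    contribution is preserved.  Returns a list of (light_index, scale).
    """
    norms = np.linalg.norm(reduced_cols, axis=0)
    idx = np.flatnonzero(norms > 0)
    nk = norms[idx]
    dirs = reduced_cols[:, idx] / nk
    # seed cluster centers with the strongest lights, then run a few
    # rounds of cosine-similarity k-means
    order = np.argsort(nk)[::-1]
    centers = dirs[:, order[:n_clusters]].copy()
    for _ in range(iters):
        assign = np.argmax(centers.T @ dirs, axis=0)
        for c in range(n_clusters):
            m = assign == c
            if m.any():
                mean = (dirs[:, m] * nk[m]).sum(axis=1)
                length = np.linalg.norm(mean)
                if length > 0:
                    centers[:, c] = mean / length
    reps = []
    for c in range(n_clusters):
        m = assign == c
        if m.any():
            # strongest light stands in for the cluster, rescaled so the
            # cluster's total contribution is preserved
            j = np.flatnonzero(m)[np.argmax(nk[m])]
            reps.append((int(idx[j]), float(nk[m].sum() / nk[j])))
    return reps
```

Shading every pixel with only the returned representatives (each scaled by its cluster weight) approximates the full many-light sum at a small fraction of the shadow-map cost.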