Microrendering for scalable, parallel final gathering
 ACM Trans. Graph. (Proc. SIGGRAPH Asia)
Abstract

Cited by 21 (5 self)
Recent approaches to global illumination for dynamic scenes achieve interactive frame rates by using coarse approximations to geometry, lighting, or both, which limits scene complexity and rendering quality. High-quality global illumination renderings of complex scenes are still limited to methods based on ray tracing. While conceptually simple, these techniques are computationally expensive. We present an efficient and scalable method to compute global illumination solutions at interactive rates for complex and dynamic scenes. Our method is based on parallel final gathering running entirely on the GPU. At each final gathering location we perform microrendering: we traverse and rasterize a hierarchical point-based scene representation into an importance-warped microbuffer, which allows for BRDF importance sampling. The final reflected radiance is computed at each gathering location using the microbuffers and is then stored in image space. We can trade quality for speed by reducing the sampling rate of the gathering locations in conjunction with bilateral upsampling. We demonstrate the applicability of our method to interactive global illumination, the simulation of multiple indirect bounces, and to final gathering from photon maps.
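The importance-warped microbuffer effectively performs BRDF importance sampling at each gathering location. For the diffuse case, the warp corresponds to cosine-weighted hemisphere sampling, sketched below in plain Python (an illustration, not the authors' GPU implementation; the function names are ours):

```python
import math
import random

def cosine_sample_hemisphere(u1, u2):
    """Map two uniform samples in [0, 1) to a cosine-weighted
    direction about the local z axis (the surface normal)."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def reflected_radiance(incoming, albedo=1.0, n_samples=4096, seed=0):
    """Monte Carlo estimate of outgoing radiance for a diffuse BRDF
    (albedo / pi).  With cosine-weighted samples the cosine and pdf
    terms cancel, leaving albedo times the mean incoming radiance."""
    rng = random.Random(seed)
    total = sum(incoming(cosine_sample_hemisphere(rng.random(), rng.random()))
                for _ in range(n_samples))
    return albedo * total / n_samples
```

Under a constant environment of radiance L the estimator returns albedo · L exactly, which makes the cancellation easy to sanity-check.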
The State of the Art in Interactive Global Illumination
 Computer Graphics Forum
Abstract

Cited by 8 (2 self)
The interaction of light and matter in the world surrounding us is of striking complexity and beauty. Since the very beginning of computer graphics, adequate modeling of these processes and their efficient computation have been an intensively studied research topic, and are still not a solved problem. The inherent complexity stems from the underlying physical processes as well as from the global nature of the interactions that let light travel within a scene. This article reviews the state of the art in interactive global illumination computation, that is, methods that generate an image of a virtual scene in less than one second with a solution to the light transport that is as exact as possible, or at least plausible. Additionally, the theoretical background and attempts to classify the broad field of methods are described. The strengths and weaknesses of different approaches, when applied to the different visual phenomena arising from light interaction, are compared and discussed. Finally, the article concludes by highlighting design patterns for interactive global illumination and a list of open problems.
Real-time Indirect Illumination with Clustered Visibility
Abstract

Cited by 8 (1 self)
Figure 1: One-bounce diffuse global illumination rendered at 800×800 pixels for a scene with dynamic geometry (17k faces) and dynamic lighting at 19.7 fps. Our method uses soft shadows from 30 area lights to efficiently compute the indirect visibility. Visibility computation is often the bottleneck when rendering indirect illumination. However, recent methods based on instant radiosity have demonstrated that accurate visibility is not required for indirect illumination. To exploit this insight, we cluster a large number of virtual point lights – which represent the indirect illumination when using instant radiosity – into a small number of virtual area lights. This allows us to compute visibility using recent real-time soft shadow algorithms. Such approximate and fractional from-area visibility is faster to compute and avoids banding when compared to exact binary from-point visibility. Our results show that the perceptual error of this approximation is negligible and that we achieve real-time frame rates for large and dynamic scenes.
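The clustering step can be illustrated with a simple intensity-weighted k-means over VPL positions (a hypothetical sketch; the paper's actual clustering may differ). Each resulting cluster becomes one virtual area light, and total energy is conserved by construction:

```python
import random

def assign(vpls, centers):
    """Partition VPLs by nearest cluster center (squared distance)."""
    buckets = [[] for _ in centers]
    for p, w in vpls:
        i = min(range(len(centers)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        buckets[i].append((p, w))
    return buckets

def cluster_vpls(vpls, k, iters=20, seed=0):
    """vpls: list of ((x, y, z), intensity).  Returns k virtual area
    lights as (intensity-weighted centroid, total intensity); energy is
    conserved because every VPL lands in exactly one cluster."""
    rng = random.Random(seed)
    centers = [p for p, _ in rng.sample(vpls, k)]
    for _ in range(iters):
        for c, bucket in enumerate(assign(vpls, centers)):
            if bucket:
                tw = sum(w for _, w in bucket)
                centers[c] = tuple(sum(p[d] * w for p, w in bucket) / tw
                                   for d in range(3))
    return [(centers[c], sum(w for _, w in bucket))
            for c, bucket in enumerate(assign(vpls, centers))]
```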
Stereo light probe
 Computer Graphics Forum
Abstract

Cited by 6 (2 self)
Figure 1: (Left) Three models of Michelangelo’s David illuminated with a spatially-varying lighting environment captured with the proposed Stereo Light Probe device. (Right) The same scene with a lighting environment captured with the classical approach of a single reflective ball. In this paper we present a practical, simple, and robust method to acquire the spatially-varying illumination of a real-world scene. The basic idea of the proposed method is to acquire the radiance distribution of the scene using high-dynamic-range images of two reflective balls. The use of two light probes instead of a single one allows us to estimate not only the direction and intensity of the light sources, but also their actual position in space. To robustly achieve this goal, we first rectify the two input spherical images; then, using a region-based stereo matching algorithm, we establish correspondences and compute the position of each light. The radiance distribution so obtained can be used for augmented reality applications, photorealistic rendering, and accurate estimation of reflectance properties. The accuracy and effectiveness of the method have been tested by measuring the computed light positions and by rendering a synthetic version of a real object in the same scene. A comparison with the standard method that uses a simple spherical lighting environment is also shown.
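Once a stereo correspondence gives the light's direction from each probe, its 3D position follows from triangulating the two rays. A minimal sketch, assuming known probe positions and directions (the helper names are ours, not the paper's):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t1*d1 and
    o2 + t2*d2 (directions need not be unit length).  Returns None
    for parallel rays, where no unique closest point exists."""
    r = tuple(x - y for x, y in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None
    # Parameters minimizing |o1 + t1*d1 - (o2 + t2*d2)|^2.
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(o + t1 * x for o, x in zip(o1, d1))
    p2 = tuple(o + t2 * x for o, x in zip(o2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(p1, p2))
```

With noise-free directions the two rays intersect and the midpoint is the exact light position; with noisy input it is the least-squares compromise.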
Interactive Global Illumination Based on Coherent Surface Shadow Maps
Abstract

Cited by 6 (2 self)
Figure 1: We demonstrate n-bounce diffuse global illumination with a final glossy bounce, including high-frequency surface attributes such as bump maps, where objects can be moved at interactive rates. Left and middle: Per-pixel lighting with n = 2 at 640×480. Right: High-frequency normal maps with n = 1 at 1024×768. The frame rates are 1.4 fps, 1.3 fps, and 0.4 fps, respectively. Interactive rendering of global illumination effects is a challenging problem. While precomputed radiance transfer (PRT) is able to render such effects in real time, the geometry is generally assumed to be static. This work proposes to replace the precomputed lighting response used in PRT by precomputed depth. Precomputing depth has the same cost as precomputing visibility, but allows visibility tests for moving objects at runtime using simple shadow mapping. For this purpose, a compression scheme for a large number of coherent surface shadow maps (CSSMs) covering the entire scene surface is developed. CSSMs allow visibility tests between all surface points and all points in the scene. We demonstrate the effectiveness of CSSM-based visibility using a novel combination of the lightcuts algorithm and hierarchical radiosity, which can be efficiently implemented on the GPU. We demonstrate interactive n-bounce diffuse global illumination with a final glossy bounce and many high-frequency effects: general BRDFs, texture and normal maps, and local or distant lighting of arbitrary shape and distribution – all evaluated per-pixel. Furthermore, all parameters can vary freely over time – the only requirement is rigid geometry.
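The runtime visibility test that precomputed depth enables is ordinary shadow mapping: look up the receiver's texel in the depth map and compare depths with a small bias. A toy orthographic version (hypothetical code illustrating the test itself, not the CSSM compression), with the light looking down the -z axis:

```python
def build_shadow_map(occluders, res, extent):
    """Orthographic depth map over [0, extent)^2 in x, y, light at
    z = +infinity looking down -z.  Each occluder point splats into
    one texel, keeping the depth nearest the light (largest z)."""
    depth = [[float('-inf')] * res for _ in range(res)]
    for x, y, z in occluders:
        i = min(res - 1, max(0, int(x / extent * res)))
        j = min(res - 1, max(0, int(y / extent * res)))
        if z > depth[j][i]:
            depth[j][i] = z
    return depth

def visible(depth, res, extent, p, bias=1e-3):
    """True if p sees the light: nothing stored above it in its texel.
    The bias avoids self-shadowing from quantized depths."""
    x, y, z = p
    i = min(res - 1, max(0, int(x / extent * res)))
    j = min(res - 1, max(0, int(y / extent * res)))
    return z >= depth[j][i] - bias
```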
5D Covariance Tracing for Efficient Defocus and Motion Blur
, 2013
Abstract

Cited by 4 (3 self)
The rendering of effects such as motion blur and depth of field requires costly 5D integrals. We accelerate their computation through adaptive sampling and reconstruction based on the prediction of the anisotropy and bandwidth of the integrand. For this, we develop a new frequency analysis of the 5D temporal light field, and show that first-order motion can be handled through simple changes of coordinates in 5D. We further introduce a compact representation of the spectrum using the covariance matrix and Gaussian approximations. We derive update equations for the 5 × 5 covariance matrices for each atomic light transport event, such as transport, occlusion, BRDF, texture, lens, and motion. The focus on atomic operations makes our work general and removes the need for special-case formulas. We present a new rendering algorithm that computes 5D covariance matrices on the image plane by tracing paths through the scene, focusing on the single-bounce case. This allows us to reduce sampling rates when appropriate and perform reconstruction of images with complex depth-of-field and motion-blur effects.
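The update equations share one pattern: a linear change of coordinates T transforms a covariance matrix as C' = T C Tᵀ. The sketch below shows this generic rule for paraxial free-space transport, where the spatial coordinates are sheared by the angular ones; it is an illustration only, as the paper derives its specific per-event matrices for the spectrum in the frequency domain:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def transport_matrix(d):
    """Paraxial free-space travel by distance d over coordinates
    (x, y, theta, phi, t): the spatial coordinates pick up d times
    the angular ones (x += d*theta, y += d*phi)."""
    T = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
    T[0][2] = d
    T[1][3] = d
    return T

def propagate(C, T):
    """Covariance of a linearly transformed signal: C' = T C T^T."""
    return matmul(matmul(T, C), transpose(T))
```

Starting from an axis-aligned covariance, transport introduces exactly the space–angle correlation (the off-diagonal x–theta entry) that encodes the anisotropy of the light field.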
Characteristic Point Maps
Abstract
Extremely dense spatial sampling is often needed to prevent aliasing when rendering objects with high-frequency variations in geometry and reflectance. To accelerate the rendering process, we introduce characteristic point maps (CPMs), a hierarchy of view-independent points, which are chosen to preserve the appearance of the original model across different scales. In preprocessing, randomized matrix column sampling is used to reduce an initial dense sampling to a minimum number of characteristic points with associated weights. In rendering, the reflected radiance is computed using a weighted average of reflectances from the characteristic points. Unlike existing techniques, our approach imposes no restrictions on the original geometry or reflectance functions.
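Classic randomized column sampling selects columns with probability proportional to their squared norm and reweights each pick so that column-summed quantities remain unbiased in expectation. A minimal sketch of that general technique (the paper's exact sampling scheme and weights may differ):

```python
import random

def sample_columns(A, k, seed=0):
    """Pick k column indices of matrix A (list of rows, all columns
    nonzero) with probability proportional to squared column norm.
    Each pick (j, w) carries weight w = 1 / (k * p_j), so that
    sum over picks of w * A[i][j] is an unbiased estimate of the
    row sum over all columns."""
    rng = random.Random(seed)
    m = len(A[0])
    norms = [sum(A[i][j] ** 2 for i in range(len(A))) for j in range(m)]
    total = sum(norms)
    probs = [n / total for n in norms]
    picks = rng.choices(range(m), weights=probs, k=k)
    return [(j, 1.0 / (k * probs[j])) for j in picks]
```

When all columns are identical the estimate is exact, which gives a handy deterministic sanity check.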
Interactive Indirect Illumination Using Voxel Cone Tracing
Abstract
Figure 1: Real-time indirect illumination (25–70 fps on a GTX 480): We rely on voxel-based cone tracing to ensure efficient integration of two-bounce illumination and to support diffuse and glossy materials in complex scenes. (Right scene courtesy of G. M. Leal Llaguno) Indirect illumination is an important element for realistic image synthesis, but its computation is expensive and highly dependent on the complexity of the scene and of the BRDFs of the involved surfaces. While offline computation and pre-baking can be acceptable in some cases, many applications (games, simulators, etc.) require real-time or interactive approaches to evaluate indirect illumination. We present a novel algorithm to compute indirect lighting in real time that avoids costly precomputation steps and is not restricted to low-frequency illumination. It is based on a hierarchical voxel octree representation generated and updated on the fly from a regular scene mesh, coupled with an approximate voxel cone tracing that allows for a fast estimation of visibility and incoming energy. Our approach can manage two light bounces for both Lambertian and glossy materials at interactive frame rates (25–70 fps). It exhibits an almost scene-independent performance and can handle complex scenes with dynamic content thanks to an interactive octree voxelization scheme. In addition, we demonstrate that our voxel cone tracing can be used to efficiently estimate ambient occlusion. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture
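The core of voxel cone tracing is marching a cone through a mip-mapped occupancy grid, reading coarser mip levels as the cone widens and compositing opacity front to back. A toy CPU version for ambient-occlusion-style accumulation over a dense grid rather than the paper's octree (grid size, stepping, and opacity correction are simplified assumptions):

```python
import math

def build_mips(grid):
    """grid: N*N*N nested lists of occupancy in [0, 1], N a power of
    two.  Returns [level0, level1, ...] down to 1^3 by box averaging."""
    mips = [grid]
    while len(mips[-1]) > 1:
        g = mips[-1]
        n = len(g) // 2
        mips.append([[[sum(g[2 * x + dx][2 * y + dy][2 * z + dz]
                           for dx in (0, 1) for dy in (0, 1)
                           for dz in (0, 1)) / 8.0
                       for z in range(n)] for y in range(n)]
                     for x in range(n)])
    return mips

def sample(mips, level, p, world=1.0):
    n = len(mips[level])
    i, j, k = (min(n - 1, max(0, int(c / world * n))) for c in p)
    return mips[level][i][j][k]

def cone_occlusion(mips, origin, direction, half_angle, max_dist, world=1.0):
    """March a cone of the given half angle, picking the mip level
    whose voxel size matches the cone diameter and compositing
    occlusion front to back: occ += (1 - occ) * alpha."""
    n0 = len(mips[0])
    occ, t = 0.0, world / n0  # start one voxel out: avoid self-occlusion
    while t < max_dist and occ < 0.99:
        diameter = max(world / n0, 2.0 * t * math.tan(half_angle))
        level = min(len(mips) - 1, int(math.log2(diameter * n0 / world)))
        p = [origin[i] + t * direction[i] for i in range(3)]
        if not all(0.0 <= c < world for c in p):
            break
        a = sample(mips, level, p, world)
        alpha = min(1.0, a * diameter / (world / n0))  # step-length correction
        occ += (1.0 - occ) * alpha
        t += diameter
    return occ
```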
1 Introduction – Problem Statements and Models
Abstract
Matrix factorization is an important and unifying topic in signal processing and linear algebra, which has found numerous applications in many other areas. This chapter introduces basic linear and multilinear models for matrix and tensor factorizations and decompositions, and formulates the analysis framework for
Approximate Bias Compensation for Rendering Scenes with Heterogeneous Participating Media
Abstract
Figure 1: Our approximate bias compensation can be used in complex environments to recover the energy loss due to clamping the contribution of VPLs. In the Crytek Sponza scene, the clamped volumetric and surface illumination was rendered in 39 minutes (using 118k VPLs), while the missing energy was recovered using a two-bounce ABC in only 13 minutes. In this paper we present a novel method for high-quality rendering of scenes with participating media. Our technique is based on instant radiosity, which is used to approximate indirect illumination between surfaces by gathering light from a set of virtual point lights (VPLs). It has been shown that this principle can be applied to participating media as well, so that the combined single-scattering contribution of VPLs within the medium yields full multiple scattering. As in the surface case, VPL methods for participating media are prone to singularities, which appear as bright “splotches” in the image. These artifacts are usually countered by clamping the VPLs’ contribution, but this leads to energy loss within the short-distance light transport. Bias compensation recovers the missing energy, but previous approaches are prohibitively costly. We investigate VPL-based methods for rendering scenes with participating media, and propose a novel and efficient approximate bias compensation technique. We evaluate our technique using various test scenes, showing it to be visually indistinguishable from ground truth.
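The clamping that causes the bias is easy to make explicit: bounding the 1/d² falloff near each VPL removes exactly the short-distance energy that bias compensation must put back. A minimal unshadowed sketch (hypothetical code; the paper's compensation operates on the full transport in participating media, not this toy sum):

```python
def gather_vpls(p, vpls, clamp_dist=0.1):
    """Sum unshadowed VPL contributions at point p, clamping the
    1/d^2 term so it is bounded near each VPL.  vpls: list of
    ((x, y, z), intensity).  Returns (clamped, lost): the biased
    estimate and the short-distance energy removed by clamping."""
    clamped = lost = 0.0
    for q, intensity in vpls:
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        full = intensity / max(d2, 1e-12)        # unclamped (singular)
        bounded = intensity / max(d2, clamp_dist ** 2)
        clamped += bounded
        lost += full - bounded
    return clamped, lost
```

For VPLs farther than the clamp distance the two sums agree; inside it, `lost` grows without bound as the VPL approaches the shading point, which is the energy a compensation pass must recover.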