Results 1-10 of 231
Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments
 ACM Transactions on Graphics
, 2002
Abstract

Cited by 353 (23 self)
We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of arbitrary, low-frequency incident lighting into transferred radiance, which includes global effects like shadows and interreflections from the object onto itself. At run time, these transfer functions are applied to actual incident lighting. Dynamic, local lighting is handled by sampling it close to the object every frame; the object can also be rigidly rotated with respect to the lighting and vice versa. Lighting and transfer functions are represented using low-order spherical harmonics. This avoids aliasing and evaluates efficiently on graphics hardware by reducing the shading integral to a dot product of 9- to 25-element vectors for diffuse receivers. Glossy objects are handled using matrices rather than vectors. We further introduce functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space. These allow soft shadows and caustics from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We demonstrate real-time global lighting effects with this approach.
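As a concrete (and heavily simplified) sketch of the diffuse case described above, the run-time shading reduces to a dot product of SH coefficient vectors; the transfer and lighting values below are hypothetical placeholders, not data from the paper:

```python
import numpy as np

def shade_diffuse_prt(transfer, lighting):
    """Diffuse PRT shading at one vertex: with both the precomputed
    transfer function and the incident lighting projected into low-order
    spherical harmonics, the shading integral reduces to a dot product
    (9 coefficients for 3rd-order SH, 25 for 5th-order)."""
    transfer = np.asarray(transfer, dtype=float)
    lighting = np.asarray(lighting, dtype=float)
    assert transfer.shape == lighting.shape  # e.g. (9,) or (25,)
    return float(transfer @ lighting)

# Hypothetical data: a transfer vector baked by the preprocess and an SH
# lighting vector sampled near the object at run time.
t = np.zeros(9); t[0] = 1.0   # responds only to the constant lighting band
L = np.zeros(9); L[0] = 0.5
print(shade_diffuse_prt(t, L))  # 0.5
```

A real renderer evaluates this per vertex or per pixel on the GPU; the glossy case replaces the transfer vector with a transfer matrix, as the abstract notes.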
A Practical Model for Subsurface Light Transport
, 2001
Abstract

Cited by 231 (20 self)
This paper introduces a simple model for subsurface light transport in translucent materials. The model enables efficient simulation of effects that BRDF models cannot capture, such as color bleeding within materials and diffusion of light across shadow boundaries. The technique is efficient even for anisotropic, highly scattering media that are expensive to simulate using existing methods. The model combines an exact solution for single scattering with a dipole point source diffusion approximation for multiple scattering. We also have designed a new, rapid image-based measurement technique for determining the optical properties of translucent materials. We validate the model by comparing predicted and measured values and show how the technique can be used to recover the optical properties of a variety of materials, including milk, marble, and skin. Finally, we describe sampling techniques that allow the model to be used within a conventional ray tracer.
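The multiple-scattering half of the model, the dipole point-source diffusion approximation, can be sketched as follows. The structure follows the standard published form of the dipole; the default index of refraction and the example scattering parameters are illustrative placeholders only:

```python
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Dipole diffusion approximation for the multiple-scattering term.
    r is the distance along the surface between the points where light
    enters and exits; sigma_a and sigma_s_prime are the absorption and
    reduced scattering coefficients; eta is the relative index of
    refraction (the default here is an arbitrary placeholder)."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coefficient
    # Widely used polynomial fit for the diffuse Fresnel reflectance.
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime              # depth of the real point source
    z_v = z_r * (1.0 + 4.0 * A / 3.0)      # depth of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)     # distance to the real source
    d_v = math.sqrt(r * r + z_v * z_v)     # distance to the virtual source
    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3
    )

# Diffuse reflectance falls off with distance from the point of entry
# (illustrative, marble-like parameter values).
near = dipole_Rd(0.5, sigma_a=0.002, sigma_s_prime=2.2, eta=1.5)
far = dipole_Rd(5.0, sigma_a=0.002, sigma_s_prime=2.2, eta=1.5)
```

The exact single-scattering term and the ray-tracer sampling strategies mentioned in the abstract are separate pieces, omitted here.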
A Signal-Processing Framework for Inverse Rendering
 In SIGGRAPH 01
, 2001
Abstract

Cited by 186 (18 self)
Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.
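A toy sketch of the convolution view: reflected-field SH coefficients are per-band products of lighting and BRDF coefficients, and deconvolution divides them back out, failing exactly where the BRDF coefficient vanishes (the ill-conditioned bands). The rho_l values here are made up for illustration, not the Lambertian transfer coefficients from the paper:

```python
import numpy as np

def reflect_sh(L_lm, rho_l):
    """Forward model: the reflected light field is a convolution of
    lighting and BRDF, i.e. a per-band product of SH coefficients.
    L_lm is flat in the order l*l + (l + m); rho_l holds one (radially
    symmetric) BRDF coefficient per band l."""
    B = np.zeros(len(L_lm))
    for l, rho in enumerate(rho_l):
        for m in range(-l, l + 1):
            i = l * l + (l + m)
            B[i] = rho * L_lm[i]
    return B

def invert_lighting(B_lm, rho_l, eps=1e-8):
    """Inverse rendering as deconvolution: divide per band. Bands where
    rho_l ~ 0 are ill-conditioned and cannot be recovered; they stay zero."""
    L = np.zeros(len(B_lm))
    for l, rho in enumerate(rho_l):
        for m in range(-l, l + 1):
            i = l * l + (l + m)
            if abs(rho) > eps:
                L[i] = B_lm[i] / rho
    return L

# Hypothetical BRDF coefficients: band 3 vanishes, so that part of the
# lighting is unrecoverable from the reflected field.
rho = [1.0, 2.0 / 3.0, 0.25, 0.0]
L_true = np.arange(16, dtype=float) + 1.0
L_rec = invert_lighting(reflect_sh(L_true, rho), rho)
```

Bands 0-2 come back exactly; band 3 is lost, which is the framework's explanation of why some inverse-rendering problems are ill-posed.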
An Efficient Representation for Irradiance Environment Maps
, 2001
Abstract

Cited by 158 (10 self)
We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. In other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters. In fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the Cartesian components of the surface normal, and give explicit formulae. These observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering.
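The quadratic-polynomial representation has a well-known closed form: the 9 lighting coefficients populate a 4x4 symmetric matrix M so that irradiance is n^T M n for the homogeneous normal n = (x, y, z, 1). A sketch using the published constants:

```python
import numpy as np

# Constants from the paper's analytic irradiance formula.
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance_matrix(L):
    """Build the 4x4 matrix M from the 9 lighting coefficients, ordered
    [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22], so that irradiance
    is E(n) = n^T M n with the homogeneous normal n = (x, y, z, 1)."""
    L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22 = L
    return np.array([
        [c1 * L22,   c1 * L2m2,  c1 * L21,  c2 * L11],
        [c1 * L2m2, -c1 * L22,   c1 * L2m1, c2 * L1m1],
        [c1 * L21,   c1 * L2m1,  c3 * L20,  c2 * L10],
        [c2 * L11,   c2 * L1m1,  c2 * L10,  c4 * L00 - c5 * L20],
    ])

def irradiance(M, normal):
    """Evaluate the quadratic polynomial for a unit surface normal."""
    n = np.append(np.asarray(normal, dtype=float), 1.0)
    return float(n @ M @ n)

# Constant (ambient) lighting: only L00 is nonzero, so the irradiance is
# the same for every normal.
M = irradiance_matrix([1.0, 0, 0, 0, 0, 0, 0, 0, 0])
print(irradiance(M, [0.0, 0.0, 1.0]))  # ~0.886
```

In a shader this is one matrix-vector product and one dot product per pixel, which is what makes the representation hardware-friendly.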
All-frequency shadows using non-linear wavelet lighting approximation
 ACM Transactions on Graphics
, 2003
Abstract

Cited by 153 (23 self)
We present a method, based on precomputed light transport, for real-time rendering of objects under all-frequency, time-varying illumination represented as a high-resolution environment map. Current techniques are limited to small area lights with sharp shadows, or large low-frequency lights with very soft shadows. Our main contribution is to approximate the environment map in a wavelet basis, keeping only the largest terms (this is known as a nonlinear approximation). We obtain further compression by encoding the light transport matrix sparsely but accurately in the same basis. Rendering is performed by multiplying a sparse light vector by a sparse transport matrix, which is very fast. For accurate rendering, using nonlinear wavelets is an order of magnitude faster than using linear spherical harmonics, the current best technique.
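A minimal sketch of the nonlinear approximation step, using a 1-D Haar transform for brevity (a real implementation works on 2-D environment-map faces and also sparsifies the transport matrix in the same basis):

```python
import numpy as np

def haar_1d(x):
    """Orthonormal 1-D Haar wavelet transform (len(x) must be a power of two)."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        details.append(det)
        x = avg
    details.append(x)
    return np.concatenate(details[::-1])

def nonlinear_approx(coeffs, k):
    """Nonlinear approximation: keep only the k largest-magnitude wavelet
    terms and zero the rest; the result is the sparse light vector used in
    the sparse matrix-vector multiply at render time."""
    out = np.zeros_like(coeffs)
    keep = np.argsort(np.abs(coeffs))[-k:]
    out[keep] = coeffs[keep]
    return out

# A smooth environment with one bright spot: a handful of wavelet terms
# already carries most of the energy.
env = np.ones(8)
env[3] = 9.0
sparse_light = nonlinear_approx(haar_1d(env), 3)
```

"Nonlinear" refers to the fact that which terms are kept depends on the signal itself, unlike a fixed truncation of the lowest spherical-harmonic bands.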
Polynomial texture maps
 In Computer Graphics, SIGGRAPH 2001 Proceedings
, 2001
Abstract

Cited by 128 (8 self)
Keywords: graphics hardware, illumination, image processing, image-based rendering, reflectance & shading models, texture mapping

In this paper we present a new form of texture mapping that produces increased photorealism. Coefficients of a biquadratic polynomial are stored per texel and used to reconstruct the surface color under varying lighting conditions. Like bump mapping, this allows the perception of surface deformations. However, our method is image-based, and photographs of a surface under varying lighting conditions can be used to construct these maps. Unlike bump maps, these Polynomial Texture Maps (PTMs) also capture variations due to surface self-shadowing and interreflections, which enhance realism. Surface colors can be efficiently reconstructed from polynomial coefficients and light directions with minimal fixed-point hardware. We have also found PTMs useful for producing a number of other effects such as anisotropic and Fresnel shading models and variable depth of focus. Lastly, we present several reflectance function transformations that act as contrast enhancement operators. We have found these particularly useful in the study of ancient archeological clay and stone writings.
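The per-texel model is a biquadratic in the projected light direction (lu, lv). A sketch of evaluation and of the least-squares fit from photographs under known lights; the grid of light directions below is a hypothetical sampling, not the paper's capture rig:

```python
import numpy as np

def ptm_eval(coeffs, lu, lv):
    """Reconstruct a texel value from its six biquadratic coefficients
    and the projected light direction (lu, lv):
        L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5"""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu*lu + a1*lv*lv + a2*lu*lv + a3*lu + a4*lv + a5

def ptm_fit(samples):
    """Fit the six per-texel coefficients by least squares from
    photographs of the texel under known light directions.
    samples: iterable of (lu, lv, observed_value)."""
    A = np.array([[lu*lu, lv*lv, lu*lv, lu, lv, 1.0] for lu, lv, _ in samples])
    b = np.array([val for _, _, val in samples])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Round trip on synthetic data: generate observations from known
# coefficients over a grid of light directions, then refit.
true = np.array([0.1, -0.2, 0.05, 0.3, 0.4, 0.5])
grid = [(u / 2.0, v / 2.0) for u in range(-2, 3) for v in range(-2, 3)]
fit = ptm_fit([(lu, lv, ptm_eval(true, lu, lv)) for lu, lv in grid])
```

Because evaluation is just a fixed polynomial in two variables, it maps directly onto the minimal fixed-point hardware the abstract mentions.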
Image-Based Reconstruction of Spatial Appearance and Geometric Detail
 ACM Transactions on Graphics
, 2003
Abstract

Cited by 109 (22 self)
Fast separation of direct and global components of a scene using high frequency illumination
 ACM Trans. Graph
, 2006
Abstract

Cited by 85 (15 self)
We present fast methods for separating the direct and global illumination components of a scene measured by a camera and illuminated by a light source. In theory, the separation can be done with just two images taken with a high-frequency binary illumination pattern and its complement. In practice, a larger number of images is used to overcome the optical and resolution limitations of the camera and the source. The approach does not require the material properties of objects and media in the scene to be known. However, we require that the illumination frequency is high enough to adequately sample the global components received by scene points. We present separation results for scenes that include complex interreflections, subsurface scattering and volumetric scattering. Several variants of the separation approach are also described. When a sinusoidal illumination pattern is used with different phase shifts, the separation can be done using just three images. When the computed images are of lower resolution than the source and the camera, smoothness constraints are used to perform the separation using a single image. Finally, in the case of a static scene that is lit by a simple point source, such as the sun, a moving occluder and a video camera can be used to do the separation. We also show several simple examples of how novel images of a scene can be computed from the separation results.
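The two-image case has a particularly simple closed form: with a fine binary pattern at half duty cycle, each pixel's brighter image sees the full direct component plus roughly half the global component, while the complementary image sees half the global component alone. A sketch of that arithmetic:

```python
import numpy as np

def separate_two_image(L_lit, L_comp):
    """Two-image separation with a high-frequency binary pattern at half
    duty cycle and its complement. Per pixel (taking max/min over the two
    images), if the pattern is fine enough:
        L_lit  = L_direct + L_global / 2
        L_comp = L_global / 2
    which inverts directly."""
    L_lit = np.asarray(L_lit, dtype=float)
    L_comp = np.asarray(L_comp, dtype=float)
    direct = L_lit - L_comp
    global_ = 2.0 * L_comp
    return direct, global_

# Synthetic pixel: true direct 0.6, true global 0.3.
d, g = separate_two_image(0.75, 0.15)
```

The multi-image, sinusoidal, and occluder variants in the abstract exist precisely because real cameras and projectors cannot realize this ideal two-image setting.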
Image-Based Reconstruction of Spatially Varying Materials
 In Proceedings of the 12th Eurographics Workshop on Rendering
, 2001
Abstract

Cited by 79 (12 self)
The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properties and the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs, leading to a truly spatially varying BRDF representation. A high-quality model of a real object can be generated with relatively few input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.
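The projection step, expressing each surface point's measurements in the basis of recovered BRDFs, amounts to a small least-squares solve per texel. A sketch under the assumption that the basis BRDFs have already been evaluated at the point's view/light configurations; all values here are synthetic:

```python
import numpy as np

def fit_basis_weights(measured, basis_values):
    """Project one surface point's reflectance samples into the basis of
    recovered BRDFs: solve, in the least-squares sense, for weights w with
    basis_values @ w ~ measured.
    measured:     (n_samples,) radiance samples at this point
    basis_values: (n_samples, n_basis) each basis BRDF evaluated at the
                  same view/light configurations."""
    w, *_ = np.linalg.lstsq(basis_values, measured, rcond=None)
    return w

# Synthetic example: two basis BRDFs, four measurements; this point is a
# 30/70 blend of the two.
basis = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [2.0, 1.0]])
measured = basis @ np.array([0.3, 0.7])
w = fit_basis_weights(measured, basis)
```

Storing a few weights per texel instead of a full BRDF is what keeps the spatially varying representation compact.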