Results 1-10 of 114
Triple Product Wavelet Integrals for All-Frequency Relighting
, 2004
Abstract

Cited by 108 (9 self)
This paper focuses on efficient rendering based on precomputed light transport, with realistic materials and shadows under all-frequency direct lighting such as environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low frequency lighting and materials). We propose a new mathematical and computational analysis of precomputed light transport. We use factored forms, separately precomputing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating nonlinear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.
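A minimal sketch of a triple product integral in the simplest basis the paper analyzes (point samples), where the tripling coefficients are diagonal and the integral collapses to an elementwise product; all data below are random stand-ins, not the paper's factored representation:

```python
import numpy as np

# Hypothetical sampled data on a directional grid of n directions:
# L = lighting, V = visibility (0/1), rho = BRDF weighted by cosine.
rng = np.random.default_rng(0)
n = 1024
L = rng.random(n)                          # environment lighting per direction
V = (rng.random(n) > 0.3).astype(float)    # binary visibility
rho = rng.random(n)                        # BRDF * cos term for a fixed view

dw = 4 * np.pi / n                         # solid-angle weight (uniform sphere sampling assumed)

# In a point-sample basis the tripling coefficients are diagonal, so the
# triple product integral B = integral of L * V * rho reduces to an
# elementwise product summed over samples -- linear in the sample count.
B = np.sum(L * V * rho) * dw
print(B)
```

The paper's contribution is doing this in wavelet bases, where the tripling coefficients are sparse but not diagonal, yielding sublinear-time evaluation after nonlinear approximation.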
A Frequency Analysis of Light Transport
, 2005
Abstract

Cited by 106 (19 self)
We present a signal-processing framework for light transport. We study the frequency content of radiance and how it is altered by phenomena such as shading, occlusion, and transport. This extends previous work that considered either spatial or angular dimensions, and it offers a comprehensive treatment of both space and angle. We show that occlusion, a multiplication in the primal, amounts in the Fourier domain to a convolution by the spectrum of the blocker. Propagation corresponds to a shear in the space-angle frequency domain, while reflection on curved objects performs a different shear along the angular frequency axis. As shown by previous work, reflection is a convolution in the primal and therefore a multiplication in the Fourier domain. Our work shows how the spatial components of lighting are affected by this angular convolution. Our framework predicts the characteristics of interactions such as caustics and the disappearance of the shadows of small features. Predictions on the frequency content can then be used to control sampling rates for rendering. Other potential applications include precomputed radiance transfer and inverse rendering.
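The central Fourier identity here (occlusion is a multiplication in the primal, hence a convolution of spectra) can be checked numerically in 1D; the signals below are random stand-ins:

```python
import numpy as np

# Hypothetical 1D "radiance" and binary "blocker" signals along one spatial axis.
rng = np.random.default_rng(1)
n = 64
radiance = rng.random(n)
blocker = (rng.random(n) > 0.5).astype(float)

# Occlusion in the primal domain: pointwise multiplication.
occluded = radiance * blocker

# In the discrete Fourier domain this is a circular convolution of the two
# spectra, scaled by 1/n under numpy's DFT convention.
F_occluded = np.fft.fft(occluded)
F_lin = np.convolve(np.fft.fft(radiance), np.fft.fft(blocker), mode="full")
# Fold the linear convolution back to circular form.
F_circ = F_lin[:n].copy()
F_circ[: n - 1] += F_lin[n:]
assert np.allclose(F_occluded, F_circ / n)
```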
Structured Importance Sampling of Environment Maps
, 2003
Abstract

Cited by 99 (9 self)
We introduce structured importance sampling, a new technique for efficiently rendering scenes illuminated by distant natural illumination given in an environment map. Our method handles occlusion, high-frequency lighting, and is significantly faster than alternative methods based on Monte Carlo sampling. We achieve this speedup as a result of several ideas. First, we present a new metric for stratifying and sampling an environment map taking into account both the illumination intensity as well as the expected variance due to occlusion within the scene. We then present a novel hierarchical stratification algorithm that uses our metric to automatically stratify the environment map into regular strata. This approach enables a number of rendering optimizations, such as pre-integrating the illumination within each stratum to eliminate noise at the cost of adding bias, and sorting the strata to reduce the number of sample rays. We have rendered several scenes illuminated by natural lighting, and our results indicate that structured importance sampling is better than the best previous Monte Carlo techniques, requiring one to two orders of magnitude fewer samples for the same image quality.
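A sketch of only the illumination-intensity part of such a sampling metric, assuming a toy luminance-only environment map; the paper's full metric additionally accounts for expected occlusion variance, and its hierarchical stratification is more involved:

```python
import numpy as np

# A tiny hypothetical environment map (luminance only), with one bright
# "sun" texel dominating a dim background.
env = np.full((8, 16), 0.01)
env[2, 5] = 50.0

# Intensity-proportional sampling: the probability of picking a texel is
# proportional to its luminance, realized via an inverse-CDF lookup.
p = env.ravel() / env.sum()
cdf = np.cumsum(p)

rng = np.random.default_rng(2)
draws = np.searchsorted(cdf, rng.random(10000))
rows, cols = np.unravel_index(draws, env.shape)

# Most samples should land on the bright texel.
frac_sun = np.mean((rows == 2) & (cols == 5))
print(frac_sun)
```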
Example-Based Photometric Stereo: Shape Reconstruction with General . . .
, 2005
Abstract

Cited by 82 (2 self)
This paper presents a technique for computing the geometry of objects with general reflectance properties from images. For surfaces
Shape and materials by example: A photometric stereo approach
 In Proceedings IEEE CVPR 2003
, 2003
Abstract

Cited by 77 (3 self)
This paper presents a technique for computing the geometry of objects with general reflectance properties from images. For surfaces with varying material properties, a full segmentation into different material types is also computed. It is assumed that the camera viewpoint is fixed, but the illumination varies over the input sequence. It is also assumed that one or more example objects with similar materials and known geometry are imaged under the same illumination conditions. Unlike most previous work in shape reconstruction, this technique can handle objects with arbitrary and spatially-varying BRDFs. Furthermore, the approach works for arbitrary distant and unknown lighting environments. Finally, almost no calibration is needed, making the approach exceptionally simple to apply.
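A sketch of the orientation-consistency idea behind this family of example-based methods, under a simplifying Lambertian assumption and with hypothetical synthetic data: each target pixel takes the normal of the reference-object pixel whose observed intensities match across all images:

```python
import numpy as np

# Hypothetical setup: a reference object with known normals, imaged under the
# same (unknown) distant lighting as the target. Matching observation vectors
# lets us copy normals without ever estimating the lighting.
rng = np.random.default_rng(3)
n_lights, n_ref = 12, 500
lights = rng.normal(size=(n_lights, 3))

ref_normals = rng.normal(size=(n_ref, 3))
ref_normals /= np.linalg.norm(ref_normals, axis=1, keepdims=True)
# Lambertian reference intensities (clamped dot products with each light).
ref_obs = np.clip(ref_normals @ lights.T, 0, None)

# A target pixel whose true normal happens to equal one reference normal.
true_idx = 42
target_obs = np.clip(ref_normals[true_idx] @ lights.T, 0, None)

# Match by nearest observation vector and copy that pixel's normal.
d = np.linalg.norm(ref_obs - target_obs, axis=1)
est_normal = ref_normals[np.argmin(d)]
print(est_normal)
```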
Efficient Illumination by High Dynamic Range Images
, 2003
Abstract

Cited by 63 (0 self)
We present an algorithm for determining quadrature rules for computing the direct illumination of predominantly diffuse objects by high dynamic range images. The new method precisely reproduces fine shadow detail, is much more efficient than Monte Carlo integration, and does not require any manual intervention.
A practical analytic single scattering model for real time rendering
 ACM Trans. Graph
, 2005
Abstract

Cited by 56 (4 self)
We consider real-time rendering of scenes in participating media, capturing the effects of light scattering in fog, mist and haze. While a number of sophisticated approaches based on Monte Carlo and finite element simulation have been developed, those methods do not work at interactive rates. The most common real-time methods are essentially simple variants of the OpenGL fog model. While easy to use and specify, that model excludes many important qualitative effects like glows around light sources, the impact of volumetric scattering on the appearance of surfaces such as the diffusing of glossy highlights, and the appearance under complex lighting such as environment maps. In this paper, we present an alternative physically based approach that captures these effects while maintaining real-time performance and the ease of use of the OpenGL fog model. Our method is based on an explicit analytic integration of the single scattering light transport equations for an isotropic point light source in a homogeneous participating medium. We can implement the model in modern programmable graphics hardware using a few small numerical lookup tables stored as texture maps. Our model can also be easily adapted to generate the appearances of materials with arbitrary BRDFs, environment map lighting, and precomputed radiance transfer methods, in the presence of participating media. Hence, our techniques can be widely used in real-time rendering.
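For reference, the OpenGL-style fog baseline the paper contrasts against can be sketched in a few lines; the paper's analytic single-scattering model replaces the constant fog-color term with a physically derived airlight integral, evaluated in practice via lookup-table textures:

```python
import numpy as np

def opengl_fog(surface_radiance, fog_color, distance, density):
    """Exponential fog blend: attenuate the surface by transmittance and
    blend toward a constant fog color (the GL_EXP-style baseline)."""
    f = np.exp(-density * distance)  # transmittance factor in [0, 1]
    return f * surface_radiance + (1.0 - f) * fog_color

print(opengl_fog(1.0, 0.5, distance=10.0, density=0.1))
```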
Real-time BRDF editing in complex lighting
 ACM TOG (PROC. OF SIGGRAPH)
, 2006
Abstract

Cited by 48 (4 self)
Current systems for editing BRDFs typically allow users to adjust analytic parameters while visualizing the results in a simplified setting (e.g. unshadowed point light). This paper describes a real-time rendering system that enables interactive edits of BRDFs, as rendered in their final placement on objects in a static scene, lit by direct, complex illumination. All-frequency effects (ranging from near-mirror reflections and hard shadows to diffuse shading and soft shadows) are rendered using a precomputation-based approach. Inspired by real-time relighting methods, we create a linear system that fixes lighting and view to allow real-time BRDF manipulation. In order to linearize the image’s response to BRDF parameters, we develop an intermediate curve-based representation, which also reduces the rendering and precomputation operations to 1D while maintaining accuracy for a very general class of BRDFs. Our system can be used to edit complex analytic BRDFs (including anisotropic models), as well as measured reflectance data. We improve on the standard precomputed radiance transfer (PRT) rendering computation by introducing an incremental rendering algorithm that takes advantage of frame-to-frame coherence. We show that it is possible to render reference-quality images while only updating 10% of the data at each frame, sustaining frame rates of 25-30 fps.
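The incremental-update idea can be sketched as a linear system whose output image is patched from coefficient deltas rather than recomputed from scratch; the matrix contents and sizes below are hypothetical stand-ins:

```python
import numpy as np

# Each pixel's value is a dot product of a precomputed transport row with a
# coefficient vector. When an edit changes only a few coefficients, the image
# is updated from the deltas: O(pixels * changed) instead of O(pixels * all).
rng = np.random.default_rng(4)
n_pix, n_coeff = 2000, 256
T = rng.random((n_pix, n_coeff))   # precomputed transport matrix (fixed scene)
c = rng.random(n_coeff)            # current BRDF-dependent coefficients

image = T @ c                      # full evaluation, done once

# An edit touches ~10% of the coefficients.
changed = rng.choice(n_coeff, size=25, replace=False)
delta = rng.random(25) - 0.5
c_new = c.copy()
c_new[changed] += delta

image += T[:, changed] @ delta     # incremental update
assert np.allclose(image, T @ c_new)
```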
A Signal-Processing Framework for Reflection
 ACM TRANSACTIONS ON GRAPHICS
, 2004
Abstract

Cited by 45 (4 self)
... In this paper, we formalize these notions, showing that the reflected light field can be thought of in a precise quantitative way as obtained by convolving the lighting and BRDF, i.e. by filtering the incident illumination using the BRDF. Mathematically, we are able to express the frequency-space coefficients of the reflected light field as a product of the spherical harmonic coefficients of the illumination and the BRDF. These results are of practical importance in determining the well-posedness and conditioning of problems in inverse rendering: estimation of BRDF and lighting parameters from real photographs. Furthermore, we are able to derive analytic formulae for the spherical harmonic coefficients of many common BRDF and lighting models. From this formal analysis, we are able to determine precise conditions under which estimation of BRDFs and lighting distributions is well-posed and well-conditioned. Our mathematical analysis also has implications for forward rendering, especially the efficient rendering of objects under complex lighting conditions specified by environment maps. The results, especially the analytic formulae derived for Lambertian surfaces, are also relevant in computer vision in the areas of recognition, photometric stereo and structure from motion.
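The frequency-space product structure can be sketched for the Lambertian case: each reflected-light coefficient is a per-band scaling of the corresponding illumination coefficient, B_lm = A_l * L_lm. The A_l values below are the clamped-cosine coefficients commonly reported for this analysis (odd bands above l = 1 vanish), and their rapid falloff is why low-order spherical harmonics suffice for diffuse shading:

```python
import numpy as np

# Clamped-cosine (Lambertian) convolution coefficients per SH band l,
# as commonly reported for this frequency-space analysis.
A = {0: np.pi, 1: 2 * np.pi / 3, 2: np.pi / 4, 3: 0.0, 4: -np.pi / 24}

def reflected_coeff(l, L_lm):
    """Reflected light field coefficient B_lm = A_l * L_lm for band l."""
    return A[l] * L_lm

print(reflected_coeff(2, 1.0))
```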
Relighting human locomotion with flowed reflectance fields
 In SIGGRAPH ’06: ACM SIGGRAPH 2006 Sketches
, 2006
Abstract

Cited by 44 (9 self)
Overview We present an image-based approach for capturing the appearance of a walking or running person so they can be rendered realistically under variable viewpoint and illumination. Considerable work has addressed aspects of post-production control of viewpoint and illumination of a human performance. Most proposed systems address only one of those two aspects, e.g. [Wilburn et al. 2005], [Wenger et al. 2005]. [Theobalt et al. 2005] addressed control of both viewpoint and illumination; however, that approach is challenged by low sampling of both the lighting and view dimensions. We take a step toward an image-based approach to obtaining post-production control over both viewpoint and illumination of cyclic full-body human motion by combining the performance relighting technique of [Wenger et al. 2005] with a novel view generation technique based on a flowed reflectance field. By restricting our consideration to cyclic motion such as walking and running, we are able to acquire a 2D array of views by slowly rotating the subject in front of a 1D vertical array of three high-speed cameras and segmenting the data per motion cycle. We then use a combination of light field rendering and view interpolation based on optical flow to render the subject from new viewpoints.