Results 1–10 of 21
Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments
 ACM Transactions on Graphics
, 2002
Abstract

Cited by 358 (23 self)
We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of arbitrary, low-frequency incident lighting into transferred radiance which includes global effects like shadows and interreflections from the object onto itself. At runtime, these transfer functions are applied to actual incident lighting. Dynamic, local lighting is handled by sampling it close to the object every frame; the object can also be rigidly rotated with respect to the lighting and vice versa. Lighting and transfer functions are represented using low-order spherical harmonics. This avoids aliasing and evaluates efficiently on graphics hardware by reducing the shading integral to a dot product of 9- to 25-element vectors for diffuse receivers. Glossy objects are handled using matrices rather than vectors. We further introduce functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space. These allow soft shadows and caustics from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We demonstrate real-time global lighting effects with this approach.
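The diffuse-receiver case above reduces shading to a dot product of spherical-harmonic coefficient vectors. A minimal NumPy sketch of that runtime step; the array names and the synthetic random data are illustrative assumptions, not from the paper:

```python
import numpy as np

# Hedged sketch: with incident lighting and per-vertex transfer both projected
# into n = 9..25 spherical-harmonic (SH) coefficients, exit radiance at each
# vertex is a single dot product of the two coefficient vectors.
n_coeffs = 9                                 # 3rd-order SH, diffuse receivers
rng = np.random.default_rng(0)               # synthetic stand-in data
light_sh = rng.standard_normal(n_coeffs)     # SH projection of incident light
n_vertices = 4
transfer_sh = rng.standard_normal((n_vertices, n_coeffs))  # precomputed transfer

# Per-vertex shading: one dot product per vertex (vectorized here).
radiance = transfer_sh @ light_sh            # shape (n_vertices,)
```

On hardware the same dot product runs per pixel or per vertex; here the matrix-vector product just batches it over vertices.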
Lambertian Reflectance and Linear Subspaces
, 2000
Abstract

Cited by 338 (19 self)
We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce non-negative lighting functions. Finally, we show a simple way to enforce non-negative lighting when the images of an object lie near a 4D linear space.
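The 9D subspace corresponds to spherical-harmonic bands 0–2. A sketch of evaluating those nine real SH basis functions at a unit surface normal, using the standard real-SH constants (the function name is ours):

```python
import numpy as np

def sh_basis_9(n):
    """First 9 real SH basis functions (bands 0-2) at unit normal n = (x, y, z).
    Constants follow the usual real-SH convention; these span the ~9D space the
    paper proves Lambertian reflectance lies close to."""
    x, y, z = n
    return np.array([
        0.282095,                              # band 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,          # band 1
        1.092548 * x * y, 1.092548 * y * z,                # band 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

# The intensity of a Lambertian surface point is then approximately a dot
# product of 9 lighting coefficients (attenuated by the clamped-cosine kernel)
# with this basis -- hence the low-dimensional image subspace.
b = sh_basis_9((0.0, 0.0, 1.0))
```

At the pole (0, 0, 1) only the zonal terms (indices 0, 2, 6) are nonzero, as expected.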
Acquiring linear subspaces for face recognition under variable lighting
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2005
Abstract

Cited by 131 (2 self)
Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: A large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources, and again PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space, and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.
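One way to realize the scheme above in code: stack the k acquired images as columns, orthonormalize them, and score a probe image by its residual against the resulting subspace. A sketch on synthetic data, not the authors' implementation; all names and sizes are assumptions:

```python
import numpy as np

# k images of one person under k single point sources, flattened to vectors
# and stacked as columns (synthetic stand-ins here).
rng = np.random.default_rng(2)
n_pixels, k = 100, 9
images = rng.standard_normal((n_pixels, k))

# Orthonormal basis for their span -- the recognition subspace, used directly
# with no PCA or 3D reconstruction.
Q, _ = np.linalg.qr(images)

# A probe lit by some mixture of the sources lies in the span, so its
# projection residual is (numerically) zero.
probe = images @ rng.standard_normal(k)
residual = probe - Q @ (Q.T @ probe)
assert np.linalg.norm(residual) < 1e-8
```

Recognition then assigns a probe to the person whose subspace gives the smallest residual.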
Clustered principal components for precomputed radiance transfer
 SIGGRAPH
, 2003
Abstract

Cited by 104 (4 self)
We compress storage and accelerate performance of precomputed radiance transfer (PRT), which captures the way an object shadows, scatters, and reflects light. PRT records over many surface points a transfer matrix. At runtime, this matrix transforms a vector of spherical harmonic coefficients representing distant, low-frequency source lighting into exiting radiance. Per-point transfer matrices form a high-dimensional surface signal that we compress using clustered principal component analysis (CPCA), which partitions many samples into fewer clusters each approximating the signal as an affine subspace. CPCA thus reduces the high-dimensional transfer signal to a low-dimensional set of per-point weights on a per-cluster set of representative matrices. Rather than computing a weighted sum of representatives and applying this result to the lighting, we apply the representatives to the lighting per-cluster (on the CPU) and weight these results per-point (on the GPU). Since the output of the matrix is lower-dimensional than the matrix itself, this reduces computation. We also increase the accuracy of encoded radiance functions with a new least-squares optimal projection of spherical harmonics onto the hemisphere. We describe an implementation on graphics hardware that performs real-time rendering of glossy objects with dynamic self-shadowing and interreflection without fixing the view or light as in previous work. Our approach also allows significantly increased lighting frequency when rendering diffuse objects and includes subsurface scattering.
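The CPU/GPU split described above, applying each cluster's representative matrices to the lighting once and then blending per point, can be sketched as follows. Shapes, names, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sh, n_out = 25, 25          # SH lighting dim, exit-radiance dim
n_rep = 5                     # representatives per cluster (mean + PCA dirs)
reps = rng.standard_normal((n_rep, n_out, n_sh))   # one cluster's matrices
light = rng.standard_normal(n_sh)                  # SH lighting coefficients

# Per-cluster (CPU): transform the lighting through each representative once.
rep_out = reps @ light                             # (n_rep, n_out)

# Per-point (GPU): low-dimensional weights (1 for the cluster mean, then the
# point's PCA coordinates) blend the transformed results.
w = np.array([1.0, 0.3, -0.2, 0.05, 0.0])
radiance = w @ rep_out                             # (n_out,)

# Equivalent but more expensive direct form: blend full matrices first, then
# apply to the lighting. Linearity makes the two orders agree.
direct = np.tensordot(w, reps, axes=1) @ light
assert np.allclose(radiance, direct)
```

Blending n_rep vectors of length n_out per point is far cheaper than blending n_out × n_sh matrices, which is the computational saving the abstract describes.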
Nine Points of Light: Acquiring Subspaces for Face Recognition under Variable Illumination
 CVPR
, 2001
Abstract

Cited by 49 (4 self)
Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces. Basis images spanning this space are usually obtained in one of two ways: A large number of images of the object under different conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, a 3D model (perhaps reconstructed from images) is used to render virtual images under either point sources from which a subspace is derived using PCA or more recently under diffuse synthetic lighting based on spherical harmonics. In this paper, we show that there exists a configuration of nine point light source directions such that by taking nine images of each individual under these single sources, the resulting subspace is effective at recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex intermediate steps such as PCA and 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or physically construct complex diffuse (harmonic) light fields. We provide both theoretical and empirical results to explain why these linear spaces should be good for recognition.
Interactive display of isosurfaces with global illumination
 IEEE Transactions on Visualization and Computer Graphics
, 2006
Abstract

Cited by 11 (2 self)
In many applications, volumetric data sets are examined by displaying isosurfaces, surfaces where the data, or some function of the data, takes on a given value. Interactive applications typically use local lighting models to render such surfaces. This work introduces a method to precompute or lazily compute global illumination to improve interactive isosurface renderings. The precomputed illumination resides in a separate volume and includes direct light, shadows, and interreflections. Using this volume, interactive globally illuminated renderings of isosurfaces become feasible while still allowing dynamic manipulation of lighting, viewpoint and isovalue.
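A sketch of the lookup this enables: at render time, shading samples the precomputed illumination volume at each surface point, for example with trilinear interpolation. Grid layout, names, and data below are assumptions for illustration:

```python
import numpy as np

def sample_trilinear(vol, p):
    """Sample volume `vol` (indexed [z, y, x]) at continuous point p = (x, y, z)."""
    x, y, z = p
    x0, y0, z0 = int(x), int(y), int(z)          # lower grid corner
    fx, fy, fz = x - x0, y - y0, z - z0          # fractional offsets
    c = 0.0
    for dz in (0, 1):                            # blend the 8 corner samples
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c += w * vol[z0 + dz, y0 + dy, x0 + dx]
    return c

# Tiny synthetic stand-in for the precomputed illumination volume.
illum = np.arange(27, dtype=float).reshape(3, 3, 3)   # value = 9z + 3y + x
v = sample_trilinear(illum, (0.5, 0.5, 0.5))
```

Because the sample is a cheap texture-style fetch, lighting, viewpoint, and isovalue stay interactive while the expensive global-illumination computation happens once, offline or lazily.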
Fast multiresolution image operations in the wavelet domain
 IEEE Transactions on Visualization and Computer Graphics
Abstract

Cited by 7 (2 self)
A wide class of operations on images can be performed directly in the wavelet domain by operating on coefficients of the wavelet transforms of the images and other matrices defined by these operations. Operating in the wavelet domain enables one to perform these operations progressively in a coarse-to-fine fashion, operate on different resolutions, manipulate features at different scales, trade off accuracy for speed, and localize the operation in both the spatial and the frequency domains. Performing such operations in the wavelet domain and then reconstructing the result is also often more efficient than performing the same operation in the standard direct fashion. In this paper we demonstrate the applicability and advantages of this framework to three common types of image operations: image blending, 3D warping of images and sequences, and convolution of images and image sequences. Index terms: wavelets, image blending, 3D warping, image-based rendering, convolution, multiresolution operations, progressive computation.
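A minimal sketch of one operation from the class above: image blending done directly on wavelet coefficients. Because the wavelet transform is linear, blending coefficients equals blending pixels. A single-level 1-D Haar transform is used here for brevity; the paper works with 2-D multiresolution transforms:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1-D Haar transform (even-length input)."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # coarse (approximation) band
    dif = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return avg, dif

def ihaar_1d(avg, dif):
    """Inverse of haar_1d."""
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + dif) / np.sqrt(2.0)
    x[1::2] = (avg - dif) / np.sqrt(2.0)
    return x

a = np.array([1.0, 3.0, 5.0, 7.0])
b = np.array([2.0, 2.0, 0.0, 4.0])
t = 0.25

# Blend in the wavelet domain...
aa, ad = haar_1d(a)
ba, bd = haar_1d(b)
blended = ihaar_1d(t * aa + (1 - t) * ba, t * ad + (1 - t) * bd)

# ...and it matches blending in the pixel domain.
assert np.allclose(blended, t * a + (1 - t) * b)
```

The progressive, coarse-to-fine behavior comes from applying the same per-band arithmetic to only the coarse bands first.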
Illumination Modeling and Normalization for Face Recognition
 Proc.IEEE Int. Workshop on Analysis and Modeling of Faces and Gestures
, 2003
Abstract

Cited by 7 (0 self)
In this paper, we present a general framework for face modeling under varying lighting conditions. First, we show that a face lighting subspace can be constructed based on three or more training face images illuminated by non-coplanar lights. The lighting of any face image can be represented as a point in this subspace. Second, we show that the extreme rays, i.e. the boundary of an illumination cone, cover the entire light sphere. Therefore, a relatively sparse set of sampled face images can be used to build a face model instead of calculating each extremely illuminated face image. Third, we present a face normalization algorithm, illumination alignment, i.e. changing the lighting of one face image to that of another face image. Experiments are presented.
Stupid Spherical Harmonics (SH) Tricks
Abstract

Cited by 5 (0 self)
This paper is a companion to a GDC 2008 lecture with the same title. It provides a brief overview of spherical harmonics (SH) and discusses several ways they can be used in interactive graphics, along with problems that might arise. In particular it focuses on the following issues: how to evaluate lighting models efficiently using SH, what "ringing" is and what you can do about it, and efficient evaluation of SH products and where they might be used. The most up-to-date version is available on the web.
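One remedy discussed for SH "ringing" is windowing: attenuating higher SH bands before reconstruction. A sketch using a cosine roll-off per band; the exact window choice and function name here are assumptions, not necessarily the paper's:

```python
import numpy as np

def window_sh(coeffs, order):
    """Attenuate SH bands to suppress ringing.
    coeffs: flat real-SH coefficients for bands 0..order-1 (order**2 values).
    Band l (2l+1 coefficients) is scaled by a Hann-like factor that is 1 at
    band 0 and rolls off toward the highest band."""
    out = np.array(coeffs, dtype=float)
    i = 0
    for l in range(order):
        w = np.cos(np.pi / 2.0 * l / order)   # assumed window shape
        out[i : i + 2 * l + 1] *= w
        i += 2 * l + 1
    return out

c = np.ones(9)            # 3 bands (order 3), all coefficients 1
cw = window_sh(c, 3)      # band 0 unchanged; bands 1 and 2 attenuated
```

Windowing trades sharpness for monotonicity: the reconstructed function overshoots less around hard edges at the cost of extra blur.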
Face Relighting from a Single Image under Arbitrary Unknown Lighting Conditions
Abstract

Cited by 3 (0 self)
In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions. Index Terms—Face synthesis and recognition, Markov random field, 3D spherical harmonic basis morphable model, vision for graphics.