Results 1–10 of 38
Discrete Elastic Rods
Abstract

Cited by 64 (7 self)
We present a discrete treatment of adapted framed curves, parallel transport, and holonomy, thus establishing the language for a discrete geometric model of thin flexible rods with arbitrary cross section and undeformed configuration. Our approach differs from existing simulation techniques in the graphics and mechanics literature both in the kinematic description—we represent the material frame by its angular deviation from the natural Bishop frame—as well as in the dynamical treatment—we treat the centerline as dynamic and the material frame as quasistatic. Additionally, we describe a manifold projection method for coupling rods to rigid bodies and simultaneously enforcing rod inextensibility. The use of quasistatics and constraints provides an efficient treatment for stiff twisting and stretching modes; at the same time, we retain the dynamic bending of the centerline and accurately reproduce the coupling between bending and twisting modes. We validate the discrete rod model via quantitative buckling, stability, and coupled-mode experiments, and via qualitative knot-tying comparisons.
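The discrete parallel transport underlying the Bishop frame can be sketched as the rotation taking one edge tangent onto the next, applied to a frame vector. A minimal illustration (function and variable names are my own, not the authors' code):

```python
import numpy as np

def parallel_transport(u, t0, t1):
    """Transport frame vector u from edge tangent t0 to t1 by the rotation
    about t0 x t1 that maps t0 onto t1 (discrete parallel transport)."""
    b = np.cross(t0, t1)
    nb = np.linalg.norm(b)
    if nb < 1e-12:                 # tangents already parallel: no rotation
        return u.copy()
    b /= nb
    n0 = np.cross(t0, b)           # orthonormal basis adapted to t0
    n1 = np.cross(t1, b)           # orthonormal basis adapted to t1
    # Express u in (t0, b, n0), re-emit it in (t1, b, n1)
    return np.dot(u, t0) * t1 + np.dot(u, b) * b + np.dot(u, n0) * n1

# Transporting across a 90-degree turn keeps the frame adapted
# (orthogonal to the tangent) without introducing any twist.
t0 = np.array([1.0, 0.0, 0.0])
t1 = np.array([0.0, 1.0, 0.0])
u = np.array([0.0, 0.0, 1.0])      # normal to both tangents
transported = parallel_transport(u, t0, t1)
```

Transporting the tangent itself maps `t0` to `t1` exactly, which is the defining property of this rotation.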
Real-time BRDF editing in complex lighting
 ACM TOG (PROC. OF SIGGRAPH)
, 2006
Abstract

Cited by 48 (4 self)
Current systems for editing BRDFs typically allow users to adjust analytic parameters while visualizing the results in a simplified setting (e.g., unshadowed point light). This paper describes a real-time rendering system that enables interactive edits of BRDFs, as rendered in their final placement on objects in a static scene, lit by direct, complex illumination. All-frequency effects (ranging from near-mirror reflections and hard shadows to diffuse shading and soft shadows) are rendered using a precomputation-based approach. Inspired by real-time relighting methods, we create a linear system that fixes lighting and view to allow real-time BRDF manipulation. In order to linearize the image’s response to BRDF parameters, we develop an intermediate curve-based representation, which also reduces the rendering and precomputation operations to 1D while maintaining accuracy for a very general class of BRDFs. Our system can be used to edit complex analytic BRDFs (including anisotropic models), as well as measured reflectance data. We improve on the standard precomputed radiance transfer (PRT) rendering computation by introducing an incremental rendering algorithm that takes advantage of frame-to-frame coherence. We show that it is possible to render reference-quality images while only updating 10% of the data at each frame, sustaining frame rates of 25–30 fps.
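The key structural idea, that with lighting and view fixed the image is linear in a sampled 1D BRDF curve, reduces re-rendering to a matrix-vector product. A sketch with random stand-in data (not the paper's implementation or data layout):

```python
import numpy as np

rng = np.random.default_rng(3)

# With lighting and view fixed, each pixel's response to the BRDF is a
# precomputed row of transport weights over samples of a 1D BRDF curve.
n_pixels, n_samples = 8, 16
T = rng.random((n_pixels, n_samples))           # precomputed transport
curve = np.exp(-np.linspace(0.0, 3.0, n_samples))  # edited 1D BRDF curve

image = T @ curve                               # re-render: one dot product/pixel
```

Linearity is what makes interactive edits cheap: any edit to `curve` re-renders without touching the precomputation, and responses to two edits superpose.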
Inverse shade trees for nonparametric material representation and editing
 ACM Trans. Graph
, 2006
Abstract

Cited by 44 (13 self)
Recent progress in the measurement of surface reflectance has created a demand for nonparametric appearance representations that are accurate, compact, and easy to use for rendering. Another crucial goal, which has so far received little attention, is editability: for practical use, we must be able to change both the directional and spatial behavior of surface reflectance (e.g., making one material shinier, another more anisotropic, and changing the spatial “texture maps” indicating where each material appears). We introduce an Inverse Shade Tree framework that provides a general approach to estimating the “leaves” of a user-specified shade tree from high-dimensional measured datasets of appearance. These leaves are sampled 1- and 2-dimensional functions that capture both the directional behavior of individual materials and their spatial mixing patterns. In order to compute these shade trees automatically, we map the problem to matrix factorization and introduce a flexible new algorithm that allows for constraints such as nonnegativity, sparsity, and energy conservation. Although we cannot infer every type of shade tree, we demonstrate the ability to reduce multi-gigabyte measured datasets of the Spatially-Varying Bidirectional Reflectance Distribution Function (SVBRDF) into a compact representation that may be edited in real time.
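The constrained matrix factorization at the heart of this pipeline can be illustrated with the classic Lee-Seung multiplicative updates, which enforce nonnegativity by construction. This is a generic stand-in for the paper's more flexible algorithm, on random data:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, rank, iters=500, eps=1e-9):
    """Factor V ~= W @ H with all entries nonnegative, via Lee-Seung
    multiplicative updates (a stand-in for the constrained factorizer
    described in the abstract; sparsity/energy terms omitted)."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update "leaf" curves
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing weights
    return W, H

# A nonnegative matrix of exact rank 2 is recovered almost perfectly.
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, rank=2)
residual = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Multiplicative updates never introduce negative entries, which matches the physical requirement that reflectance curves and blend weights stay nonnegative.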
GRINSPUN E.: Frequency domain normal map filtering
 Trans. on Graphics, Siggraph’07
Abstract

Cited by 35 (4 self)
Filtering is critical for representing detail, such as color textures or normal maps, across a variety of scales. While MIP-mapping texture maps is commonplace, accurate normal map filtering remains a challenging problem because of nonlinearities in shading—we cannot simply average nearby surface normals. In this paper, we show analytically that normal map filtering can be formalized as a spherical convolution of the normal distribution function (NDF) and the BRDF, for a large class of common BRDFs such as Lambertian, microfacet, and factored measurements. This theoretical result explains many previous filtering techniques as special cases, and leads to a generalization to a broader class of measured and analytic BRDFs. Our practical algorithms leverage a significant body of work that has studied lighting-BRDF convolution. We show how spherical harmonics can be used to filter the NDF for Lambertian and low-frequency specular BRDFs, while spherical von Mises-Fisher distributions can be used for high-frequency materials.
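Fitting a single von Mises-Fisher lobe to a footprint of normals is the basic operation such an approach needs. A minimal sketch using the standard mean-resultant-length estimate for the concentration (my own illustration, not the paper's code):

```python
import numpy as np

def fit_vmf(normals):
    """Fit one von Mises-Fisher lobe to a set of unit normals: mean
    direction mu, and concentration kappa estimated from the mean
    resultant length r via the common approximation
    kappa ~= r (3 - r^2) / (1 - r^2)."""
    m = normals.mean(axis=0)
    r = np.linalg.norm(m)                  # in (0, 1); 1 = all normals agree
    mu = m / r
    kappa = r * (3.0 - r * r) / (1.0 - r * r)
    return mu, kappa

rng = np.random.default_rng(1)

def sample_near_z(spread, n=500):
    """Unit normals scattered around +z with the given perturbation."""
    v = np.array([0.0, 0.0, 1.0]) + spread * rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

mu_tight, kappa_tight = fit_vmf(sample_near_z(0.05))
mu_wide, kappa_wide = fit_vmf(sample_near_z(0.5))
```

A tightly clustered footprint yields a high concentration (sharp NDF lobe); a rough footprint yields a low one, which is exactly the scale-dependent behavior a filtered normal map must encode.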
A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination
 CONDITIONALLY ACCEPTED TO ACM TRANSACTIONS ON GRAPHICS (2007–2008)
Abstract

Cited by 25 (2 self)
The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view), by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when considering interreflections. We introduce a framework for precomputing a multi-bounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye, and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing for precomputing the tensor of coefficients for BRDF basis functions. At runtime, the high-dimensional tensors can be reduced to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes, with interactive feedback rendered with global illumination.
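The runtime reduction to a per-pixel dot product can be sketched as evaluating a polynomial in the BRDF basis weights: the precomputed tensor supplies one coefficient per monomial per pixel. The data below are random stand-ins; the paper's actual coefficients come from Monte Carlo path tracing:

```python
import numpy as np

def monomials(c, degree):
    """All monomials of the BRDF basis weights c up to the given total
    degree (a hypothetical two-bounce setting, so degree 2 suffices)."""
    terms = [1.0]                       # constant term (e.g., emission)
    terms += list(c)                    # linear: direct lighting
    if degree >= 2:                     # quadratic: one interreflection
        terms += [c[i] * c[j] for i in range(len(c)) for j in range(i, len(c))]
    return np.array(terms)

rng = np.random.default_rng(2)
n_basis, n_pixels = 4, 6
n_terms = 1 + n_basis + n_basis * (n_basis + 1) // 2
P = rng.random((n_pixels, n_terms))     # stand-in precomputed coefficients

c = rng.random(n_basis)                 # current BRDF edit
image = P @ monomials(c, degree=2)      # one dot product per pixel
```

Setting all basis weights to zero leaves only the constant term, a quick sanity check that the monomial ordering matches the coefficient layout.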
Exploiting Temporal Coherence for Incremental
Abstract
Precomputed radiance transfer (PRT) enables all-frequency relighting with complex illumination, materials, and shadows. To achieve real-time performance, PRT exploits angular coherence in the illumination, and spatial coherence in the light transport. Temporal coherence of the lighting from frame to frame is an important, but unexplored additional form of coherence for PRT. In this paper, we develop incremental methods for approximating the differences in lighting between consecutive frames. We analyze the lighting wavelet decomposition over typical motion sequences, and observe differing degrees of temporal coherence across levels of the wavelet hierarchy. To address this, our algorithm treats each level separately, adapting to available coherence. The proposed method is orthogonal to other forms of coherence, and can be added to almost any all-frequency PRT algorithm with minimal implementation, computation, or memory overhead. We demonstrate our technique within existing codes for nonlinear wavelet approximation, changing view with BRDF factorization, and clustered PCA. Exploiting temporal coherence of dynamic lighting yields a 3×–4× performance improvement; e.g., all-frequency effects are achieved with 30 wavelet coefficients per frame for the lighting, about the same as low-frequency spherical harmonic methods. Distinctly, our algorithm smoothly converges to the exact result within a few frames of the lighting becoming static.
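The per-level incremental idea can be sketched as follows: for each wavelet level, apply only the largest coefficient differences between consecutive frames, with a budget that adapts to that level's coherence. All names and the budgeting scheme here are illustrative, not the paper's:

```python
import numpy as np

def incremental_update(prev_coeffs, cur_coeffs, budget_per_level):
    """Per wavelet level, apply only the largest coefficient differences
    between consecutive frames. prev_coeffs/cur_coeffs are lists of
    per-level coefficient arrays; budget_per_level[lvl] caps how many
    deltas level lvl may spend this frame."""
    patched = [c.copy() for c in prev_coeffs]
    sent = 0
    for lvl, (prev, cur) in enumerate(zip(prev_coeffs, cur_coeffs)):
        delta = cur - prev
        k = min(budget_per_level[lvl], delta.size)
        idx = np.argsort(-np.abs(delta))[:k]   # largest changes first
        patched[lvl][idx] += delta[idx]
        sent += k
    return patched, sent

# Two levels: the coarse level changed in two places, the fine level in
# one, so a budget of [2, 1] reproduces the new frame exactly.
prev = [np.zeros(4), np.zeros(8)]
cur = [np.array([5.0, 0.0, -3.0, 0.0]), np.zeros(8)]
cur[1][2] = 1.0
patched, sent = incremental_update(prev, cur, budget_per_level=[2, 1])
```

When the lighting becomes static, the remaining deltas shrink to zero and the patched coefficients converge to the exact frame, mirroring the convergence behavior the abstract describes.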
AGRAWALA M.: Efficient shadows from sampled environment maps
 Journal of Graphics Tools
Abstract

Cited by 5 (4 self)
This paper addresses the problem of efficiently calculating shadows from environment maps. Since accurate rendering of shadows from environment maps requires hundreds of lights, the expensive computation is determining visibility from each pixel to each light direction, such as by ray tracing. We show that coherence in both spatial and angular domains can be used to reduce the number of shadow rays that need to be traced. Specifically, we use a coarse-to-fine evaluation of the image, predicting visibility by reusing visibility calculations from four nearby pixels that have already been evaluated. This simple method allows us to explicitly mark regions of uncertainty in the prediction. By only tracing rays in these and neighboring directions, we are able to reduce the number of shadow rays traced by up to a factor of 20 while maintaining error rates below 0.01%. For many scenes, our algorithm can add shadowing from hundreds of lights at twice the cost of rendering without shadows.
Figure 1. A scene illuminated by a sampled environment map. The left image (standard ray tracing, 60 million shadow rays) is rendered in POV-Ray using shadow-ray tracing to determine light-source visibility for the 400 lights in the scene, as sampled from the environment according to [ARBJ03]. The center image (Coherence-Based Sampling, 6 million shadow rays, approximately equal quality) uses our method to render the same scene with a 90% reduction in shadow rays traced. The right image (standard ray tracing, 7 million shadow rays, approximately equal work) is again traced in POV-Ray, but with a reduced sampling of the environment map (50 lights, again using [ARBJ03]) to approximate the number of shadow rays traced using our method. Note that the lower sampling of the environment map in the right image does not faithfully reproduce the soft shadows.
Efficient Shadows for Sampled Environment Maps
Abstract

Cited by 2 (2 self)
This paper addresses the problem of efficiently calculating shadows from environment maps in the context of ray tracing. Since accurate rendering of shadows from environment maps requires hundreds of lights, the expensive computation is determining visibility from each pixel to each light direction. We show that coherence in both spatial and angular domains can be used to reduce the number of shadow rays that need to be traced. Specifically, we use a coarse-to-fine evaluation of the image, predicting visibility by reusing visibility calculations from four nearby pixels that have already been evaluated. This simple method allows us to explicitly mark regions of uncertainty in the prediction. By only tracing rays in these and neighboring directions, we are able to reduce the number of shadow rays traced by up to a factor of 20 while maintaining error rates below 0.01%. For many scenes, our algorithm can add shadowing from hundreds of lights at only twice the cost of rendering without shadows. Sample source code is available online.
Figure 1: A scene illuminated by a sampled environment map. The left image (standard ray tracing, 60 million shadow rays) is rendered in POV-Ray using shadow-ray tracing to determine light-source visibility for the 400 lights in the scene, as sampled from the environment according to [Agarwal et al. 2003]. The center image (our method, 6 million shadow rays, approximately equal quality) uses our Coherence-Based Sampling to render the same scene with a 90% reduction in shadow rays traced. The right image (standard ray tracing, 7 million shadow rays, approximately equal work) is again traced in POV-Ray, but with a reduced sampling of the environment map (50 lights, again using [Agarwal et al. 2003]) to approximate the number of shadow rays traced using our method. Note that the lower sampling of the environment map in the right image does not faithfully reproduce the soft shadows.
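The prediction step shared by both versions of this work can be sketched directly: per light, where the four already-shaded neighbors agree, reuse their answer; where they disagree, mark the light uncertain and trace a shadow ray. A toy illustration (names and data are mine):

```python
import numpy as np

def predict_visibility(neighbors):
    """Predict a pixel's per-light visibility from four already-shaded
    neighbor pixels. neighbors: (4, n_lights) boolean visibility masks.
    Returns (predicted-visible mask, uncertain mask needing shadow rays)."""
    agree_visible = neighbors.all(axis=0)       # all four say visible
    agree_blocked = (~neighbors).all(axis=0)    # all four say blocked
    uncertain = ~(agree_visible | agree_blocked)
    return agree_visible, uncertain

# 5 lights, four neighbor pixels: lights 0-1 agree visible, light 2
# agrees blocked, lights 3-4 disagree and must be ray traced.
nb = np.array([[1, 1, 0, 1, 0],
               [1, 1, 0, 0, 1],
               [1, 1, 0, 1, 1],
               [1, 1, 0, 0, 0]], dtype=bool)
pred, uncertain = predict_visibility(nb)
```

Only the uncertain lights (and, in the papers, their angular neighbors) cost shadow rays, which is where the reported factor-of-20 reduction comes from.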
Exploiting Temporal Coherence for Precomputation Based Rendering
, 2006
Abstract

Cited by 1 (1 self)
Precomputed radiance transfer (PRT) generates impressive images with complex illumination, materials, and shadows with real-time interactivity. These methods separate the scene’s static and dynamic components, allowing the static portion to be computed as a preprocess. In this work, we hold geometry static and allow either the lighting or BRDF to be dynamic. To achieve real-time performance, both static and dynamic components are compressed by exploiting spatial and angular coherence. Temporal coherence of the dynamic component from frame to frame is an important, but unexplored additional form of coherence. In this thesis, we explore temporal coherence of two forms of all-frequency PRT: BRDF material editing and lighting design. We develop incremental methods for approximating the differences in the dynamic component between consecutive frames. For BRDF editing, we find that a pure incremental approach allows quick convergence to an exact solution with smooth real-time response. For relighting, we observe vastly differing degrees of temporal coherence across levels of the lighting’s wavelet hierarchy. To address this, we develop an algorithm that treats each level separately, adapting to available coherence. The proposed methods are orthogonal to other forms of coherence.
Reflectance Sharing: Image-based Rendering from a Sparse Set of Images
Abstract
When the shape of an object is known, its appearance is determined by the spatially-varying reflectance function defined on its surface. Image-based rendering methods that use geometry seek to estimate this function from image data. Most existing methods recover a unique angular reflectance function (e.g., BRDF) at each surface point and provide reflectance estimates with high spatial resolution. Their angular accuracy is limited by the number of available images, and as a result, most of these methods focus on capturing parametric or low-frequency angular reflectance effects, or allowing only one of lighting or viewpoint variation. We present an alternative approach that enables an increase in the angular accuracy of a spatially-varying reflectance function in exchange for a decrease in spatial resolution. By framing the problem as scattered-data interpolation in a mixed spatial and angular domain, reflectance information is shared across the surface, exploiting the high spatial resolution that images provide to fill the holes between sparsely observed view and lighting directions. Since the BRDF typically varies slowly from point to point over much of an object’s surface, this method enables image-based rendering from a sparse set of images without assuming a parametric reflectance model. In fact, the method can even be applied in the limiting case of a single input image.
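Scattered-data interpolation in a joint spatial-angular coordinate space can be illustrated with a generic Gaussian radial-basis interpolant; this is a stand-in for whatever interpolation scheme the paper actually uses, on toy data:

```python
import numpy as np

def rbf_interpolate(X, y, Xq, eps=1.0):
    """Gaussian radial-basis interpolation of sparse reflectance samples
    in a joint (spatial, angular) coordinate space: samples nearby in
    either position or angle share their measurements.
    X: (n, d) sample coordinates, y: (n,) values, Xq: (m, d) queries."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    # Small ridge term keeps the kernel system well posed.
    w = np.linalg.solve(K(X, X) + 1e-8 * np.eye(len(X)), y)
    return K(Xq, X) @ w

# Four sparse samples in a 2D (position, angle) space of a smooth
# stand-in reflectance f = x + theta.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X.sum(axis=1)
recon = rbf_interpolate(X, y, X)        # reproduces the sparse samples
mid = rbf_interpolate(X, y, np.array([[0.5, 0.5]]))  # fills the hole
```

The interpolant passes through the sparse observations and smoothly fills unobserved view/lighting directions between them, which is the trade the abstract describes: angular accuracy bought with spatial smoothness.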