Results 1 – 7 of 7
Combining edges and points for interactive high-quality rendering
ACM Trans. Graph., 2003
Abstract

Cited by 71 (7 self)
This paper presents a new interactive rendering and display technique for complex scenes with expensive shading, such as global illumination. Our approach combines sparsely sampled shading (points) and analytically computed discontinuities (edges) to interactively generate high-quality images. The edge-and-point image is a new compact representation that combines edges and points such that fast, table-driven interpolation of pixel shading from nearby point samples is possible, while respecting discontinuities. The edge-and-point renderer is extensible, permitting the use of arbitrary shaders to collect shading samples. Shading discontinuities, such as silhouettes and shadow edges, are found at interactive rates. Our software implementation supports interactive navigation and object manipulation in scenes that include expensive lighting effects (such as global illumination) and geometrically complex objects. For interactive rendering we show that high-quality images of these scenes can be rendered at 8–14 frames per second on a desktop PC: a speedup of 20–60 over a ray tracer computing a single sample per pixel.
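The discontinuity-respecting interpolation this abstract describes can be sketched in a heavily simplified form: a pixel's shade is averaged from nearby point samples, but any sample separated from the pixel by an edge is excluded. Everything below (the 1-D layout, the `edge_between` test, the fallback value) is an assumption for illustration, not the paper's actual edge-and-point image or its table-driven scheme.

```python
# Toy sketch of interpolation that respects discontinuity edges (assumed
# 1-D simplification, not the paper's actual data structure).

def interpolate_pixel(x, samples, edges):
    """Average shading from samples on the same side of every edge.

    x       -- pixel coordinate (1-D for simplicity)
    samples -- list of (position, shade) point samples
    edges   -- positions of shading discontinuities (e.g. shadow edges)
    """
    def edge_between(a, b):
        # a sample is rejected if any edge lies between it and the pixel
        return any(min(a, b) < e < max(a, b) for e in edges)

    usable = [shade for pos, shade in samples if not edge_between(x, pos)]
    if not usable:
        return 0.0  # no valid nearby sample: fall back to background
    return sum(usable) / len(usable)

# A shadow edge at x = 5.0 separates lit (1.0) from shadowed (0.2) samples.
samples = [(4.0, 1.0), (4.5, 1.0), (5.5, 0.2), (6.0, 0.2)]
edges = [5.0]
print(interpolate_pixel(4.2, samples, edges))  # lit side only -> 1.0
print(interpolate_pixel(5.8, samples, edges))  # shadowed side only -> 0.2
```

Without the edge test, both pixels would blend lit and shadowed samples and the shadow boundary would blur, which is exactly the artifact the edge-and-point representation avoids.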
Delay Streams for Graphics Hardware, 2003
Abstract

Cited by 34 (3 self)
In causal processes, decisions do not depend on future data. Many well-known problems, such as occlusion culling, order-independent transparency, and edge antialiasing, cannot be properly solved using traditional causal rendering architectures, because future data may change the interpretation of current events.
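The core idea, deferring a decision until some future data has been seen, can be illustrated with a toy model: each fragment's keep/cull decision is delayed until a window of later fragments has arrived, so a future occluder can cull a fragment that a strictly causal pipeline would already have shaded. The fragment format and culling rule below are assumptions for the sketch, not the paper's hardware architecture.

```python
from collections import deque

# Toy delay stream: defer each fragment's keep/cull decision until `delay`
# later fragments have arrived (assumed simplification of the idea).

def delayed_cull(fragments, delay):
    """fragments: iterable of (pixel, depth) pairs in arrival order.
    Returns the surviving fragments, in decision order."""
    window = deque()
    survivors = []

    def decide_oldest():
        pixel, depth = window.popleft()
        # culled if a closer fragment at the same pixel arrived later,
        # i.e. is still inside the delay window
        occluded = any(p == pixel and d < depth for p, d in window)
        if not occluded:
            survivors.append((pixel, depth))

    for frag in fragments:
        window.append(frag)
        if len(window) > delay:
            decide_oldest()
    while window:  # drain remaining decisions at end of stream
        decide_oldest()
    return survivors

frags = [("p", 5.0), ("q", 3.0), ("p", 2.0)]
print(delayed_cull(frags, delay=0))  # causal: the hidden ("p", 5.0) survives
print(delayed_cull(frags, delay=2))  # delayed: ("p", 5.0) is culled
```

With `delay=0` the pipeline is purely causal and cannot cull the first fragment at pixel `p`, even though a closer one arrives later; a nonzero delay lets the future occluder do so.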
Model-Based 3D Hand Pose Estimation from Monocular Video
Abstract

Cited by 22 (2 self)
A novel model-based approach to 3D hand tracking from monocular video is presented. The 3D hand pose, the hand texture and the illuminant are dynamically estimated through minimization of an objective function. Derived from an inverse problem formulation, the objective function enables explicit use of temporal texture continuity and shading information, while handling important self-occlusions and time-varying illumination. The minimization is done efficiently using a quasi-Newton method, for which we provide a rigorous derivation of the objective function gradient. Particular attention is given to terms related to the change of visibility near self-occlusion boundaries that are neglected in existing formulations. To this end we introduce new occlusion forces and show that using all gradient terms greatly improves the performance of the method. Qualitative and quantitative experimental results demonstrate the potential of the approach.
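The abstract stresses a rigorous derivation of the objective-function gradient, including terms that earlier formulations dropped. The standard way to validate such a derivation is a central finite-difference check against the analytic gradient. The quadratic objective below is a hypothetical stand-in; the paper's hand-tracking objective (texture, shading, occlusion-force terms) is far more involved.

```python
# Finite-difference check of an analytic gradient -- the standard sanity
# test before feeding a hand-derived gradient to a quasi-Newton minimizer.

def objective(theta):
    # toy objective: f(theta) = sum_i (theta_i - i)^2
    return sum((t - i) ** 2 for i, t in enumerate(theta))

def analytic_grad(theta):
    # hand-derived gradient: df/dtheta_i = 2 (theta_i - i)
    return [2.0 * (t - i) for i, t in enumerate(theta)]

def numeric_grad(f, theta, h=1e-6):
    # central differences, one coordinate at a time
    grad = []
    for i in range(len(theta)):
        plus, minus = list(theta), list(theta)
        plus[i] += h
        minus[i] -= h
        grad.append((f(plus) - f(minus)) / (2.0 * h))
    return grad

theta = [0.5, 2.0, -1.0]
errors = [abs(a - n)
          for a, n in zip(analytic_grad(theta), numeric_grad(objective, theta))]
print(max(errors) < 1e-4)  # True: the derivation matches
```

A gradient with missing terms, like the neglected visibility terms the paper points out, shows up immediately as a large discrepancy in exactly this kind of check.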
A Hierarchical Shadow Volume Algorithm
Graphics Hardware (2004), T. Akenine-Möller, M. McCool (Editors)
Abstract
[Figure: a shadow boundary (green/light gray), and an image showing accurately processed boundary tiles (darker gray) and data copied from the low-resolution shadows (lighter gray). If a tile contains a shadow boundary (3rd image), the corresponding low-resolution shadow data is more or less random; this is corrected by applying per-pixel rasterization to boundary tiles.]
The shadow volume algorithm is a popular technique for real-time shadow generation using graphics hardware. Its major disadvantage is that it is inherently fill-rate-limited, as the performance is inversely proportional to the area of the projected shadow volumes. We present a new algorithm that reduces the shadow volume rasterization work significantly. With our algorithm, the amount of per-pixel processing becomes proportional to the screen-space length of the visible shadow boundary instead of the projected area. The first stage of the algorithm finds 8 × 8 pixel tiles whose 3D bounding boxes are either completely inside or outside the shadow volume. After that, the second stage performs per-pixel computations only for the potential shadow boundary tiles. We outline a two-pass implementation, and also describe an efficient single-pass hardware architecture, in which the two stages are separated using a delay stream. The only modification required in applications is a new pair of calls for marking the beginning and end of a shadow volume. In our test scenes, the algorithm processes up to 11.5 times fewer pixels compared to current state-of-the-art methods, while reducing the external video memory bandwidth by a factor of up to 17.1.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Shadowing; I.3.1 [Computer Graphics]: Hardware Architecture—Graphics Processors
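The two-stage structure the abstract describes can be sketched with a toy example: classify 8 × 8 tiles as fully inside, fully outside, or potentially on the shadow boundary, and spend per-pixel work only on boundary tiles. The half-plane "shadow" below is an assumed stand-in for the paper's real per-tile test of 3D bounding boxes against the shadow volume.

```python
# Toy two-stage shadow classification: coarse 8x8 tile test, then
# per-pixel work only on boundary tiles (assumed half-plane "shadow").

TILE, WIDTH, HEIGHT = 8, 64, 64
SHADOW_EDGE_X = 20  # pixels with x < 20 are in shadow (toy scene)

def in_shadow(x, y):
    return x < SHADOW_EDGE_X

def render_shadow_mask():
    per_pixel_tests = 0
    shadow_pixels = 0
    for ty in range(0, HEIGHT, TILE):
        for tx in range(0, WIDTH, TILE):
            # Stage 1: conservative tile classification. For this
            # axis-aligned half-plane, the two extreme x corners suffice;
            # the real algorithm tests the tile's 3D bounding box.
            left = in_shadow(tx, ty)
            right = in_shadow(tx + TILE - 1, ty)
            if left and right:          # tile fully inside the shadow
                shadow_pixels += TILE * TILE
            elif not (left or right):   # tile fully lit: nothing to do
                pass
            else:                       # Stage 2: per-pixel boundary work
                for y in range(ty, ty + TILE):
                    for x in range(tx, tx + TILE):
                        per_pixel_tests += 1
                        shadow_pixels += in_shadow(x, y)
    return per_pixel_tests, shadow_pixels

tests, pixels = render_shadow_mask()
print(tests, pixels)  # 512 1280 -- vs 4096 per-pixel tests brute force
```

Only the single column of tiles straddling the shadow edge is processed per pixel, so the per-pixel cost scales with the boundary length rather than the shadowed area, which is the point of the hierarchy.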
Interactive Visualization of . . ., 2007
Abstract
Computational simulations frequently generate solutions defined over very large tetrahedral volume meshes containing many millions of elements. Furthermore, solutions over these meshes may often be expressed using nonlinear basis functions. Certain solution techniques, such as discontinuous Galerkin finite element methods, may even produce nonconforming meshes. Such data is difficult to visualize interactively, as it is far too large to fit in memory, and many common data reduction techniques, such as mesh simplification, cannot be applied to nonconforming meshes. Common linear interpolation methods cannot faithfully and accurately evaluate the nonlinear solutions. To provide accurate visualization, in the first part of this dissertation, we introduce a method for pixel-exact evaluation of higher-order solution data on the GPU. We demonstrate the importance of per-pixel rendering versus simple linear interpolation for producing high-quality visualizations. We also show that our system can accommodate reasonably large datasets—space-time meshes containing up to 20 million tetrahedra. To provide interactive visualization, in the second part, we introduce a point-based visualization system for interactive rendering of large, potentially nonconforming, tetrahedral meshes. We propose methods for adaptively sampling points from nonlinear solution data and for decimating points at run time to fit GPU memory limits. Because these are streaming processes, memory consumption is independent of the input size. We also present an order-independent point rendering method that can efficiently render volumes on the order of 20 million tetrahedra at interactive rates.
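The abstract's key memory claim, that decimation is a streaming process whose memory use is independent of input size, is the defining property of reservoir sampling, shown below as a classic stand-in. The dissertation's actual run-time decimation likely weights samples by the solution data rather than sampling uniformly; this sketch only illustrates the bounded-memory streaming structure.

```python
import random

# Streaming point decimation via reservoir sampling: one pass, O(budget)
# memory regardless of input size (illustrative stand-in, not the
# dissertation's adaptive scheme).

def decimate(point_stream, budget, rng=random):
    reservoir = []
    for seen, point in enumerate(point_stream):
        if len(reservoir) < budget:
            reservoir.append(point)
        else:
            # keep each incoming point with probability budget / (seen + 1),
            # which leaves every point in the reservoir with equal probability
            j = rng.randrange(seen + 1)
            if j < budget:
                reservoir[j] = point
    return reservoir

# one million synthetic sample points decimated to a 4096-point GPU budget
points = ((i * 0.1, i * 0.2, i * 0.3) for i in range(1_000_000))
kept = decimate(points, budget=4096)
print(len(kept))  # 4096
```

Because the generator is consumed lazily and only `budget` points are ever held, peak memory is the same whether the mesh yields one million points or one billion.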