Results 1–10 of 24
Combining edges and points for interactive high-quality rendering
ACM Trans. Graph., 2003
Cited by 59 (7 self)
This paper presents a new interactive rendering and display technique for complex scenes with expensive shading, such as global illumination. Our approach combines sparsely sampled shading (points) and analytically computed discontinuities (edges) to interactively generate high-quality images. The edge-and-point image is a new compact representation that combines edges and points such that fast, table-driven interpolation of pixel shading from nearby point samples is possible, while respecting discontinuities. The edge-and-point renderer is extensible, permitting the use of arbitrary shaders to collect shading samples. Shading discontinuities, such as silhouettes and shadow edges, are found at interactive rates. Our software implementation supports interactive navigation and object manipulation in scenes that include expensive lighting effects (such as global illumination) and geometrically complex objects. For interactive rendering we show that high-quality images of these scenes can be rendered at 8–14 frames per second on a desktop PC: a speedup of 20–60 over a ray tracer computing a single sample per pixel.
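The core interpolation idea can be sketched in miniature. The following is a 1-D stand-in, not the authors' table-driven 2-D scheme; all function and parameter names are ours. Pixels are filled by interpolating sparse shading samples, but interpolation never crosses a discontinuity edge, so a shadow boundary stays sharp.

```python
import bisect

def shade_scanline(width, samples, edges):
    """Fill `width` pixels from sparse {pixel: shade} samples, linearly
    interpolating only between samples that lie in the same span between
    discontinuity edges, so shading never bleeds across an edge."""
    bounds = [0] + sorted(edges) + [width]
    out = [0.0] * width
    for lo, hi in zip(bounds, bounds[1:]):
        span = sorted((p, s) for p, s in samples.items() if lo <= p < hi)
        if not span:
            continue                        # no sample in this span: leave default
        xs = [p for p, _ in span]
        for x in range(lo, hi):
            i = bisect.bisect_right(xs, x)
            if i == 0:
                out[x] = span[0][1]         # clamp before the first sample
            elif i == len(xs):
                out[x] = span[-1][1]        # clamp after the last sample
            else:
                (p0, s0), (p1, s1) = span[i - 1], span[i]
                out[x] = s0 + (x - p0) / (p1 - p0) * (s1 - s0)
    return out

# A shadow edge at pixel 5: the bright sample on the left never leaks right.
row = shade_scanline(10, {2: 1.0, 8: 0.0}, edges=[5])
```

Without the edge, a naive interpolator would blur a 6-pixel ramp across the discontinuity; with it, the two spans stay constant on each side.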
The expected number of 3D visibility events is linear
SIAM J. Computing, 2002
Cited by 24 (13 self)
In this paper, we show that, amongst n uniformly distributed unit balls in R³, the expected number of maximal non-occluded line segments tangent to four balls is linear, considerably improving the previously known upper bound. Using our techniques we show a linear bound on the expected size of the visibility complex, a data structure encoding the visibility information of a scene, providing evidence that the storage requirement for this data structure is not necessarily prohibitive.
Guided visibility sampling
ACM Trans. Graph., 2006
Cited by 11 (1 self)
Figure 1: Visualization of sampling strategies (white pixels show a subset of the actual samples; missed geometry is marked red). Left: an urban input scene and a view cell (in yellow) for visibility sampling. Middle: previous visibility sampling algorithms repeatedly sample the same triangles in the foreground while missing many smaller triangles and distant geometry. Right: our solution is guided by scene visibility and therefore quickly finds most visible triangles while requiring drastically fewer samples than previous methods.

This paper addresses the problem of computing the triangles visible from a region in space. The proposed aggressive visibility solution is based on stochastic ray shooting and can take any triangular model as input. We do not rely on connectivity information, volumetric occluders, or the availability of large occluders, and can therefore process any given input scene. The proposed algorithm is practically memoryless, thereby alleviating the large memory consumption problems prevalent in several previous algorithms. The strategy of our algorithm is to use ray mutations in ray space to cast rays that are likely to sample new triangles. Our algorithm improves the sampling efficiency of previous work by over two orders of magnitude.
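A minimal sketch of the mutation strategy, assuming a toy 1-D ray space where a "ray" is a single parameter in [0, 1) and `hit_fn` stands in for a real ray caster; all names and constants are illustrative, not from the paper. Rays that discover new triangles are perturbed, since nearby rays tend to hit neighbouring, not-yet-found triangles.

```python
import random

def sample_visibility(hit_fn, n_initial=200, n_mutations=800, sigma=0.02, seed=0):
    """Aggressive from-region visibility sampling with ray mutations.
    hit_fn(t) maps a ray parameter t in [0, 1) to the id of the first
    triangle hit, or None."""
    rng = random.Random(seed)
    found = set()
    frontier = []                          # rays worth mutating
    for _ in range(n_initial):             # uniform bootstrap phase
        t = rng.random()
        tri = hit_fn(t)
        if tri is not None and tri not in found:
            found.add(tri)
            frontier.append(t)
    for _ in range(n_mutations):           # mutation phase
        if frontier:
            t = (rng.choice(frontier) + rng.gauss(0.0, sigma)) % 1.0
        else:
            t = rng.random()               # fall back to uniform sampling
        tri = hit_fn(t)
        if tri is not None and tri not in found:
            found.add(tri)
            frontier.append(t)
    return found

# Toy scene: 1000 "triangles" are equal slices of the ray space.
visible = sample_visibility(lambda t: int(t * 1000))
```

In the real algorithm the mutations act on 5-D ray space and are biased toward silhouettes and unexplored regions; the sketch only shows the explore-then-perturb loop.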
On the Worst-Case Complexity of the Silhouette of a Polytope
In 15th Canadian Conference on Computational Geometry (CCCG'03), 2003
Cited by 7 (3 self)
We give conditions under which the worst-case size of the silhouette of a polytope is sublinear. We provide examples with linear-size silhouettes if any of these conditions is relaxed. Our bounds are the first nontrivial bounds for the worst-case complexity of silhouettes.
Technical Strategies for Massive Model Visualization
Cited by 6 (3 self)
Interactive visualization of massive models still remains a challenging problem. This is mainly due to a combination of ever-increasing model complexity with the current hardware design trend that leads to a widening gap between slow data access speed and fast data processing speed. We argue that developing efficient data access and data management techniques is key to solving the problem of interactive visualization of massive models. In particular, we discuss visibility culling, simplification, cache-coherent layouts, and data compression as efficient data management techniques that enable interactive visualization of massive models.
The stability of the apparent contour of an orientable 2-manifold
2009
Cited by 5 (3 self)
The (apparent) contour of a smooth mapping from a 2-manifold to the plane, f: M → R², is the set of critical values, that is, the image of the points at which the gradients of the two component functions are linearly dependent. Assuming M is compact and orientable and measuring difference with the erosion distance, we prove that the contour is stable.
An Upper Bound on the Average Size of Silhouettes
In 22nd ACM Symposium on Computational Geometry, 2006
Cited by 5 (0 self)
It is a widely observed phenomenon in computer graphics that the size of the silhouette of a polyhedron is much smaller than the size of the whole polyhedron. This paper provides, for the first time, theoretical evidence supporting this for a large class of objects, namely for polyhedra that approximate surfaces in some reasonable way; the surfaces may be non-convex and non-differentiable and they may have boundaries. We prove that such polyhedra have silhouettes of expected size O(√n), where the average is taken over all points of view and n is the complexity of the polyhedron.
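The O(√n) behaviour is easy to observe empirically on a toy mesh. The sketch below is our own construction, not from the paper: it builds latitude/longitude sphere triangulations, counts silhouette edges (edges shared by a front-facing and a back-facing triangle) averaged over random view directions, and checks that doubling the resolution roughly quadruples the face count while only doubling the silhouette.

```python
import math
import random

def sphere_faces(m):
    """Lat/long triangulation of the unit sphere with m latitude bands;
    vertices are index keys, and the two poles collapse to single vertices."""
    def key(i, j):
        if i == 0: return 'N'
        if i == m: return 'S'
        return (i, j % (2 * m))
    faces = []
    for i in range(m):
        for j in range(2 * m):
            a, b = key(i, j), key(i, j + 1)
            c, d = key(i + 1, j), key(i + 1, j + 1)
            if a != b: faces.append((a, b, d))   # skip degenerate pole triangles
            if c != d: faces.append((a, d, c))
    return faces

def pos(k, m):
    """3-D position of an index key."""
    if k == 'N': return (0.0, 0.0, 1.0)
    if k == 'S': return (0.0, 0.0, -1.0)
    i, j = k
    th, ph = math.pi * i / m, math.pi * j / m
    return (math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), math.cos(th))

def silhouette_size(faces, m, w):
    """Count edges whose two incident faces lie on opposite sides of view
    direction w (the classic front-facing / back-facing test)."""
    sides = {}
    for f in faces:
        a, b, c = (pos(k, m) for k in f)
        u = tuple(b[i] - a[i] for i in range(3))
        v = tuple(c[i] - a[i] for i in range(3))
        n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        front = n[0]*w[0] + n[1]*w[1] + n[2]*w[2] > 0
        for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            sides.setdefault(frozenset(e), set()).add(front)
    return sum(1 for s in sides.values() if len(s) == 2)

def average_silhouette(m, trials=30, seed=0):
    """Silhouette size averaged over random viewpoints, as in the theorem."""
    rng = random.Random(seed)
    faces = sphere_faces(m)
    total = 0
    for _ in range(trials):
        w = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        total += silhouette_size(faces, m, [x / norm for x in w])
    return total / trials
```

This only probes one well-behaved surface; the paper's result covers a much broader class of polyhedral approximations.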
Visibility in Computer Graphics
Journal of Environmental Planning, 2003
Cited by 4 (1 self)
Visibility computation has been crucial to computer graphics from its very beginning. The first visibility algorithms in computer graphics aimed to determine visible surfaces in a synthesized image of a 3D scene. Nowadays there are many different visibility algorithms for various visibility problems. We propose a new taxonomy of visibility problems that is based on a classification according to the problem domain. We provide a broad overview of visibility problems and algorithms in computer graphics grouped by the proposed taxonomy. The paper surveys visible surface algorithms, visibility culling algorithms, visibility algorithms for shadow computation, global illumination, point-based and image-based rendering, and global visibility computations. Finally, we discuss common concepts of visibility algorithm design and several criteria for the classification of visibility algorithms.
On the Size of the 3D Visibility Skeleton: Experimental Results
Cited by 4 (2 self)
The 3D visibility skeleton is a data structure used to encode global visibility information about a set of objects. Previous theoretical results have shown that for k convex polytopes with n edges in total, the worst-case size complexity of this data structure is Θ(n²k²) [Brönnimann et al. 07], whereas for k uniformly distributed unit spheres, the expected size is Θ(k) [Devillers et al. 03]. In this paper, we study the size of the visibility skeleton experimentally. Our results indicate that the size of the 3D visibility skeleton, in our setting, is C k√(nk), where C varies with the scene density but remains small. This is the first experimentally determined asymptotic estimate of the size of the 3D visibility skeleton for reasonably large n and expressed in terms of both n and k. We suggest theoretical explanations for the experimental results we obtained. Our experiments also indicate that the running time of our implementation is O(n^(3/2) k log k), while its worst-case running time complexity is O(n² k² log k).
Adaptive Global Visibility Sampling
Cited by 3 (1 self)
Figure 1: Results of visibility computations after 1 minute of sampling; visibility errors are marked in red. Left: traditional per-view-cell sampling. Middle: Adaptive Global Visibility Sampling. Right: Adaptive Global Visibility Sampling with a visibility filter. Observe the severe underestimation of visibility in the left image. The visibility computed by our method in the middle produces significantly fewer visible artifacts. To the right, our method with a visibility filter applied is practically artifact-free. Note that during this minute, the potentially visible sets for all 8,192 view cells in this example model have been generated.

In this paper we propose a global visibility algorithm which computes from-region visibility for all view cells simultaneously in a progressive manner. We cast rays to sample visibility interactions and use the information carried by a ray for all view cells it intersects. The main contribution of the paper is a set of adaptive sampling strategies based on ray mutations that exploit the spatial coherence of visibility. Our method achieves more than an order of magnitude speedup compared to per-view-cell sampling. This provides a practical solution to visibility preprocessing and also enables a new type of interactive visibility analysis application, where it is possible to quickly inspect and modify a coarse global visibility solution that is constantly refined.
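The ray-sharing idea (though not the paper's adaptive mutation strategies) can be illustrated with a toy 1-D scene; the scene model and all names below are our own assumptions. View cells are the unit intervals [k, k+1); a ray stops at the first occluder it meets, and its unoccluded segment credits that occluder to the PVS of every cell the segment crosses, not just the cell that cast it.

```python
import bisect
import random

def global_visibility(occluders, n_cells, n_rays=4000, seed=0):
    """1-D stand-in for progressive from-region visibility sampling:
    each ray updates the potentially visible set (PVS) of all view
    cells its unoccluded segment intersects."""
    rng = random.Random(seed)
    occluders = sorted(occluders)
    pvs = [set() for _ in range(n_cells)]
    for _ in range(n_rays):
        x0 = rng.uniform(0, n_cells)
        if rng.random() < 0.5:                     # shoot right
            i = bisect.bisect_right(occluders, x0)
        else:                                      # shoot left
            i = bisect.bisect_left(occluders, x0) - 1
        if not 0 <= i < len(occluders):
            continue                               # ray escapes the scene
        lo, hi = sorted((x0, occluders[i]))
        # one ray, many cells: every cell touching [lo, hi] sees occluder i
        for cell in range(max(0, int(lo)), min(n_cells, int(hi) + 1)):
            pvs[cell].add(i)
    return pvs

# Three occluders, eight view cells: a single pass fills all eight PVSs.
pvs = global_visibility([0.5, 3.5, 7.2], n_cells=8)
```

With per-view-cell sampling, each of the 8 cells would need its own ray budget; here every ray is reused by every cell it crosses, which is the source of the speedup the abstract describes.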