Results 1 - 10 of 264
QSplat: A Multiresolution Point Rendering System for Large Meshes
, 2000
"... Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and p ..."
Abstract - Cited by 502 (8 self)
Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.
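To make the abstract's single-hierarchy idea concrete, here is a minimal sketch, not the paper's implementation, of how a bounding-sphere hierarchy can drive view frustum culling, level-of-detail selection, and point rendering in one recursive pass; the SphereNode layout, the camera helpers, and draw_splat are illustrative assumptions, and backface culling via normal cones is omitted for brevity.

```python
# Hypothetical sketch of a QSplat-style traversal: one recursive pass over a
# bounding-sphere hierarchy does frustum culling, LOD selection, and splatting.
# Node layout, camera helpers, and draw_splat are assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SphereNode:
    center: Tuple[float, float, float]   # bounding-sphere center
    radius: float                         # bounding-sphere radius
    normal: Tuple[float, float, float]    # averaged normal for shading
    children: List["SphereNode"] = field(default_factory=list)

def render(node, camera, pixel_threshold, draw_splat):
    if not camera.sphere_in_frustum(node.center, node.radius):   # hypothetical helper
        return                                   # view-frustum culling
    size_px = camera.projected_size(node.center, node.radius)    # hypothetical helper
    if size_px < pixel_threshold or not node.children:
        draw_splat(node.center, node.radius, node.normal)        # LOD cutoff: draw one splat
        return
    for child in node.children:                  # otherwise refine to children
        render(child, camera, pixel_threshold, draw_splat)
```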
Out-of-Core Simplification of Large Polygonal Models
, 2000
"... We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative ve ..."
Abstract - Cited by 159 (10 self)
We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error. The use of quadrics instead of the vertex grading approach in [13] has the additional benefits of requiring less disk space and only a single pass over the model rather than two. The resulting linear-time algorithm allows simplification of datasets of arbitrary complexity.
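A minimal sketch, not Lindstrom's code, of the quadric-based placement the abstract describes: in a single pass, each triangle contributes a plane quadric to the grid cells of its three vertices, and each cluster's representative vertex is the minimizer of its accumulated quadric. The grid resolution and the least-squares fallback for singular quadrics are illustrative choices.

```python
# Sketch of vertex clustering with error-quadric placement (illustrative only).
import numpy as np
from collections import defaultdict

def plane_quadric(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)     # area-weighted plane normal
    d = -np.dot(n, p0)
    p = np.append(n, d)                 # plane coefficients [a, b, c, d]
    return np.outer(p, p)               # 4x4 quadric K = p p^T

def cluster_representatives(vertices, triangles, cell_size):
    """vertices: (N, 3) float array; triangles: iterable of index triples."""
    quadrics = defaultdict(lambda: np.zeros((4, 4)))
    for i, j, k in triangles:            # single streaming pass over the triangles
        K = plane_quadric(vertices[i], vertices[j], vertices[k])
        for v in (i, j, k):
            cell = tuple(np.floor(vertices[v] / cell_size).astype(int))
            quadrics[cell] += K
    reps = {}
    for cell, Q in quadrics.items():
        A, b = Q[:3, :3], -Q[:3, 3]
        # representative vertex = minimizer of the cluster quadric
        # (least squares handles singular or near-singular quadrics)
        reps[cell] = np.linalg.lstsq(A, b, rcond=None)[0]
    return reps
```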
Geometry clipmaps: terrain rendering using nested regular grids
- In SIGGRAPH ’04: ACM SIGGRAPH 2004 Papers
, 2004
"... Illustration using a coarse geometry clipmap (size n=31) View of the 216,000×93,600 U.S. dataset near Grand Canyon (n=255) Figure 1:Terrains rendered using geometry clipmaps, showing clipmap levels (size n×n) and transition regions (in blue on right). Rendering throughput has reached a level that en ..."
Abstract - Cited by 146 (2 self)
[Figure 1: Terrains rendered using geometry clipmaps, showing clipmap levels (size n×n) and transition regions (in blue on the right); a coarse geometry clipmap (n=31) and a view of the 216,000×93,600 U.S. dataset near the Grand Canyon (n=255).]
Rendering throughput has reached a level that enables a novel approach to level-of-detail (LOD) control in terrain rendering. We introduce the geometry clipmap, which caches the terrain in a set of nested regular grids centered about the viewer. The grids are stored as vertex buffers in fast video memory, and are incrementally refilled as the viewpoint moves. This simple framework provides visual continuity, uniform frame rate, complexity throttling, and graceful degradation. Moreover, it allows two new exciting real-time functionalities: decompression and synthesis. Our main dataset is a 40 GB height map of the United States. A compressed image pyramid reduces the size by a remarkable factor of 100, so that it fits entirely in memory. This compressed data also contributes normal maps for shading. As the viewer approaches the surface, we synthesize grid levels finer than the stored terrain using fractal noise displacement. Decompression, synthesis, and normal-map computations are incremental, thereby allowing interactive flight at 60 frames/sec.
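As a rough illustration of the nested-grid idea, the sketch below positions a stack of n×n clipmap levels around the viewer, doubling the sample spacing per level and snapping each window origin so that finer levels stay aligned with coarser ones; the function name, default level count, and snapping rule are assumptions, not the paper's implementation (which additionally performs incremental toroidal updates, decompression, and synthesis).

```python
# Illustrative positioning of nested clipmap levels around the viewer.
def clipmap_levels(viewer_xy, n=255, num_levels=11, finest_spacing=1.0):
    """Return (spacing, origin) for each nested n x n clipmap level (assumed layout)."""
    levels = []
    vx, vy = viewer_xy
    for l in range(num_levels):
        spacing = finest_spacing * (2 ** l)      # sample spacing doubles each level
        half = (n - 1) / 2 * spacing             # half-extent of the level's window
        # snap the window origin to a multiple of twice the spacing so this
        # level's samples coincide with samples of the next-coarser level
        ox = (vx - half) // (2 * spacing) * (2 * spacing)
        oy = (vy - half) // (2 * spacing) * (2 * spacing)
        levels.append((spacing, (ox, oy)))
    return levels

# Example: levels centered near a viewer at (12345.0, 6789.0)
print(clipmap_levels((12345.0, 6789.0))[:3])
```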
Terrain simplification simplified: a general framework for view-dependent out-of-core visualization.
- IEEE Transactions on Visualization and Computer Graphics
, 2002
"... ..."
Lindstrom P., Pascucci V. Visualization of Large Terrains Made Easy
- In Proceedings of IEEE Visualization 2001
"... We present an elegant and simple to implement framework for per-forming out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increas-ingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have ..."
Abstract - Cited by 87 (5 self)
We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts. Our second contribution is a new and simple, yet easy to generalize, method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
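The refinement primitive the abstract names, longest edge bisection, can be sketched in a few lines: each right triangle is split at the midpoint of its hypotenuse until an error test passes, yielding a conforming coarse-to-fine refinement. The triangle ordering, the error callback, and the uniform-depth example are illustrative assumptions; the paper additionally folds frustum culling, triangle stripping, and its cache-friendly data layout into the same traversal.

```python
# Top-down longest-edge bisection over right isosceles triangles (sketch).
def refine(apex, left, right, should_split, max_depth, out, depth=0):
    """apex is the right-angle vertex; the hypotenuse runs from left to right."""
    mid = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    if depth == max_depth or not should_split(mid, depth):
        out.append((apex, left, right))          # emit leaf triangle
        return
    # bisect along the longest edge; children keep consistent winding
    refine(mid, apex, left, should_split, max_depth, out, depth + 1)
    refine(mid, right, apex, should_split, max_depth, out, depth + 1)

# Example: uniformly refine one base triangle of the unit square to depth 4
tris = []
refine((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), lambda mid, depth: True, 4, tris)
print(len(tris))   # 16 leaf triangles
```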
Streaming Meshes
, 2005
"... Recent years have seen an immense increase in the complexity of geometric data sets. Today's gigabyte-sized polygon models can no longer be completely loaded into the main memory of common desktop PCs. Unfortunately, current mesh formats do not account for this. They were designed years ago whe ..."
Abstract - Cited by 86 (18 self)
Recent years have seen an immense increase in the complexity of geometric data sets. Today's gigabyte-sized polygon models can no longer be completely loaded into the main memory of common desktop PCs. Unfortunately, current mesh formats do not account for this. They were designed years ago when meshes were orders of magnitude smaller. Using such formats to store large meshes is inefficient and unduly complicates all subsequent processing.
Adaptive Nonlinear Finite Elements for Deformable Body Simulation Using Dynamic Progressive Meshes
- Computer Graphics Forum
, 2001
"... Realistic behavior of deformable objects is essential for many applications such as simulation for surgical training. Existing techniques of deformable modeling for real time simulation have either used approximate methods that are not physically accurate or linear methods that do not produce reas ..."
Abstract - Cited by 85 (3 self)
Realistic behavior of deformable objects is essential for many applications such as simulation for surgical training. Existing techniques of deformable modeling for real time simulation have either used approximate methods that are not physically accurate or linear methods that do not produce reasonable global behavior. Nonlinear finite element methods (FEM) are globally accurate, but conventional FEM is not real time. In this paper, we apply nonlinear FEM using mass lumping to produce a diagonal mass matrix that allows real time computation. Adaptive meshing is necessary to provide sufficient detail where required while minimizing unnecessary computation. We propose a scheme for mesh adaptation based on an extension of the progressive mesh concept, which we call dynamic progressive meshes.
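A small, self-contained sketch of the mass-lumping step the abstract relies on: replacing the consistent finite element mass matrix with its row sums yields a diagonal matrix, so computing accelerations reduces to a per-node division instead of a linear solve. The toy 3×3 matrix is only an example, not the paper's element matrices.

```python
# Row-sum mass lumping and the resulting trivial "solve" (illustrative only).
import numpy as np

def lump_mass(M_consistent):
    """Diagonal lumped mass matrix whose entries are the row sums of M."""
    return np.diag(M_consistent.sum(axis=1))

def explicit_accelerations(M_lumped, forces):
    # With a diagonal mass matrix, solving M a = f is an elementwise division.
    return forces / np.diag(M_lumped)

M = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]]) / 12.0    # toy consistent mass matrix
f = np.array([0.0, -9.8, 0.0])
a = explicit_accelerations(lump_mass(M), f)
```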
Adaptive TetraPuzzles: Efficient Out-of-Core Construction and Visualization of Gigantic Multiresolution Polygonal Models
- ACM Transactions on Graphics
, 2004
"... We describe an efficient technique for out-of-core construction and accurate view-dependent visualization of very large surface models. The method uses a regular conformal hierarchy of tetrahedra to spatially partition the model. Each tetrahedral cell contains a precomputed simplified version of the ..."
Abstract - Cited by 83 (32 self)
We describe an efficient technique for out-of-core construction and accurate view-dependent visualization of very large surface models. The method uses a regular conformal hierarchy of tetrahedra to spatially partition the model. Each tetrahedral cell contains a precomputed simplified version of the original model, represented using cache-coherent indexed strips for fast rendering. The representation is constructed during a fine-to-coarse simplification of the surface contained in diamonds (sets of tetrahedral cells sharing their longest edge). The construction preprocess operates out-of-core and parallelizes nicely. Appropriate boundary constraints are introduced in the simplification to ensure that all conforming selective subdivisions of the tetrahedron hierarchy lead to correctly matching surface patches. For each frame at runtime, the hierarchy is traversed coarse-to-fine to select diamonds of the appropriate resolution given the view parameters. The resulting system can interactively render high-quality views of out-of-core models of hundreds of millions of triangles at over 40 Hz (or 70M triangles/s) on current commodity graphics platforms.
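A hedged sketch of the coarse-to-fine selection the abstract outlines: descend the hierarchy and, at each node, compare its precomputed object-space error, scaled by distance to the viewer, against a pixel tolerance; if the test passes, draw the cell's precomputed strips, otherwise recurse. The node fields and the error-projection formula are illustrative assumptions, not the paper's data structures.

```python
# Coarse-to-fine selection of appropriately refined cells (sketch, assumed fields).
import math

def select_cells(node, eye, fov_y, viewport_h, pixel_tolerance, draw_patch):
    dist = max(1e-6, math.dist(node.center, eye))
    # approximate projected size, in pixels, of the node's object-space error
    projected_err = node.object_error * viewport_h / (2.0 * dist * math.tan(fov_y / 2.0))
    if projected_err <= pixel_tolerance or not node.children:
        draw_patch(node.patch)                  # precomputed, cache-coherent strips
        return
    for child in node.children:                 # refine where the error is still too large
        select_cells(child, eye, fov_y, viewport_h, pixel_tolerance, draw_patch)
```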
BDAM – batched dynamic adaptive meshes for high performance terrain visualization
- Computer Graphics Forum
, 2003
"... This paper describes an efficient technique for out-of-core rendering and management of large textured terrain surfaces. The technique, called Batched Dynamic Adaptive Meshes (BDAM) , is based on a paired tree structure: a tiled quadtree for texture data and a pair of bintrees of small triangular pa ..."
Abstract - Cited by 82 (14 self)
This paper describes an efficient technique for out-of-core rendering and management of large textured terrain surfaces. The technique, called Batched Dynamic Adaptive Meshes (BDAM), is based on a paired tree structure: a tiled quadtree for texture data and a pair of bintrees of small triangular patches for the geometry. These small patches are TINs and are constructed and optimized off-line with high-quality simplification and tri-stripping algorithms. Hierarchical view frustum culling and view-dependent texture and geometry refinement are performed at each frame through a stateless traversal algorithm. Thanks to the batched CPU/GPU communication model, the proposed technique is not processor intensive and fully harnesses the power of current graphics hardware. Both preprocessing and rendering exploit out-of-core techniques to be fully scalable and to manage large terrain datasets.
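The paired traversal the abstract describes might look roughly like the following: a texture quadtree is refined top-down and, once a tile is bound, the corresponding geometry bintrees are refined until their screen-space error is acceptable. All node fields, view helpers, and tolerances here are assumptions used only to illustrate the stateless, batched structure.

```python
# Stateless paired refinement of texture quadtree and geometry bintrees (sketch).
def refine_terrain(tex_node, view, bind_texture, draw_batch):
    if not view.intersects(tex_node.bounds):
        return                                   # hierarchical frustum culling
    if view.texture_error(tex_node) <= view.texel_tolerance or not tex_node.children:
        bind_texture(tex_node.tile)
        for geo_root in tex_node.geometry_roots: # assumed: a pair of bintree roots per tile
            refine_geometry(geo_root, view, draw_batch)
        return
    for child in tex_node.children:
        refine_terrain(child, view, bind_texture, draw_batch)

def refine_geometry(geo_node, view, draw_batch):
    if not view.intersects(geo_node.bounds):
        return
    if view.geometry_error(geo_node) <= view.pixel_tolerance or not geo_node.children:
        draw_batch(geo_node.tin_strips)          # precomputed, tri-stripped TIN patch
        return
    for child in geo_node.children:
        refine_geometry(child, view, draw_batch)
```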
Out-of-Core Compression for Gigantic Polygon Meshes
, 2003
"... Polygonal models acquired with emerging 3D scanning technology or from large scale CAD applications easily reach sizes of several gigabytes and do not fit in the address space of common 32-bit desktop PCs. In this paper we propose an out-of-core mesh compression technique that converts such gigantic ..."
Abstract - Cited by 81 (23 self)
Polygonal models acquired with emerging 3D scanning technology or from large scale CAD applications easily reach sizes of several gigabytes and do not fit in the address space of common 32-bit desktop PCs. In this paper we propose an out-of-core mesh compression technique that converts such gigantic meshes into a streamable, highly compressed representation. During decompression only a small portion of the mesh needs to be kept in memory at any time. As full connectivity information is available along the decompression boundaries, this provides seamless mesh access for incremental in-core processing on gigantic meshes. Decompression speeds are CPU-limited and exceed one million vertices and two million triangles per second on a 1.8 GHz Athlon processor.
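On the consumer side, the seamless incremental access the abstract mentions could look roughly like the loop below, which keeps only the active decompression boundary in memory and evicts vertices once the decoder marks them finalized. The decoder interface and record kinds are purely hypothetical and are not the paper's format.

```python
# Hypothetical consumer loop over a streamed, compressed mesh (sketch only).
def process_streaming_mesh(decoder, on_triangle):
    active_vertices = {}                  # only the decompression boundary stays in core
    while True:
        record = decoder.next_record()    # hypothetical decoder API
        if record is None:
            break
        kind, payload = record
        if kind == "vertex":
            vid, position = payload
            active_vertices[vid] = position
        elif kind == "triangle":
            on_triangle([active_vertices[v] for v in payload])
        elif kind == "finalize":          # this vertex will not be referenced again
            active_vertices.pop(payload, None)
```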