Results 11–20 of 130
Adaptive TetraPuzzles: Efficient Out-of-Core Construction and Visualization of Gigantic Multiresolution Polygonal Models
ACM Transactions on Graphics, 2004
Abstract

Cited by 59 (23 self)
We describe an efficient technique for out-of-core construction and accurate view-dependent visualization of very large surface models. The method uses a regular conformal hierarchy of tetrahedra to spatially partition the model. Each tetrahedral cell contains a precomputed simplified version of the original model, represented using cache-coherent indexed strips for fast rendering. The representation is constructed during a fine-to-coarse simplification of the surface contained in diamonds (sets of tetrahedral cells sharing their longest edge). The construction preprocess operates out-of-core and parallelizes nicely. Appropriate boundary constraints are introduced in the simplification to ensure that all conforming selective subdivisions of the tetrahedron hierarchy lead to correctly matching surface patches. For each frame at runtime, the hierarchy is traversed coarse-to-fine to select diamonds of the appropriate resolution given the view parameters. The resulting system can interactively render high-quality views of out-of-core models of hundreds of millions of triangles at over 40 Hz (or 70M triangles/s) on current commodity graphics platforms.
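The coarse-to-fine, view-dependent selection the abstract describes can be pictured schematically. The sketch below is my own illustration, not the paper's code (the `Node` class, `select_lod`, and the crude error model are all hypothetical): a hierarchy is refined only where the projected geometric error still exceeds a screen-space tolerance.

```python
class Node:
    def __init__(self, error, distance, children=()):
        self.error = error        # object-space geometric error of this LOD node
        self.distance = distance  # distance from the viewpoint
        self.children = children  # finer-resolution children (empty tuple = leaf)

def select_lod(node, tolerance, selected):
    """Refine until the projected error drops below the screen-space tolerance."""
    projected = node.error / max(node.distance, 1e-9)  # crude perspective scaling
    if projected <= tolerance or not node.children:
        selected.append(node)     # render this node at its current resolution
    else:
        for child in node.children:
            select_lod(child, tolerance, selected)

# A nearby, high-error subtree refines to its leaves; a distant one does not.
leaf_a = Node(error=0.1, distance=10.0)
leaf_b = Node(error=0.1, distance=10.0)
near = Node(error=2.0, distance=5.0, children=(leaf_a, leaf_b))
far = Node(error=2.0, distance=200.0)
root = Node(error=8.0, distance=5.0, children=(near, far))

chosen = []
select_lod(root, 0.05, chosen)   # selects leaf_a, leaf_b, and far
```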
On External Memory Graph Traversal
In Proc. ACM-SIAM Symp. on Discrete Algorithms, 2000
Abstract

Cited by 58 (1 self)
We describe a new external memory data structure, the buffered repository tree, and use it to provide the first non-trivial external memory algorithm for directed breadth-first search (BFS) and an improved external algorithm for directed depth-first search. We also demonstrate the equivalence of various formulations of external undirected BFS, and we use these to give the first I/O-optimal BFS algorithm for undirected trees.
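For context, the standard pattern behind external-memory BFS on undirected graphs (as in the formulations the abstract mentions) builds level t from the neighborhood of level t−1, subtracting the two previous levels, so duplicate elimination needs only sorting and scanning rather than random vertex lookups. A minimal in-memory sketch of that level-by-level pattern (my illustration, not the paper's algorithm):

```python
def level_by_level_bfs(adj, source):
    """BFS where level t = N(level t-1) minus levels t-1 and t-2."""
    levels = [{source}]
    prev, prev2 = {source}, set()
    while True:
        neighborhood = set()          # "scan" adjacency lists of level t-1
        for v in prev:
            neighborhood.update(adj.get(v, ()))
        nxt = neighborhood - prev - prev2  # in EM, done by sorting/merging
        if not nxt:
            return levels
        levels.append(nxt)
        prev2, prev = prev, nxt

# Path graph 0 - 1 - 2 - 3:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
levels = level_by_level_bfs(adj, 0)   # one vertex per level
```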
Compressing the graph structure of the web
In IEEE Data Compression Conference (DCC), 2001
Abstract

Cited by 47 (2 self)
A large amount of research has recently focused on the graph structure (or link structure) of the World Wide Web. This structure has proven to be extremely useful for improving the performance of search engines and other tools for navigating the web. However, since the graphs in these scenarios involve hundreds of millions of nodes and even more edges, highly space-efficient data structures are needed to fit the data in memory. A first step in this direction was taken by the DEC Connectivity Server, which stores the graph in compressed form. In this paper, we describe techniques for compressing the graph structure of the web, and give experimental results of a prototype implementation. We attempt to exploit a variety of different sources of compressibility of these graphs and of the associated set of URLs in order to obtain good compression performance on a large web graph.
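One concrete source of compressibility such schemes exploit (a generic illustration on my part, not necessarily this paper's encoding) is that adjacency lists sorted by destination id have small gaps, which a variable-length byte code stores compactly:

```python
def encode_varint(n):
    """Encode a non-negative integer in 7-bit groups, little-endian."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def compress_adjacency(neighbors):
    """Gap-encode a sorted adjacency list, then varint-encode each gap."""
    gaps = [neighbors[0]] + [b - a for a, b in zip(neighbors, neighbors[1:])]
    return b"".join(encode_varint(g) for g in gaps)

# Four out-links of one page, sorted by destination id:
blob = compress_adjacency([1000000, 1000003, 1000010, 1000200])
# 7 bytes total instead of 16 with fixed 4-byte ids.
```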
Out-of-core algorithms for scientific visualization and computer graphics
In Visualization '02 Course Notes, 2002
Abstract

Cited by 46 (11 self)
Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, which penalizes algorithms that do not optimize for coherence of access. For these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This paper surveys fundamental issues, current problems, and unresolved questions, and aims to provide graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Keywords: out-of-core algorithms, scientific visualization, computer graphics, interactive rendering, volume rendering, surface simplification.
Efficient External Memory Algorithms by Simulating Coarse-Grained Parallel Algorithms
2003
Abstract

Cited by 41 (10 self)
External memory (EM) algorithms are designed for large-scale computational problems in which the size of the internal memory of the computer is only a small fraction of the problem size. Typical EM algorithms are specially crafted for the EM situation. In the past, several attempts have been made to relate the large body of work on parallel algorithms to EM, but with limited success. The combination of EM computing, on multiple disks, with multiprocessor parallelism has been posed as a challenge by the ACM Working Group on Storage I/O for Large-Scale Computing.
Efficient External-Memory Data Structures and Applications
1996
Abstract

Cited by 38 (12 self)
In this thesis we study the Input/Output (I/O) complexity of large-scale problems arising, e.g., in the areas of database systems, geographic information systems, VLSI design systems, and computer graphics, and design I/O-efficient algorithms for them. A general theme in our work is to design I/O-efficient algorithms through the design of I/O-efficient data structures. One of our philosophies is to try to isolate all the I/O-specific parts of an algorithm in the data structures, that is, to try to design I/O algorithms from internal memory algorithms by exchanging the data structures used in internal memory with their external memory counterparts. The results in the thesis include a technique for transforming an internal memory tree data structure into an external data structure which can be used in a batched dynamic setting, that is, a setting where we do not, for example, require that the result of a search operation is returned immediately. Using this technique we develop batched dynamic external versions of the (one-dimensional) range tree and the segment tree, and we develop an external priority queue. Following our general philosophy we show how these structures can be used in standard internal memory sorting algorithms.
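The buffering idea behind external priority queues can be sketched in a toy, in-memory form (my own illustration, with Python lists standing in for on-disk runs): inserts accumulate in a small buffer and are flushed as sorted runs in batches, so each element pays an amortized fraction of a block transfer instead of one random I/O.

```python
class BufferedPQ:
    def __init__(self, buffer_size=4):
        self.buffer_size = buffer_size
        self.buffer = []   # unsorted in-memory inserts
        self.runs = []     # sorted runs, standing in for on-disk blocks

    def insert(self, x):
        self.buffer.append(x)
        if len(self.buffer) >= self.buffer_size:
            self.runs.append(sorted(self.buffer))  # one sequential "write"
            self.buffer = []

    def delete_min(self):
        if self.buffer:                            # flush pending inserts first
            self.runs.append(sorted(self.buffer))
            self.buffer = []
        i = min(range(len(self.runs)), key=lambda j: self.runs[j][0])
        x = self.runs[i].pop(0)                    # pop the smallest run head
        if not self.runs[i]:
            del self.runs[i]
        return x

pq = BufferedPQ()
for v in (5, 1, 9, 3, 7):
    pq.insert(v)
mins = [pq.delete_min() for _ in range(3)]   # [1, 3, 5]
```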
STXXL: Standard template library for XXL data sets
In Proc. of ESA 2005, Volume 3669 of LNCS, 2005
Abstract

Cited by 38 (5 self)
STXXL is an implementation of the C++ standard template library STL for processing huge data sets that can fit only on hard disks. It supports parallel disks and the overlapping of disk I/O and computation, and it is the first I/O-efficient algorithm library that supports the pipelining technique, which can save more than half of the I/Os. STXXL has been applied in both academic and industrial environments for a range of problems including text processing, graph algorithms, computational geometry, Gaussian elimination, visualization and analysis of microscopic images, differential cryptographic analysis, etc. The performance of STXXL and its applications is evaluated on synthetic and real-world inputs. We present the design of the library, show how its performance features are supported, and demonstrate how the library integrates with STL. Keywords: very large data sets; software library; C++ standard template library; algorithm engineering.
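The pipelining the abstract refers to can be pictured with generators (a conceptual Python sketch, not STXXL's C++ interface): streaming stages are chained so the intermediate result between them is never materialized, which is where the saved I/Os come from.

```python
def transform(records):
    for r in records:        # stage 1: sequential "scan" of the input
        yield r * 2

def keep_multiples_of_100(stream):
    for r in stream:         # stage 2: consumes stage 1 item by item
        if r % 100 == 0:
            yield r

# No intermediate file between the stages; only the final result is stored.
pipeline = keep_multiples_of_100(transform(range(1, 1001)))
result = list(pipeline)      # 20 values: 100, 200, ..., 2000
```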
The Link Database: Fast Access to Graphs of the Web
Abstract

Cited by 35 (2 self)
... graph where URLs are nodes and hyperlinks are directed edges. The Link Database provides fast access to the hyperlinks. To support a wide range of graph algorithms, we find it important to fit the Link Database into memory. In the first version of the Link Database, we achieved this fit by using machines with lots of memory (8 GB), and storing each hyperlink in 32 bits. However, this approach was limited to roughly 100 million Web pages. This paper presents techniques to compress the links to accommodate larger graphs. Our techniques combine well-known compression methods with methods that depend on the properties of the web graph. The first compression technique takes advantage of the fact that most hyperlinks on most Web pages point to other pages on the same host as the page itself. The second technique takes advantage of the fact that many pages on the same host share hyperlinks, that is, they tend to point to a common set of pages. Together, these techniques reduce space requirements to under 6 bits per link. While (de)compression adds latency to the hyperlink access time, we can still compute the strongly connected components of a 6-billion-edge graph in under 20 minutes and run applications such as Kleinberg's HITS in real time. This paper describes our techniques for compressing the Link Database, and provides performance numbers for compression ratios and decompression speed.
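The second observation (shared hyperlinks among same-host pages) suggests reference-based encoding. The sketch below is a generic illustration with hypothetical names, not the Link Database's actual format: a page's list is stored as a bit mask over a reference page's links plus the few extra destinations.

```python
def encode_relative(links, ref_links):
    """Represent `links` as (copy mask over ref_links, extra link ids)."""
    link_set = set(links)
    copy_mask = [dst in link_set for dst in ref_links]  # ~1 bit per ref link
    extras = [dst for dst in links if dst not in set(ref_links)]
    return copy_mask, extras

def decode_relative(copy_mask, extras, ref_links):
    copied = [dst for dst, keep in zip(ref_links, copy_mask) if keep]
    return sorted(copied + extras)

ref = [10, 20, 30, 40, 50]   # links of a reference page on the same host
page = [10, 30, 50, 77]      # this page shares three of them
mask, extras = encode_relative(page, ref)
# The mask costs 5 bits; only one full link id (77) is stored explicitly.
```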
I/O-Efficient Scientific Computation Using TPIE
In Proceedings of the Goddard Conference on Mass Storage Systems and Technologies, NASA Conference Publication 3340, Volume II, 1995
Abstract

Cited by 34 (10 self)
In recent years, I/O-efficient algorithms for a wide variety of problems have appeared in the literature. Thus far, however, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to fill this void. It supports I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation.
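The kind of paradigm such a system exposes can be caricatured as a scan operator: the user supplies per-item logic while the framework handles blocked reads and buffering. The interface below is hypothetical (my sketch, not TPIE's actual API):

```python
def blocked_scan(items, operate, block_size=4):
    """Apply `operate` over a stream, one memory-sized block at a time."""
    out, block = [], []
    for item in items:
        block.append(item)
        if len(block) == block_size:          # a full "disk block" is resident
            out.extend(operate(x) for x in block)
            block = []
    out.extend(operate(x) for x in block)     # flush the partial last block
    return out

squares = blocked_scan(range(10), lambda x: x * x)   # [0, 1, 4, ..., 81]
```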
A Transparent Parallel I/O Environment
In Proc. 1994 DAGS Symposium on Parallel Computation, 1994
Abstract

Cited by 34 (2 self)
We describe TPIE, a Transparent Parallel I/O Environment. TPIE is a system designed to bridge the gap between current theoretical knowledge about the construction of I/O-optimal algorithms on parallel disk systems and the design and implementation of parallel I/O systems. We discuss the design of TPIE and its interface, the structure of a typical implementation, applications of the system, our prototype, and future research directions. The initial goal of our work is a prototype system to demonstrate: 1) that optimal algorithms can be made to run efficiently on parallel I/O devices; and 2) that high-level, hardware-independent interfaces to the I/O paradigms required to implement such algorithms can be provided to application programmers. The TPIE interface is designed to be portable across a variety of parallel hardware platforms; thus code that runs efficiently on one machine will run efficiently on others. Longer-term goals for TPIE include extending the prototype in ways t...