Results 1 - 10 of 195
Edgebreaker: Connectivity compression for triangle meshes
- IEEE Transactions on Visualization and Computer Graphics, 1999
Cited by 298 (24 self)
"... Edgebreaker is a simple scheme for compressing the triangle/vertex incidence graphs (sometimes called connectivity or topology) of three-dimensional triangle meshes. Edgebreaker improves upon the worst case storage required by previously reported schemes, most of which require O(n log n) bits to store the incidence graph of a mesh of n triangles. Edgebreaker requires only 2n bits or less for simple meshes and can also support fully general meshes by using additional storage per handle and hole. Edgebreaker's compression and decompression processes perform the same traversal of the mesh from one ..."
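The 2n-bit figure comes from Edgebreaker's per-triangle opcode stream over the alphabet C, L, E, R, S: a common coding spends 1 bit on C and 3 bits on each of the others, and since roughly half the symbols for a simple mesh are C, the total stays at or below 2n bits for n triangles. A minimal sketch of that bit accounting (the opcode stream and code table here are illustrative, not taken from the paper):

```python
# Sketch of the CLERS bit accounting behind Edgebreaker's 2n-bit bound.
# Assumed code lengths: 'C' -> 1 bit, the other four symbols -> 3 bits each.
CODES = {"C": "0", "L": "110", "E": "111", "R": "101", "S": "100"}

def encode_clers(symbols):
    """Concatenate the per-triangle opcodes into one bit string."""
    return "".join(CODES[s] for s in symbols)

# Illustrative opcode stream for an 8-triangle patch: half the symbols are C,
# as is typical for simple meshes, so the encoding uses exactly 2n bits.
stream = "CCCCRRRE"
bits = encode_clers(stream)
print(len(bits), len(bits) <= 2 * len(stream))  # 16 True
```

With a C fraction of one half, the average cost per triangle is (1 + 3) / 2 = 2 bits, which is where the bound comes from.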
High Performance and Scalable GPU Graph Traversal, 2011
Cited by 3 (1 self)
"... Breadth-first search (BFS) is a core primitive for graph traversal and a basis for many higher-level graph analysis algorithms. It is also representative of a class of parallel computations whose memory accesses and work distribution are both irregular and data-dependent. Recent work has demonstrated the plausibility of GPU sparse graph traversal, but has tended to focus on asymptotically inefficient algorithms that perform poorly on graphs with non-trivial diameter. We present a BFS parallelization focused on fine-grained task management that achieves an asymptotically optimal O(|V|+|E|) work ..."
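The O(|V|+|E|) work bound the abstract cites is the standard BFS bound: each vertex enters the frontier once, and each edge is inspected once. A minimal sequential sketch of that invariant (not the paper's GPU parallelization), assuming an adjacency-list graph:

```python
from collections import deque

def bfs_levels(adj, source):
    """Queue-based BFS: O(|V|+|E|) work, since each vertex is
    dequeued at most once and each edge is examined at most once."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in level:           # visit each vertex only once
                level[v] = level[u] + 1
                frontier.append(v)
    return level

# Small diamond graph: 0 -> {1, 2} -> 3
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

GPU variants typically process one whole frontier (one `level`) per parallel step, which is why graphs with non-trivial diameter, i.e. many levels, stress fine-grained task management.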
A Power Characterization and Management of GPU Graph Traversal
"... Graph analysis is a fundamental building block in numerous computing domains. Recent research has looked into harnessing GPUs to achieve necessary throughput goals. However, comparatively little attention has been paid to improving the power-constrained performance of these applications. Through fi ... Based on this characterization, we propose and evaluate a power management algorithm to maximize power cap efficiency, or the performance under a fixed power cap. Across a range of benchmark graphs, we demonstrate power cap efficiency improvements averaging 15.56% on a state-of-the-art GPU."
GPU Accelerated Voxel Traversal using the Prediction Buffer
"... Figure 1: The rendered iso-surfaces and the corresponding prediction buffers for two datasets: Bonsai and Visible Human® male. The ever increasing size of data sets for scientific and medical visualization demands new isosurface volume rendering techniques to provide interactivity for the large d ... block size, shared memory usage, and texture versus global memory. These factors were carefully considered to efficiently map the ray-casting volume rendering algorithm and the traversal technique to the GPU, providing a high performance implementation."
Compressing the graph structure of the Web
- Proceedings of the Data Compression Conference (DCC), Snowbird, UT, 2001
Cited by 57 (2 self)
"... A large amount of research has recently focused on the graph structure (or link structure) of the World Wide Web. This structure has proven to be extremely useful for improving the performance of search engines and other tools for navigating the web. However, since the graphs in these scen ..."
Rapid Multipole Graph Drawing on the GPU
"... As graphics processors become powerful, ubiquitous and easier to program, they have also become more amenable to general purpose high-performance computing, including the computationally expensive task of drawing large graphs. This paper describes a new parallel analysis of the multipole ..."
Graph Compression
Cited by 1 (0 self)
"... Graphs form the foundation of many real-world datasets. At the same time, the size of graphs presents a big obstacle to understanding the essential information they contain. In this report, I mainly review the framework in article [1] for compressing large graphs. It can be used to improve vi ..."
Scalable and High Performance Betweenness Centrality on the GPU
- in Proceedings of the 26th ACM/IEEE International Conference on High Performance Computing, Networking, Storage, and Analysis (SC), 2014
Cited by 7 (4 self)
"... Graphs that model social networks, numerical simulations, and the structure of the Internet are enormous and cannot be manually inspected. A popular metric used to analyze these networks is betweenness centrality, which has applications in community detection, power grid contingency analysis, and the study of the human brain. However, these analyses come with a high computational cost that prevents the examination of large graphs of interest. Prior GPU implementations suffer from large local data structures and inefficient graph traversals that limit scalability and performance ..."
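Betweenness centrality on unweighted graphs is typically computed with Brandes' algorithm: one BFS plus one dependency back-propagation pass per source vertex, and GPU implementations like the one above parallelize those passes. A minimal sequential sketch (not the paper's GPU code), for an unweighted graph given as adjacency lists:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: for each source, run a BFS that counts
    shortest paths (sigma), then back-propagate dependencies (delta)
    in reverse BFS order to accumulate unnormalized centrality."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS phase: shortest-path counts and predecessor lists.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            u = q.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    preds[w].append(u)
        # Accumulation phase: dependencies flow from far vertices back to s.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path graph 0-1-2: only vertex 1 lies on a shortest path between others.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(betweenness(adj))  # {0: 0.0, 1: 2.0, 2: 0.0} (each pair counted in both directions)
```

The per-source passes are independent, which is what makes the algorithm attractive for GPUs; the scalability problems the abstract mentions come from keeping `sigma`, `dist`, and predecessor state per concurrent source.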
Automatically enhancing locality for tree traversals with traversal splicing
- In Proceedings of the 2012 ACM international, 2012
Cited by 2 (2 self)
"... Generally applicable techniques for improving temporal locality in irregular programs, which operate over pointer-based data structures such as trees and graphs, are scarce. Focusing on a subset of irregular programs, namely, tree traversal algorithms like Barnes-Hut and nearest neighbor, previous work has proposed point blocking, a technique analogous to loop tiling in regular programs, to improve locality. However, point blocking is highly dependent on point sorting, a technique to reorder points so that consecutive points will have similar traversals. Performing this a priori sort requires ..."
A Scalable System for Consistently Caching Dynamic Web Data, 1999
Cited by 154 (16 self)
"... This paper presents a new approach for consistently caching dynamic Web data in order to improve performance. Our algorithm, which we call Data Update Propagation (DUP), maintains data dependence information between cached objects and the underlying data which affect their values in a graph. When the ..."