Results 1–10 of 13
SILT: A Memory-Efficient, High-Performance Key-Value Store
 In Proc. 23rd ACM SOSP, Cascais, Portugal, 2011
Abstract

Cited by 20 (7 self)
SILT (Small Index Large Table) is a memory-efficient, high-performance key-value store system based on flash storage that scales to serve billions of key-value items on a single node. It requires only 0.7 bytes of DRAM per entry and retrieves key/value pairs using on average 1.01 flash reads each. SILT combines new algorithmic and systems techniques to balance the use of memory, storage, and computation. Our contributions include: (1) the design of three basic key-value stores, each with a different emphasis on memory efficiency and write-friendliness; (2) synthesis of the basic key-value stores to build a SILT key-value store system; and (3) an analytical model for carefully tuning system parameters to meet the needs of different workloads. SILT requires one to two orders of magnitude less memory to provide comparable throughput to current high-performance key-value systems on a commodity desktop system with flash storage.
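The abstract's headline figures compose into simple back-of-the-envelope arithmetic. A minimal sketch, using the per-entry and per-lookup constants from the abstract; the billion-entry workload and both function names are illustrative assumptions, not from the paper:

```python
# Sanity-check SILT's headline numbers: 0.7 bytes of DRAM per entry and
# an average of 1.01 flash reads per lookup (both from the abstract).
# The one-billion-entry workload below is a hypothetical example.
def dram_for_index(n_entries, bytes_per_entry=0.7):
    """DRAM needed for the in-memory index, in gigabytes."""
    return n_entries * bytes_per_entry / 1e9

def expected_flash_reads(n_lookups, reads_per_lookup=1.01):
    """Expected total flash reads to serve n_lookups GET requests."""
    return n_lookups * reads_per_lookup

print(dram_for_index(1_000_000_000))   # about 0.7 GB for one billion entries
print(expected_flash_reads(100))       # about 101 flash reads per 100 lookups
```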
Compact representations of simplicial meshes in two and three dimensions
 International Journal of Computational Geometry and Applications
, 2003
Abstract

Cited by 20 (6 self)
We describe data structures for representing simplicial meshes compactly while supporting online queries and updates efficiently. Our data structure requires about a factor of five less memory than the most efficient standard data structures for triangular or tetrahedral meshes, while efficiently supporting traversal among simplices, storing data on simplices, and insertion and deletion of simplices. Our implementation of the data structures uses about 5 bytes/triangle in two dimensions (2D) and 7.5 bytes/tetrahedron in three dimensions (3D). We use the data structures to implement 2D and 3D incremental algorithms for generating a Delaunay mesh. The 3D algorithm can generate 100 million tetrahedra with 1 Gbyte of memory, including the space for the coordinates and all data used by the algorithm. The algorithm runs as fast as Shewchuk's Pyramid code, the most efficient we know of, while using a factor of 3.5 less memory overall.
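The "factor of five" claim can be made concrete with a small calculation. This is a sketch under an assumed baseline layout (a conventional triangle record holding 3 vertex indices plus 3 neighbor references at 4 bytes each); only the 5 bytes/triangle figure comes from the abstract:

```python
# Rough check of the reported memory factor for 2D meshes.
# Assumption: a standard triangle record stores 3 vertex indices and
# 3 neighbor references, 4 bytes each. The 5 bytes/triangle figure is
# the one reported in the abstract.
standard_bytes_per_triangle = (3 + 3) * 4   # 24 bytes, assumed layout
compact_bytes_per_triangle = 5              # reported by the paper
ratio = standard_bytes_per_triangle / compact_bytes_per_triangle
print(ratio)  # 4.8, i.e. roughly a factor of five
```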
Succinct Trees in Practice
Abstract

Cited by 18 (10 self)
We implement and compare the major current techniques for representing general trees in succinct form. This is important because a general tree of n nodes is usually represented in pointer form, requiring O(n log n) bits, whereas the succinct representations we study require just 2n + o(n) bits and carry out many sophisticated operations in constant time. Yet, there is no exhaustive study in the literature comparing the practical magnitudes of the o(n)-space and O(1)-time terms. The techniques can be classified into three broad trends: those based on BP (balanced parentheses in preorder), those based on DFUDS (depth-first unary degree sequence), and those based on LOUDS (level-ordered unary degree sequence). BP and DFUDS require a balanced parentheses representation that supports the core operations
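To make the 2n + o(n) figure concrete, here is a minimal sketch of the LOUDS encoding mentioned in the abstract: each node in level order contributes one '1' per child followed by a terminating '0', so the bit string has about two bits per node. The tree below and the helper name are illustrative, not from the paper:

```python
from collections import deque

def louds_bits(tree, root):
    """Encode `tree` (a node -> list-of-children dict) as a LOUDS bit
    string: a virtual super-root '10', then, for each node in level
    order, one '1' per child followed by a terminating '0'."""
    bits = ["10"]                       # super-root pointing at the real root
    queue = deque([root])
    while queue:
        node = queue.popleft()
        children = tree.get(node, [])
        bits.append("1" * len(children) + "0")
        queue.extend(children)
    return "".join(bits)

# 5-node example: a has children b, c; b has child d; c has child e.
tree = {"a": ["b", "c"], "b": ["d"], "c": ["e"]}
print(louds_bits(tree, "a"))  # 10110101000 -> 2n + 1 = 11 bits for n = 5 nodes
```

The encoding is navigable with rank/select operations over the bit string; a production-quality version would add an o(n)-bit rank/select directory, which is exactly the practical overhead the paper measures.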
Compact Dictionaries for Variable-Length Keys and Data, with Applications
, 2007
Abstract

Cited by 5 (0 self)
We consider the problem of maintaining a dynamic dictionary T of keys and associated data for which both the keys and data are bit strings that can vary in length from zero up to the length w of a machine word. We present a data structure for this variable-bit-length dictionary problem that supports constant-time lookup and expected amortized constant-time insertion and deletion. It uses O(m + 3n − n log₂ n) bits, where n is the number of elements in T, and m is the total number of bits across all strings in T (keys and data). Our dictionary uses an array A[1...n] in which locations store variable-bit-length strings. We present a data structure for this variable-bit-length array problem that supports worst-case constant-time lookups and updates and uses O(m + n) bits, where m is the total number of bits across all strings stored in A. The motivation for these structures is to support applications for which it is helpful to efficiently store short bit strings of varying length. We present several applications, including representations for semi-dynamic graphs, order queries on integer sets, cardinal trees with varying cardinality, and simplicial meshes of d dimensions. These results either generalize or simplify previous results.
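A toy sketch of the variable-bit-length array idea: pack the strings back to back in one bit sequence and index into it. The class below is a hypothetical illustration only; it spends an explicit offset table (O(n log m) bits), whereas the paper's structure achieves O(m + n) bits without storing full offsets:

```python
class VarBitArray:
    """Toy variable-bit-length array: bit strings of length 0..w packed
    back to back into one Python integer, with an explicit offset table.
    Illustrative only; the paper avoids the O(n log m)-bit offset table."""
    def __init__(self, bitstrings):
        self.offsets, self.lengths, self.bits = [], [], 0
        pos = 0
        for s in bitstrings:
            self.offsets.append(pos)
            self.lengths.append(len(s))
            if s:                               # empty strings occupy no bits
                self.bits |= int(s, 2) << pos
            pos += len(s)

    def lookup(self, i):
        """Return the i-th stored bit string."""
        n = self.lengths[i]
        if n == 0:
            return ""
        chunk = (self.bits >> self.offsets[i]) & ((1 << n) - 1)
        return format(chunk, f"0{n}b")

a = VarBitArray(["101", "", "1", "0010"])
print(a.lookup(0), a.lookup(2), a.lookup(3))  # 101 1 0010
```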
Compact Data Structures with Fast Queries
, 2005
Abstract

Cited by 4 (0 self)
Many applications dealing with large data structures can benefit from keeping them in compressed form. Compression has many benefits: it can allow a representation to fit in main memory rather than swapping out to disk, and it improves cache performance since it allows more data to fit into the cache. However, a data structure is only useful if it allows the application to perform fast queries (and updates) to the data.
Space-Efficient SHARC-Routing
, 2009
Abstract

Cited by 1 (1 self)
Accelerating the computation of quickest paths in road networks has undergone rapid development in recent years. The breakthrough idea for handling road networks with tens of millions of nodes was the concept of shortcuts, i.e., additional arcs that represent long paths in the input. Very recently, this concept has been transferred to time-dependent road networks, where travel times on arcs are given by functions. Unfortunately, the concept of shortcuts yields a large increase in space consumption for time-dependent road networks, since the travel time functions assigned to the shortcuts may become quite complex. In this work, we show how the space overhead induced by time-dependent SHARC, a technique that also relies on shortcuts, can be reduced significantly. As a result, we are able to reduce the overhead stemming from SHARC by a factor of up to 11.5, at the price of a factor-of-4 loss in query performance. The methods we present allow a flexible trade-off between space consumption and query performance.
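The shortcut concept the abstract builds on can be sketched in a few lines: an extra arc u -> w stands in for the path u -> v -> w, with the combined weight, so a query can skip v without changing any shortest-path distance. The tiny graph and weights below are hypothetical, not from the paper:

```python
# Minimal illustration of a shortcut arc (hypothetical example graph).
# graph maps each node to a dict of {successor: travel time}.
graph = {"u": {"v": 4}, "v": {"w": 6}, "w": {}}

def add_shortcut(graph, u, v, w):
    """Add an arc u -> w representing the two-arc path u -> v -> w."""
    graph[u][w] = graph[u][v] + graph[v][w]

add_shortcut(graph, "u", "v", "w")
print(graph["u"]["w"])  # 10 = 4 + 6
```

In the time-dependent setting each weight becomes a travel-time function, and the shortcut must store the composition of the two functions; that composed function can be much larger than either input, which is exactly the space blow-up this paper attacks.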
Hierarchical Diagonal Blocking and Precision Reduction Applied to Combinatorial Multigrid
Abstract
Abstract—Memory bandwidth is a major limiting factor in the scalability of parallel iterative algorithms that rely on sparse matrix-vector multiplication (SpMV). This paper introduces Hierarchical Diagonal Blocking (HDB), an approach which we believe captures many of the existing optimization techniques for SpMV in a common representation. Using this representation in conjunction with precision-reduction techniques, we develop and evaluate high-performance SpMV kernels. We also study the implications of using our SpMV kernels in a complete iterative solver. Our method of choice is a Combinatorial Multigrid solver that can fully utilize our fastest reduced-precision SpMV kernel without sacrificing the quality of the solution. We provide extensive empirical evaluation of the effectiveness of the approach on a variety of benchmark matrices, demonstrating substantial speedups on all matrices considered.
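For orientation, here is a plain CSR (compressed sparse row) SpMV kernel of the kind the paper optimizes; this is a baseline sketch, not the HDB kernel itself. HDB reorders and blocks the matrix hierarchically, and precision reduction stores the `values` array in a narrower type to cut memory traffic:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Baseline CSR sparse matrix-vector product y = A @ x.
    values  : nonzero entries, row by row
    col_idx : column index of each nonzero
    row_ptr : row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]"""
    y = []
    for row in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# 2x2 example: A = [[2, 0], [1, 3]], x = [1, 1]  ->  y = [2, 4]
print(spmv_csr([2.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 1.0]))
```

The kernel streams `values` once per multiply, which is why it is bandwidth-bound and why halving the width of `values` can translate almost directly into speedup.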
Fast Semantic Attribute-Role-Based Access Control (ARBAC)
Abstract
We develop a semantic platform-independent framework enabling information originators and security administrators to specify access rights to information consistently and completely, in a social network environment, and then to rigorously enforce that specification. We use a modified ARBAC security model and an OWL ontology with additional rules in a logic programming and Java framework to express access policy, going beyond the limitations of previous attempts in this vein. We also experimented with knowledge compilation optimization techniques that allow access policy constraint checking to be implemented in real time, via a bit-vector encoding that can be used for rapid runtime reasoning. Index Terms—access control policy, attribute-based, role-based, Semantic Web, logic programming, knowledge
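The bit-vector encoding mentioned at the end admits a very small sketch: assign each attribute a bit position, represent attribute sets as integers, and check a constraint with a single AND. The attribute names and helper functions below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch of bit-vector policy checking: one bit per attribute,
# so "user holds every required attribute" is a single AND and compare.
ATTRS = {"employee": 1 << 0, "manager": 1 << 1, "hr": 1 << 2}

def encode(attrs):
    """Fold a list of attribute names into one bit mask."""
    mask = 0
    for a in attrs:
        mask |= ATTRS[a]
    return mask

def satisfies(user_attrs, required):
    """True iff the user's mask covers every required bit."""
    return user_attrs & required == required

alice = encode(["employee", "manager"])
needs = encode(["employee", "hr"])
print(satisfies(alice, needs))  # False: alice lacks "hr"
```

Precompiling policy constraints into masks like this is what makes the runtime check constant-time per constraint, which is the point of the knowledge-compilation step the abstract describes.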