Results 11–20 of 23
On-the-fly maintenance of series-parallel relationships in fork-join multithreaded programs
In Proceedings of the ACM Symposium on Parallel Algorithms and Architectures (SPAA)
, 2004
"... A key capability of datarace detectors is to determine whether one thread executes logically in parallel with another or whether the threads must operate in series. This paper provides two algorithms, one serial and one parallel, to maintain seriesparallel (SP) relationships “on the fly” for fork ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
A key capability of data-race detectors is to determine whether one thread executes logically in parallel with another or whether the threads must operate in series. This paper provides two algorithms, one serial and one parallel, to maintain series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. The serial SP-order algorithm runs in O(1) amortized time per operation. In contrast, the previously best algorithm requires a time per operation that is proportional to Tarjan's functional inverse of Ackermann's function. SP-order employs an order-maintenance data structure that allows us to implement a more efficient "English-Hebrew" labeling scheme than was used in earlier race detectors, which immediately yields an improved determinacy-race detector. In particular, any fork-join program running in T1 time on a single processor can be checked on the fly for determinacy races in ...
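The core of the English-Hebrew scheme can be illustrated with a minimal Python sketch. The labels, function names, and static label pairs below are illustrative assumptions, not the paper's actual data structure: real SP-order maintains the two orders dynamically with an order-maintenance structure, whereas here each thread simply carries a fixed (english, hebrew) pair.

```python
def precedes(a, b):
    """True iff thread a must run before thread b: a comes before b in BOTH
    the English (left-to-right) and the Hebrew (right-to-left) order.
    Labels are (english, hebrew) integer pairs -- a simplified, static
    stand-in for the order-maintenance labels SP-order keeps on the fly."""
    return a[0] < b[0] and a[1] < b[1]

def logically_parallel(a, b):
    # Neither thread precedes the other, so a conflicting access between
    # them would be a genuine determinacy race.
    return not precedes(a, b) and not precedes(b, a)

# A parent forks t1 and t2: the two orders disagree on the siblings, so
# they are detected as parallel; two serial steps agree in both orders.
t1, t2 = (1, 2), (2, 1)
s1, s2 = (1, 1), (2, 2)
```

The point of keeping two opposite orders is that series relationships survive both orderings while fork siblings swap, which makes the parallelism test a pair of integer comparisons.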
L-Tree: a Dynamic Labeling Structure for Ordered XML Data
, 2004
"... With the ever growing use of XML as a data representation format, we see an increasing need for robust, high performance XML database systems. While most of the recent work focuses on efficient XML query processing, XML databases also need to support efficient updates. To speed up query processing ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
With the ever growing use of XML as a data representation format, we see an increasing need for robust, high performance XML database systems. While most of the recent work focuses on efficient XML query processing, XML databases also need to support efficient updates. To speed up query processing, various labeling schemes have been proposed. However, the vast majority of these schemes have poor update performance. In this paper, we introduce a dynamic labeling structure for XML data: L-Tree and its order-preserving labeling scheme with O(log n) amortized update cost and O(log n) bits per label. L-Tree has good performance on updates without compromising the performance of query processing. We present the update algorithm for L-Tree and analyze its complexity.
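A toy Python sketch makes the update problem concrete. This midpoint scheme (the function name and use of exact rationals are my illustrative assumptions, not L-Tree's actual encoding) preserves document order under arbitrary inserts without relabeling neighbors, but its label size grows with insertion depth, which is precisely the weakness an O(log n)-bit scheme like L-Tree's is designed to avoid.

```python
from fractions import Fraction

def label_between(lo, hi):
    """Assign a label strictly between lo and hi, so a newly inserted
    element keeps document order without touching existing labels.
    Toy scheme only: repeated inserts at one spot make labels ever
    longer, unlike L-Tree's O(log n)-bit amortized labels."""
    return (lo + hi) / 2

first = label_between(Fraction(0), Fraction(1))   # 1/2
before = label_between(Fraction(0), first)        # 1/4
after = label_between(first, Fraction(1))         # 3/4
```

Comparing any two labels answers an order query in O(1); the hard part, which L-Tree addresses, is keeping the labels short under adversarial insertion patterns.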
The Temporal Precedence Problem
 Algorithmica
, 1998
"... In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. Simply stated, the problem is to efficiently support two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a; b) returns true iff element a was inserted ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. Simply stated, the problem is to efficiently support two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a, b) returns true iff element a was inserted before element b temporally. We provide a solution to the problem with worst-case time complexity O(lg lg n) per operation, where n is the number of elements inserted. We also demonstrate that the problem has a lower bound of Ω(lg lg n) on pointer machines. Thus the proposed scheme is optimal on pointer machines. Keywords: Algorithms, Dynamic Data Structures, Complexity.
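The two operations are easy to pin down with a minimal Python sketch. Note the hedge: this version uses an integer clock, which is trivially O(1) on a RAM but is exactly what a pointer machine does not have; the paper's O(lg lg n) bound concerns the pointer-machine model, where no arithmetic or constant-time comparison on counters is available. The class and attribute names are illustrative.

```python
class TemporalPrecedence:
    """RAM-model toy: insert stamps each element with a global counter,
    precedes compares stamps. On a pointer machine neither the counter
    increment nor the comparison is a primitive, hence the lg lg n bounds."""

    def __init__(self):
        self._clock = 0
        self._stamp = {}

    def insert(self, a):
        self._clock += 1
        self._stamp[a] = self._clock

    def precedes(self, a, b):
        # True iff a was inserted strictly before b.
        return self._stamp[a] < self._stamp[b]
```

The sketch pins down the interface the paper studies; the contribution of the paper is meeting (and matching with a lower bound) the cost of this interface without integer timestamps.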
An Adaptive Packed-Memory Array
"... The packedmemory array (PMA) is a data structure that maintains a dynamic set of N elements in sorted order in a Θ(N)sized array. The idea is to intersperse Θ(N) empty spaces or gaps among the elements so that only a small number of elements need to be shifted around on an insert or delete. Becaus ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
The packed-memory array (PMA) is a data structure that maintains a dynamic set of N elements in sorted order in a Θ(N)-sized array. The idea is to intersperse Θ(N) empty spaces or gaps among the elements so that only a small number of elements need to be shifted around on an insert or delete. Because the elements are stored physically in sorted order in memory or on disk, the PMA can be used to support extremely efficient range queries. Specifically, the cost to scan L consecutive elements is O(1 + L/B) memory transfers. This paper gives the first adaptive packed-memory array (APMA), which automatically adjusts to the input pattern. Like the traditional PMA, any pattern of updates costs only O(log^2 N) amortized element moves and O(1 + (log^2 N)/B) amortized memory transfers per update. However, the APMA performs even better on many common input distributions, achieving only O(log N) amortized element moves and O(1 + (log N)/B) amortized memory transfers. The paper analyzes sequential inserts, where the insertions are to the front of the APMA; hammer inserts, where the insertions "hammer" on one part of the APMA; random inserts, where the insertions are after random elements in the APMA; and bulk inserts, where for constant α ∈ [0,1], N^α elements are inserted after random elements in the APMA. The paper then gives simulation results that are consistent with the asymptotic bounds. For sequential insertions of roughly 1.4 million elements, the APMA has four times fewer element moves per insertion than the traditional PMA and running times that are more than seven times faster.
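The gap idea can be sketched in a few lines of Python. This is a deliberately simplified stand-in (class name and all details are my assumptions): it keeps gaps as None slots, absorbs each insert into the nearest gap to the right, and falls back to one global rebalance when no gap is available, whereas the real PMA and APMA rebalance windows hierarchically to get their O(log^2 N) and O(log N) amortized move bounds.

```python
class SimplePMA:
    """Toy packed-memory array: sorted elements interspersed with gaps.
    Not the paper's structure -- a global-rebalance simplification that
    only illustrates why gaps make sorted-array inserts cheap."""

    def __init__(self, capacity=8):
        self.slots = [None] * capacity

    def _rebalance(self):
        # Rebuild at density <= 1/2 with a gap after every element.
        elems = [x for x in self.slots if x is not None]
        self.slots = [None] * max(8, 2 * len(elems))
        for i, x in enumerate(elems):
            self.slots[2 * i] = x

    def insert(self, x):
        # i = one past the last stored element smaller than x.
        i = 0
        for k, v in enumerate(self.slots):
            if v is not None and v < x:
                i = k + 1
        # Shift right until the nearest gap absorbs the insert.
        j = i
        while j < len(self.slots) and self.slots[j] is not None:
            j += 1
        if j == len(self.slots):       # no gap to the right: rebalance
            self._rebalance()
            self.insert(x)
            return
        while j > i:
            self.slots[j] = self.slots[j - 1]
            j -= 1
        self.slots[i] = x

    def scan(self):
        # Physically sorted, so a range scan is one contiguous pass.
        return [x for x in self.slots if x is not None]
```

Because the elements stay physically sorted, `scan` is a single contiguous pass, which is the source of the O(1 + L/B) range-query cost.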
A Simple Dynamic Algorithm for Maintaining a Dominator Tree
, 1996
"... We present a simple algorithm which maintains the dominator tree for an arbitrary flow graph during a sequence of i edge insertions interspersed with q queries as "does x dominate y?". The complexity of the algorithm is O(q + m minfi; ng), where m and n respectively are the number of edges and nod ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
We present a simple algorithm which maintains the dominator tree for an arbitrary flow graph during a sequence of i edge insertions interspersed with q queries of the form "does x dominate y?". The complexity of the algorithm is O(q + m·min{i, n}), where m and n respectively are the number of edges and nodes in the flow graph after the i-th edge insertion. This improves the former best results of O(q·log n + m·i·log n) in [12] and O(q·n + m·i) in [5]. Furthermore, we show that the complexity of our algorithm for a single edge insertion is bounded by those nodes which will no longer be dominated by the same set of nodes.

1 Introduction. Dominator trees are used in control flow analysis [1, 6, 7, 8]. Algorithms for finding dominator trees for control flow graphs are given in [9, 10, 11], and the algorithm in [9] is linear. Recently, dynamic algorithms [3, 5, 12] have been presented for maintaining dominator trees, but the complexities of these algorithms are worse or just as good as ...
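For readers unfamiliar with dominators, the definition itself is short: x dominates y iff every path from the root to y passes through x. A minimal Python sketch of the classic iterative fixed-point computation (not the paper's incremental algorithm; the function name and dict-of-successors representation are my assumptions) shows what the dynamic algorithm must maintain under edge insertions.

```python
def dominators(graph, root):
    """Naive iterative dominator computation for a flow graph given as
    {node: [successors]}. Returns {node: set of dominators}. Sketch of
    the definition only; the paper maintains the dominator tree
    incrementally instead of recomputing this fixed point."""
    nodes = set(graph) | {v for ss in graph.values() for v in ss}
    preds = {v: set() for v in nodes}
    for u, ss in graph.items():
        for v in ss:
            preds[v].add(u)
    dom = {v: set(nodes) for v in nodes}   # start from "everything"
    dom[root] = {root}
    changed = True
    while changed:                          # shrink to the fixed point
        changed = False
        for v in nodes - {root}:
            if not preds[v]:
                continue                    # unreachable in this sketch
            new = {v} | set.intersection(*(dom[p] for p in preds[v]))
            if new != dom[v]:
                dom[v] = new
                changed = True
    return dom
```

On a diamond 1→{2,3}→4, node 4 is dominated only by 1 and itself, since either branch avoids the other; an inserted edge can only shrink dominator sets, which is the monotonicity the dynamic algorithm's per-insertion bound exploits.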
Efficient Algorithms for the Temporal Precedence Problem
 Information Processing Letters
, 1998
"... this paper we study the complexity of what we call the Temporal Precedence (T P) Problem on pointer machines. Intuitively, the problem is to manage the dynamic insertion of elements, with the ability of determining, given two elements, which one was inserted first. We are not aware of any study reg ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
In this paper we study the complexity of what we call the Temporal Precedence (TP) Problem on pointer machines. Intuitively, the problem is to manage the dynamic insertion of elements, with the ability of determining, given two elements, which one was inserted first. We are not aware of any study regarding the complexity of this problem on pointer machines.
Fully Persistent B-Trees
"... We present I/Oefficient fully persistent BTrees that support range searches at any version in O(logB n + t/B) I/Os and updates at any version in O(logB n + log2 B) amortized I/Os, using space O(m/B) disk blocks. By n we denote the number of elements in the accessed version, by m the total number o ..."
Abstract
 Add to MetaCart
We present I/O-efficient fully persistent B-Trees that support range searches at any version in O(log_B n + t/B) I/Os and updates at any version in O(log_B n + log_2 B) amortized I/Os, using space O(m/B) disk blocks. By n we denote the number of elements in the accessed version, by m the total number of updates, by t the size of the query's output, and by B the disk block size. The result improves the previous fully persistent B-Trees of Lanka and Mays by a factor of O(log_B m) for the range query complexity and O(log_B n) for the update complexity. To achieve the result, we first present a new B-Tree implementation that supports searches and updates in O(log_B n) I/Os, using O(n/B) blocks of space. Moreover, every update makes in the worst case a constant number of modifications to the data structure. We make these B-Trees fully persistent using an I/O-efficient method for full persistence that is inspired by the node-splitting method of Driscoll et al. The method we present is interesting in its own right and can be applied to any external-memory pointer-based data structure with maximum in-degree d_in bounded by a constant and out-degree bounded by O(B), where every node occupies a constant number of blocks on disk. The I/O overhead per modification to the ephemeral structure is O(d_in · log_2 B) amortized I/Os, and the space overhead is O(d_in/B) amortized blocks. Access to a field of an ephemeral block is supported in O(log_2 d_in) worst-case I/Os.
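Full persistence means every version, not just the latest, can be both queried and updated. A minimal Python sketch shows the idea via path copying on a plain binary search tree (my illustrative stand-in, not the paper's method: the paper's node-splitting approach avoids path copying precisely because copying a root-to-leaf path costs too many I/Os for a B-tree).

```python
class Node:
    """Immutable BST node; versions share untouched subtrees."""
    __slots__ = ("key", "left", "right")

    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Path-copying insert: returns the root of a NEW version, copying
    only the search path. The old version stays valid and can itself
    still be updated, which is what makes the persistence full."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root  # key already present: version unchanged

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Each version is identified by its root, and updating an old root simply branches off a new version; the engineering challenge the paper solves is achieving this behavior with O(log_B n + log_2 B) amortized I/Os per update instead of copying whole paths of B-sized nodes.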
Brief Announcement: New Bounds for the Controller Problem
, 2009
"... The (M, W)controller, originally studied by Afek, Awerbuch, Plotkin, and Saks, is a basic distributed tool that provides an abstraction for managing the consumption of a global resource in a distributed dynamic network. We establish new bounds on the message complexity of this tool based on a surpr ..."
Abstract
 Add to MetaCart
The (M, W)-controller, originally studied by Afek, Awerbuch, Plotkin, and Saks, is a basic distributed tool that provides an abstraction for managing the consumption of a global resource in a distributed dynamic network. We establish new bounds on the message complexity of this tool based on a surprising connection between the controller problem and the monotonic labeling problem.