Results 1 - 10 of 15
On optimistic methods for concurrency control
 ACM Transactions on Database Systems, 1981
Cited by 447 (0 self)
Abstract:
Most current approaches to concurrency control in database systems rely on locking of data objects as a control mechanism. In this paper, two families of nonlocking concurrency controls are presented. The methods used are “optimistic” in the sense that they rely mainly on transaction backup as a control mechanism, “hoping” that conflicts between transactions will not occur. Applications for which these methods should be more efficient than locking are discussed.
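As an illustrative sketch (class names and structure are my own, not taken from the paper), the optimistic idea can be shown with backward validation: a transaction buffers its writes, records its read set, and at commit is checked against the write sets of transactions that committed while it was running; on conflict it is backed up (restarted) rather than blocked.

```python
# Sketch of optimistic concurrency control with backward validation.
# Hypothetical structure; the paper's actual protocols use transaction
# numbers and a more refined validation phase.

class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.read_set = set()
        self.write_set = set()
        self.local = {}  # deferred writes, applied only on commit

class OptimisticManager:
    def __init__(self):
        self.db = {}
        self.committed_writes = []  # (commit_number, write_set) history
        self.commit_counter = 0

    def begin(self, tid):
        t = Transaction(tid)
        t.start_number = self.commit_counter  # snapshot of commit history
        return t

    def read(self, t, key):
        t.read_set.add(key)
        return t.local.get(key, self.db.get(key))

    def write(self, t, key, value):
        t.write_set.add(key)
        t.local[key] = value

    def commit(self, t):
        # Backward validation: conflict if any transaction that committed
        # after we started wrote something we read.
        for commit_number, wset in self.committed_writes:
            if commit_number > t.start_number and wset & t.read_set:
                return False  # back up: caller restarts the transaction
        self.db.update(t.local)  # write phase
        self.commit_counter += 1
        self.committed_writes.append((self.commit_counter, frozenset(t.write_set)))
        return True
```

For example, if T1 reads x while T2 writes x and commits first, T1's validation fails and it must be backed up; no lock was ever held.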
Efficient Locking for Concurrent Operations on B-Trees
 ACM Transactions on Database Systems, 1981
Cited by 153 (0 self)
Abstract:
The B-tree and its variants have been found to be highly useful (both theoretically and in practice) for storing large amounts of information, especially on secondary storage devices. We examine the problem of overcoming the inherent difficulty of concurrent operations on such structures, using a practical storage model. A single additional “link” pointer in each node allows a process to easily recover from tree modifications performed by other concurrent processes. Our solution compares favorably with earlier solutions in that the locking scheme is simpler (no read-locks are used) and only a (small) constant number of nodes are locked by any update process at any given time. An informal correctness proof for our system is given.
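The link-pointer recovery idea can be sketched as follows (a minimal, single-level illustration with assumed names, not the paper's full algorithm): each node carries a high key and a pointer to its right sibling, and a reader that arrives at a node after a concurrent split simply chases right-links instead of holding read-locks.

```python
# Sketch of the "link" technique: each node stores a high key and a
# pointer to its right sibling. A search that lands in a node whose
# high key is below the target recovers from a concurrent split by
# following the link. Simplified illustration only.

class Node:
    def __init__(self, keys, high_key, right=None):
        self.keys = keys          # sorted keys stored in this node
        self.high_key = high_key  # upper bound of this node's key range
        self.right = right        # link pointer to right sibling

def search(node, key):
    # Recover from concurrent splits by moving right while the key
    # falls beyond this node's range.
    while node.right is not None and key > node.high_key:
        node = node.right
    return key in node.keys
```

For instance, after a node holding 1..4 splits into [1, 2] (high key 2) and [3, 4], a search for 4 that begins at the stale left half still succeeds by following the link.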
The Performance of Concurrent Data Structure Algorithms
 Transactions on Database Systems, 1994
Cited by 14 (9 self)
Abstract:
This thesis develops a validated model of concurrent data structure algorithm performance, concentrating on concurrent B-trees. The thesis first develops two analytical tools, which are explained in the next two paragraphs, for the analysis. Yao showed that the space utilization of a B-tree built from random inserts is 69%. Assuming that nodes merge only when empty, we show that the utilization is 39% when the number of insert and delete operations is the same. However, if there are just 5% more inserts than deletes, then the utilization is at least 62%. In addition to the utilization, we calculate the probabilities of splitting and merging, important parameters for calculating concurrent B-tree algorithm performance. We compare merge-at-empty B-trees with merge-at-half B-trees. We conclude that merge-at-empty B-trees have a slightly lower space utilization but a much lower restructuring rate than merge-at-half B-trees, making merge-at-empty B-trees preferable for concurrent B-tree algorithms...
B-Trees with Relaxed Balance
 In Proceedings of the 9th International Parallel Processing Symposium, 1993
Cited by 13 (6 self)
Abstract:
B-trees with relaxed balance have been defined to facilitate fast updating on shared-memory asynchronous parallel architectures. To obtain this, rebalancing has been uncoupled from the updating such that extensive locking can be avoided in connection with updates. We analyze B-trees with relaxed balance, and prove that each update gives rise to at most ⌊log_a(N/2)⌋ + 1 rebalancing operations, where a is the degree of the B-tree, and N is the bound on its maximal size since it was last in balance. Assuming that the size of nodes is at least twice the degree, we prove that rebalancing can be performed in amortized constant time. So, in the long run, rebalancing is constant time on average, even if any particular update could give rise to logarithmic-time rebalancing. We also prove that the amount of rebalancing done at any particular level decreases exponentially going from the leaves towards the root. This is important since the higher up in the tree a lock due to a rebalancing operation...
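To make the bound concrete (the numbers here are my own illustration, not from the paper): for a B-tree of degree a = 100 whose size bound since it was last in balance is N = 1,000,000, one update triggers at most ⌊log_100(500,000)⌋ + 1 = 2 + 1 = 3 rebalancing operations.

```python
# Numeric illustration (assumed values, not from the paper) of the
# bound floor(log_a(N/2)) + 1 on rebalancing operations per update.
import math

def max_rebalancing_ops(a, N):
    """Upper bound on rebalancing operations caused by one update in a
    B-tree of degree a with size bound N since it was last in balance."""
    return math.floor(math.log(N / 2, a)) + 1

print(max_rebalancing_ops(100, 1_000_000))  # -> 3
```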
Performance of B+ Tree Concurrency Control Algorithms
 VLDB Journal, 1993
Cited by 12 (0 self)
Abstract:
A number of algorithms have been proposed to access B+-trees concurrently, but they are not well understood. In this article, we study the performance of various B+-tree concurrency control algorithms using a detailed simulation model of B+-tree operations in a centralized DBMS. Our study covers a wide range of data contention situations and resource conditions. In addition, based on the performance of the set of B+-tree concurrency control algorithms, which includes one new algorithm, we make projections regarding the performance of other algorithms in the literature. Our results indicate that algorithms with updaters that lock-couple using exclusive locks perform poorly as compared to those that permit more optimistic index descents. In particular, the B-link algorithms are seen to provide the most concurrency and the best overall performance. Finally, we demonstrate the need for a highly concurrent long-term lock holding strategy to obtain the full benefits of a highly concurrent algorithm for index operations.
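Lock-coupling, the strategy the study finds to perform poorly for updaters holding exclusive locks, can be sketched as follows (a hedged, simplified illustration with assumed names): during a descent, a process acquires the child's lock before releasing the parent's, so at most two locks are held at once but the path is serialized.

```python
# Sketch of lock-coupling ("crabbing") during an index descent: take
# the child's lock before releasing the parent's. Hypothetical
# structure; real B+-tree code would also handle splits and inserts.
import threading

class IndexNode:
    def __init__(self, keys, children=None):
        self.keys = keys                # separator keys or leaf entries
        self.children = children or []  # empty list means this is a leaf
        self.lock = threading.Lock()

def descend_lock_coupled(root, key):
    """Return the leaf responsible for `key`, never holding more than
    two locks (parent and child) at any moment during the descent."""
    node = root
    node.lock.acquire()
    while node.children:
        # Choose the child whose key range covers `key`.
        i = sum(1 for k in node.keys if key >= k)
        child = node.children[i]
        child.lock.acquire()  # couple: lock child first...
        node.lock.release()   # ...then release parent
        node = child
    node.lock.release()
    return node
```

An optimistic descent, by contrast, would take no exclusive locks on the way down and validate (or retry) at the leaf, which is why it admits more concurrency.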
Amortization Results for Chromatic Search Trees, with an Application to Priority Queues
 1997
Cited by 7 (0 self)
Abstract:
In this paper, we prove that only an amortized constant amount of rebalancing is necessary after an update in a chromatic search tree. We also prove that the amount of rebalancing done at any particular level decreases exponentially, going from the leaves toward the root. These results imply that, in principle, a linear number of processes can access the tree simultaneously. We have included one interesting application of chromatic trees. Based on these trees, a priority queue with possibilities for a greater degree of parallelism than previous proposals can be implemented.
Chromatic Priority Queues
 1994
Cited by 5 (2 self)
Abstract:
We investigate the problem of implementing a priority queue to be used in a parallel environment, where asynchronous processes have access to a shared memory. Chromatic trees are a generalization of red-black trees appropriate for applications in such an environment, and it turns out that an appropriate priority queue can be obtained via minor modifications of chromatic trees. As opposed to earlier proposals, our delete-min operation is worst-case constant time, and insert is carried out as a fast search and constant-time update, followed by an amortized constant number of rebalancing operations, which can be performed later by other processes, one at a time. If a general delete is desired, it can be implemented as a fast search and constant-time update, followed by an amortized constant number of rebalancing operations, which again can be performed later by other processes, one at a time. The amortization results here extend the results previously obtained for chromatic search trees...
Rank-Balanced Trees
Cited by 5 (3 self)
Abstract:
Since the invention of AVL trees in 1962, a wide variety of ways to balance binary search trees have been proposed. Notable are red-black trees, in which bottom-up rebalancing after an insertion or deletion takes O(1) amortized time and O(1) rotations worst-case. But the design space of balanced trees has not been fully explored. We introduce the rank-balanced tree, a relaxation of AVL trees. Rank-balanced trees can be rebalanced bottom-up after an insertion or deletion in O(1) amortized time and at most two rotations worst-case, in contrast to red-black trees, which need up to three rotations per deletion. Rebalancing can also be done top-down with fixed lookahead in O(1) amortized time. Using a novel analysis that relies on an exponential potential function, we show that both bottom-up and top-down rebalancing modify nodes exponentially infrequently in their heights.
Implementing Distributed Search Structures
 1992
Cited by 4 (3 self)
Abstract:
Distributed search structures are useful for parallel databases and in maintaining distributed storage systems. Although a considerable amount of research has been done on developing parallel search structures on shared-memory multiprocessors, little has been done on the development of search structures for distributed-memory systems. In this paper we discuss some issues in the design and implementation of distributed B-trees, such as methods for low-overhead synchronization of tree restructuring and node mobility. One goal of this work is to implement a data-balanced dictionary which allows for balanced processor and space utilization. We present an algorithm for dynamic data-load balancing which uses node mobility mechanisms. We also study the effects that balancing and not balancing data have on the structure of a distributed B-tree. Finally, we demonstrate that our load-balancing algorithm distributes the nodes of a B-tree very well. Keywords: Data Structures, Distributed...
Transaction Synchronisation In Object Bases
 1988
Cited by 4 (0 self)
Abstract:
We propose a formal model of concurrency control in object bases. An object base is like a database except that information is represented in terms of "objects" that encapsulate both data and the procedures through which the data can be manipulated. The model generalises the classical model of database concurrency control: it allows for nested transactions (as opposed to flat transactions) which may issue arbitrary operations (as opposed to just read and write operations). We establish an analogue to the classical serialisability theorem and use it to derive simple proofs of correctness of two concurrency control algorithms for object bases, namely Nested Two-Phase Locking (Moss' algorithm) and Nested Timestamp Ordering (Reed's algorithm). Concurrency control in object bases can be viewed as a combination of intra-object and inter-object synchronisation. The former ensures that each object's own methods are executed in serialisable fashion; the latter ensures the compatibility of trans...