Results 1 – 10 of 12
Shared-memory mutual exclusion: Major research trends since 1986
 Distributed Computing
, 2003
"... * Exclusion: At most one process executes its critical section at any time. ..."
Abstract

Cited by 51 (6 self)
 Add to MetaCart
(Show Context)
* Exclusion: At most one process executes its critical section at any time.
An Improved Lower Bound for the Time Complexity of Mutual Exclusion (Extended Abstract)
 In Proceedings of the 20th Annual ACM Symposium on Principles of Distributed Computing
, 2001
"... We establish a lower bound of 23 N= log log N) remote memory references for Nprocess mutual exclusion algorithms based on reads, writes, or comparison primitives such as testandset and compareand swap. Our bound improves an earlier lower bound of 32 log N= log log log N) established by Cyph ..."
Abstract

Cited by 40 (11 self)
 Add to MetaCart
(Show Context)
We establish a lower bound of 23 N= log log N) remote memory references for Nprocess mutual exclusion algorithms based on reads, writes, or comparison primitives such as testandset and compareand swap. Our bound improves an earlier lower bound of 32 log N= log log log N) established by Cypher. Our lower bound is of importance for two reasons. First, it almost matches the (log N) time complexity of the bestknown algorithms based on reads, writes, or comparison primitives. Second, our lower bound suggests that it is likely that, from an asymptotic standpoint, comparison primitives are no better than reads and writes when implementing localspin mutual exclusion algorithms. Thus, comparison primitives may not be the best choice to provide in hardware if one is interested in scalable synchronization.
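The comparison primitives named in this abstract can be made concrete with a minimal sketch: a test-and-set spin lock guarding a shared counter. The class and helper names below are illustrative, not from any of the cited papers, and the primitive's atomicity is simulated with a Python `threading.Lock`, since CPython exposes no user-level hardware test-and-set.

```python
import threading

class TestAndSetLock:
    """Spin lock over a simulated test-and-set primitive.

    An internal Lock stands in for hardware atomicity of the
    read-modify-write step (hypothetical helper for illustration).
    """
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # simulates the atomic RMW step

    def _test_and_set(self):
        with self._atomic:
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        # Every failed probe here is a remote memory reference in the
        # RMR model; this naive lock generates unboundedly many under
        # contention, which is exactly what local-spin algorithms avoid.
        while self._test_and_set():
            pass

    def release(self):
        with self._atomic:
            self._flag = False

counter = 0
lock = TestAndSetLock()

def worker():
    global counter
    for _ in range(250):
        lock.acquire()
        counter += 1          # critical section: at most one thread here
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 1000
```

The point of the lower bound above is that no algorithm in this primitive class, however clever, can beat Ω(log N / log log N) remote references per entry; the naive lock here merely shows the primitive in action, at the opposite (unbounded-RMR) end of the spectrum.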
A time complexity bound for adaptive mutual exclusion
 In Proceedings of the 15th International Symposium on Distributed Computing
, 2001
"... ..."
Using Local-Spin k-Exclusion Algorithms to Improve Wait-Free Object Implementations
, 1997
"... We present the first sharedmemory algorithms for kexclusion in which all process blocking is achieved through the use of "localspin" busy waiting. Such algorithms are designed to reduce interconnect traffic, which is important for good performance. Our kexclusion algorithms are starvat ..."
Abstract

Cited by 15 (7 self)
 Add to MetaCart
(Show Context)
We present the first sharedmemory algorithms for kexclusion in which all process blocking is achieved through the use of "localspin" busy waiting. Such algorithms are designed to reduce interconnect traffic, which is important for good performance. Our kexclusion algorithms are starvationfree, and are designed to be fast in the absence of contention, and to exhibit scalable performance as contention rises. In contrast, all previous starvationfree kexclusion algorithms require unrealistic operations or generate excessive interconnect traffic under contention. We also show that efficient, starvationfree kexclusion algorithms can be used to reduce the time and space overhead associated with existing waitfree shared object implementations, while still providing some resilience to delays and failures. The resulting "hybrid" object implementations combine the advantages of localspin spin locks, which perform well in the absence of process delays (caused, for example, by preemptio...
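The k-exclusion safety property itself (at most k processes in their critical sections at once) can be illustrated with a deliberately simple stand-in. A counting semaphore is not the authors' local-spin algorithm and says nothing about starvation-freedom or RMR cost, but it makes the property testable:

```python
import threading

K = 3   # exclusion parameter: at most K threads in critical sections
N = 8   # competing threads

slots = threading.Semaphore(K)     # stand-in for a k-exclusion algorithm
state = threading.Lock()
inside = 0
max_inside = 0

def worker():
    global inside, max_inside
    for _ in range(50):
        slots.acquire()
        with state:
            inside += 1
            max_inside = max(max_inside, inside)
        # critical section: at most K workers execute here concurrently
        with state:
            inside -= 1
        slots.release()

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_inside <= K)  # True
```

The abstract's contribution lies in achieving this property with only local spinning and starvation-freedom, which the semaphore sketch does not attempt.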
Nonatomic Mutual Exclusion with Local Spinning (Extended Abstract)
, 2002
"... We present an Nprocess localspin mutual exclusion algorithm, based on nonatomic reads and writes, in which each process performs \Theta (log N) remote memory references to enter and exit its critical section. This algorithm is derived from Yang and Anderson's atomic treebased localspin al ..."
Abstract

Cited by 14 (3 self)
 Add to MetaCart
We present an Nprocess localspin mutual exclusion algorithm, based on nonatomic reads and writes, in which each process performs \Theta (log N) remote memory references to enter and exit its critical section. This algorithm is derived from Yang and Anderson's atomic treebased localspin algorithm in a way that preserves its time complexity. No atomic read/write algorithm with better asymptotic worstcase time complexity (under the remotememoryreferences measure) is currently known. This suggests that atomic memory is not fundamentally required if one is interested in worstcase time complexity. The same cannot be said if one is interested in fastpath algorithms (in which contentionfree time complexity is required to be O(1)) or adaptive algorithms (in which time complexity is required to be proportional to the number of contending processes). We show that such algorithms fundamentally require memory accesses to be atomic. In particular, we show that for any Nprocess nonatomic algorithm, there exists a singleprocess execution in which the lone competing process executes \Omega (log N / log log N) remote operations to enter its critical section. Moreover, these operations must access \Omega (plog N / log log N) distinct variables, which implies that fast and adaptive algorithms are impossible even if caching techniques are used to avoid accessing the processorstomemory interconnection network.
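The Θ(log N) figure in tree-based algorithms of this kind comes from climbing a binary arbitration tree of two-process locks, one level per lock acquired. The sketch below only computes the leaf-to-root path to show where the log N factor arises; the node numbering is the usual implicit-heap scheme, illustrative rather than notation from the paper.

```python
import math

def arbitration_path(pid, n):
    """Leaf-to-root path of two-process lock nodes for process `pid`
    in a binary arbitration tree over n processes (n a power of two).
    """
    node = n + pid          # leaf index in an implicit binary tree
    path = []
    while node > 1:
        node //= 2          # ascend to the parent two-process lock
        path.append(node)
    return path

N = 8
for pid in range(N):
    # every process acquires exactly log2(N) two-process locks
    assert len(arbitration_path(pid, N)) == int(math.log2(N))
print(arbitration_path(5, 8))  # [6, 3, 1]
```

Each node on the path costs O(1) remote references in the atomic local-spin construction, giving Θ(log N) total; the abstract's result is that this survives weakening the reads and writes to nonatomic ones.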
Lamport on Mutual Exclusion: 27 Years of Planting Seeds
 In 20th ACM Symposium on Principles of Distributed Computing
, 2001
"... Mutual exclusion is a topic that Leslie Lamport has returned to many times throughout his career. This article, which is being written in celebration of Lamport's sixtieth birthday, is an attempt to survey some of his many contributions to research on this topic. ..."
Abstract

Cited by 9 (0 self)
 Add to MetaCart
(Show Context)
Mutual exclusion is a topic that Leslie Lamport has returned to many times throughout his career. This article, which is being written in celebration of Lamport's sixtieth birthday, is an attempt to survey some of his many contributions to research on this topic.
A Tight Time Lower Bound for Space-Optimal Implementations of Multi-Writer Snapshots
 In Proceedings of the 35th ACM Symposium on Theory of Computing
, 2003
"... A snapshot object consists of a collection of m > 1 components, each capable of storing a value, shared by n processes in an asynchronous sharedmemory distributed system. It supports two operations: a process can UPDATE any individual component or atomically SCAN the entire collection to obtain ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
(Show Context)
A snapshot object consists of a collection of m > 1 components, each capable of storing a value, shared by n processes in an asynchronous sharedmemory distributed system. It supports two operations: a process can UPDATE any individual component or atomically SCAN the entire collection to obtain the values of all the components. It is possible to implement a snapshot object using m registers so that each operation takes O(mn) time.
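The UPDATE/SCAN interface can be pinned down with a sequential sketch (illustrative class, not from the paper). This ignores the whole difficulty of the problem, namely making SCAN appear atomic under concurrent UPDATEs within the O(mn) bound, but it fixes the object's semantics:

```python
class SnapshotObject:
    """Sequential sketch of a snapshot object's interface.

    A real shared-memory implementation must make scan() appear
    atomic despite concurrent update() calls -- that is where the
    O(mn) cost in the abstract arises; this sketch is single-threaded.
    """
    def __init__(self, m):
        self.components = [None] * m   # the m shared components

    def update(self, i, value):
        self.components[i] = value     # UPDATE one component

    def scan(self):
        # SCAN: returns all components; atomic here only because
        # we are sequential
        return list(self.components)

snap = SnapshotObject(3)
snap.update(0, 'a')
snap.update(2, 'c')
print(snap.scan())  # ['a', None, 'c']
```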
Local-Spin Mutual Exclusion Using Fetch-and-φ Primitives
 In Proceedings of the 23rd IEEE International Conference on Distributed Computing Systems
, 2003
"... We present a generic fetchandφbased localspin mutual exclusion algorithm, with O(1) time complexity under the remotememoryreferences time measure. This algorithm is "generic" in the sense that it can be implemented using any fetchandφ primitive of rank 2N , where N ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
We present a generic fetchand&phi;based localspin mutual exclusion algorithm, with O(1) time complexity under the remotememoryreferences time measure. This algorithm is "generic" in the sense that it can be implemented using any fetchand&phi; primitive of rank 2N , where N is the number of processes. The rank of a fetchand&phi; primitive is a notion introduced herein; informally, it expresses the extent to which processes may "order themselves" using that primitive. This algorithm breaks new ground because it shows that O(1) time complexity is possible using a wide range of primitives. In addition, previously published fetchand&phi;based algorithms either use multiple primitives or require an underlying cachecoherence mechanism for spins to be local, while ours does not. By applying our generic algorithm within an arbitration tree, one can easily construct a &Theta;(log_r N) algorithm using any primitive of rank r, where 2 &le; r < N . For primitives that meet a certain additional condition, we present a &Theta;(log N/log log N) algorithm, which gives an asymptotic improvement in time complexity for primitives of rank o(log N). It follows from a previouslypresented lower bound proof that this algorithm is asymptotically timeoptimal for certain primitives of constant rank.
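A familiar member of this primitive family is fetch-and-add, whose high rank is exactly what a ticket lock exploits: a single fetch-and-add orders every contender. The sketch below is not the paper's generic algorithm (its spinning is not local), and the primitive is simulated with a `threading.Lock` since CPython has no user-level atomic; it only illustrates how a high-rank primitive lets processes sequence themselves.

```python
import threading

class TicketLock:
    """Ticket lock over a simulated fetch-and-add (illustrative class).

    fetch-and-add can order arbitrarily many contenders with one
    operation -- the high-rank behavior the abstract alludes to.
    """
    def __init__(self):
        self._next = 0
        self._serving = 0
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _fetch_and_add(self):
        with self._atomic:
            t = self._next
            self._next += 1
            return t

    def acquire(self):
        my_ticket = self._fetch_and_add()   # one shot fixes our turn
        while self._serving != my_ticket:   # NOT local spinning: every
            pass                            # probe hits a shared variable

    def release(self):
        self._serving += 1   # only the holder writes, so no race here

counter = 0
lock = TicketLock()

def worker():
    global counter
    for _ in range(100):
        lock.acquire()
        counter += 1          # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400
```

Turning such a primitive into an O(1)-RMR algorithm without cache coherence, so that each process spins only on a local variable, is the contribution the abstract describes.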
Using Delays to Improve RMR Time Complexity in Mutual Exclusion Algorithms (Extended Abstract)
"... We consider the time complexity of sharedmemory mutual exclusion algorithms under the remotememoryreference (RMR) time measure. In particular, algorithms based on reads, writes, and comparison primitives are considered. For such algorithms, a lower bound of \Omega(log N/ log log N) RMRs per criti ..."
Abstract
 Add to MetaCart
(Show Context)
We consider the time complexity of sharedmemory mutual exclusion algorithms under the remotememoryreference (RMR) time measure. In particular, algorithms based on reads, writes, and comparison primitives are considered. For such algorithms, a lower bound of \Omega(log N/ log log N) RMRs per criticalsection entry has been established in previous work, where N is the number of processes. Also, algorithms with O(log N) time complexity are known. Thus, for algorithms in this class, logarithmic or nearlogarithmic RMR time complexity is fundamentally required. In this paper, we consider...
Time Complexity Bounds for Shared-memory Mutual Exclusion
, 2001
"... The primary goal of my work is to close the gap between lower and upper bounds on the time complexity of the mutual exclusion problem in sharedmemory multiprocessor systems. Mutual exclusion algorithms are used to resolve conicting accesses to shared resources by asynchronous, concurrent process ..."
Abstract
 Add to MetaCart
(Show Context)
The primary goal of my work is to close the gap between lower and upper bounds on the time complexity of the mutual exclusion problem in sharedmemory multiprocessor systems. Mutual exclusion algorithms are used to resolve conicting accesses to shared resources by asynchronous, concurrent processes. The problem of designing such an algorithm is widely regarded as the preeminent \classic" problem in concurrent programming. In this proposal, the time complexity of a mutual exclusion algorithm is dened as the number of remote memory references generated by a process to enter and exit its critical section. Under this measure, constanttime algorithms are known that use primitives such as fetchandadd and fetchandstore. However, it has been shown that no such constanttime algorithm is possible that uses reads, writes, and comparison primitives. My dissertation aims to provide optimal time bounds for algorithms based on such primitives. 1