Results 1 - 9 of 9
Shared-memory mutual exclusion: Major research trends since 1986
 Distributed Computing
, 2003
Abstract

Cited by 47 (7 self)
* Exclusion: At most one process executes its critical section at any time.
An Improved Lower Bound for the Time Complexity of Mutual Exclusion (Extended Abstract)
 In Proceedings of the 20th Annual ACM Symposium on Principles of Distributed Computing
, 2001
Abstract

Cited by 41 (12 self)
We establish a lower bound of Ω(log N / log log N) remote memory references for N-process mutual exclusion algorithms based on reads, writes, or comparison primitives such as test-and-set and compare-and-swap. Our bound improves an earlier lower bound of Ω(log log N / log log log N) established by Cypher. Our lower bound is of importance for two reasons. First, it almost matches the Θ(log N) time complexity of the best-known algorithms based on reads, writes, or comparison primitives. Second, our lower bound suggests that it is likely that, from an asymptotic standpoint, comparison primitives are no better than reads and writes when implementing local-spin mutual exclusion algorithms. Thus, comparison primitives may not be the best choice to provide in hardware if one is interested in scalable synchronization.
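The comparison primitives named in this abstract can be made concrete. Below is a minimal sketch (Python; a `threading.Lock` stands in for instruction-level atomicity, an assumption of the sketch) of a test-and-test-and-set spin lock: the inner read-only spin is the kind of busy waiting whose remote-memory-reference cost this lower bound measures.

```python
import threading

class TestAndSetLock:
    """Sketch of a spin lock built on test-and-set.

    Real implementations use a hardware atomic instruction; here a
    threading.Lock emulates that atomicity for illustration only.
    """
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # emulates instruction-level atomicity

    def _test_and_set(self):
        # Atomically read the old flag value and set it to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # Test-and-test-and-set: spin on plain reads first, so the
        # expensive atomic operation runs only when the lock looks free.
        while True:
            while self._flag:
                pass
            if not self._test_and_set():
                return

    def release(self):
        self._flag = False
```

On a real machine the read-only inner loop spins in the local cache, reducing the interconnect traffic that the remote-memory-references measure counts.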
A time complexity bound for adaptive mutual exclusion
 In Proceedings of the 15th International Symposium on Distributed Computing
, 2001
Using Local-Spin k-Exclusion Algorithms to Improve Wait-Free Object Implementations
, 1997
Abstract

Cited by 15 (7 self)
We present the first shared-memory algorithms for k-exclusion in which all process blocking is achieved through the use of "local-spin" busy waiting. Such algorithms are designed to reduce interconnect traffic, which is important for good performance. Our k-exclusion algorithms are starvation-free, and are designed to be fast in the absence of contention, and to exhibit scalable performance as contention rises. In contrast, all previous starvation-free k-exclusion algorithms require unrealistic operations or generate excessive interconnect traffic under contention. We also show that efficient, starvation-free k-exclusion algorithms can be used to reduce the time and space overhead associated with existing wait-free shared object implementations, while still providing some resilience to delays and failures. The resulting "hybrid" object implementations combine the advantages of local-spin spin locks, which perform well in the absence of process delays (caused, for example, by preemption ...
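k-exclusion generalizes mutual exclusion by allowing up to k processes in their critical sections at once. The sketch below (Python; names illustrative) enforces only that property with a shared counter; it is neither local-spin nor starvation-free, which is precisely what the paper's algorithms add on top of this baseline.

```python
import threading

class SimpleKExclusion:
    """Naive k-exclusion sketch: at most k processes are 'inside' at
    any time. Unlike the paper's algorithms, this version busy-waits
    on shared state and gives no starvation-freedom guarantee; it only
    illustrates the k-exclusion property itself."""
    def __init__(self, k):
        self._k = k
        self._inside = 0
        self._atomic = threading.Lock()  # stands in for an atomic counter

    def enter(self):
        while True:
            with self._atomic:
                if self._inside < self._k:
                    self._inside += 1
                    return
            # Busy-wait; a local-spin algorithm would instead spin on a
            # variable cached locally, avoiding interconnect traffic.

    def exit(self):
        with self._atomic:
            self._inside -= 1
```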
Nonatomic Mutual Exclusion with Local Spinning (Extended Abstract)
, 2002
Abstract

Cited by 13 (3 self)
We present an N-process local-spin mutual exclusion algorithm, based on nonatomic reads and writes, in which each process performs Θ(log N) remote memory references to enter and exit its critical section. This algorithm is derived from Yang and Anderson's atomic tree-based local-spin algorithm in a way that preserves its time complexity. No atomic read/write algorithm with better asymptotic worst-case time complexity (under the remote-memory-references measure) is currently known. This suggests that atomic memory is not fundamentally required if one is interested in worst-case time complexity. The same cannot be said if one is interested in fast-path algorithms (in which contention-free time complexity is required to be O(1)) or adaptive algorithms (in which time complexity is required to be proportional to the number of contending processes). We show that such algorithms fundamentally require memory accesses to be atomic. In particular, we show that for any N-process nonatomic algorithm, there exists a single-process execution in which the lone competing process executes Ω(log N / log log N) remote operations to enter its critical section. Moreover, these operations must access Ω(√(log N / log log N)) distinct variables, which implies that fast and adaptive algorithms are impossible even if caching techniques are used to avoid accessing the processors-to-memory interconnection network.
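The tree-based structure referenced above arranges processes in a binary arbitration tree of two-process locks, so each entry costs O(log N) node acquisitions. The Python sketch below shows only that tree structure, using Peterson's two-process algorithm as a stand-in for the node locks; Yang and Anderson's actual algorithm uses a different two-process protocol with carefully placed local-spin variables, which this sketch does not reproduce.

```python
import math
import threading

class Peterson:
    """Two-process mutual exclusion from atomic reads and writes
    (Peterson's algorithm, a stand-in for the per-node locks)."""
    def __init__(self):
        self.flag = [False, False]
        self.turn = 0

    def lock(self, side):          # side is 0 or 1
        other = 1 - side
        self.flag[side] = True
        self.turn = other
        while self.flag[other] and self.turn == other:
            pass                   # busy-wait until it is our turn

    def unlock(self, side):
        self.flag[side] = False

class TournamentLock:
    """N-process lock as a binary tree of two-process locks: each
    process acquires O(log N) node locks from its leaf to the root,
    mirroring the Θ(log N) structure described in the abstract."""
    def __init__(self, n):
        self.levels = max(1, math.ceil(math.log2(n)))
        # One two-process lock per internal node, heap-indexed from 1.
        self.nodes = [Peterson() for _ in range(2 ** self.levels)]

    def _path(self, pid):
        # (node, side) pairs from pid's leaf up to the root.
        node = pid + 2 ** self.levels
        steps = []
        for _ in range(self.levels):
            side = node % 2
            node //= 2
            steps.append((node, side))
        return steps

    def lock(self, pid):
        for node, side in self._path(pid):
            self.nodes[node].lock(side)

    def unlock(self, pid):
        for node, side in reversed(self._path(pid)):
            self.nodes[node].unlock(side)
```

At each internal node only one process per subtree can be competing, so the two-process lock at that node is used correctly; the winner at the root holds the N-process lock.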
Lamport on Mutual Exclusion: 27 Years of Planting Seeds
 In 20th ACM Symposium on Principles of Distributed Computing
, 2001
Abstract

Cited by 9 (0 self)
Mutual exclusion is a topic that Leslie Lamport has returned to many times throughout his career. This article, which is being written in celebration of Lamport's sixtieth birthday, is an attempt to survey some of his many contributions to research on this topic.
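Among the contributions such a survey covers is Lamport's bakery algorithm, which achieves N-process mutual exclusion using only reads and writes. A minimal Python rendering follows (relying on the interpreter's global lock for the atomicity of individual list reads and writes, an assumption of the sketch; the algorithm itself tolerates nonatomic ticket computation, which is what the `choosing` flags are for).

```python
import threading

N = 2                      # number of processes in this sketch

# Lamport's bakery algorithm: shared arrays, reads and writes only.
choosing = [False] * N
number = [0] * N

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)          # take a ticket
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:               # wait while j picks a ticket
            pass
        # Lower ticket goes first; ties broken by process id.
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0
```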
A Tight Time Lower Bound for Space-Optimal Implementations of Multi-Writer Snapshots
 In Proceedings of the 35th ACM Symposium on Theory of Computing
, 2003
Abstract

Cited by 7 (4 self)
A snapshot object consists of a collection of m > 1 components, each capable of storing a value, shared by n processes in an asynchronous shared-memory distributed system. It supports two operations: a process can UPDATE any individual component or atomically SCAN the entire collection to obtain the values of all the components. It is possible to implement a snapshot object using m registers so that each operation takes O(mn) time.
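The UPDATE/SCAN interface can be made concrete with a deliberately naive Python sketch: a single lock makes both operations trivially atomic. The implementations the paper analyzes are instead wait-free and built from m plain registers, which is where the difficulty, and the lower bound, lies.

```python
import threading

class SnapshotObject:
    """m-component snapshot sketch. A global lock gives atomic UPDATE
    and SCAN trivially; the paper's subject is achieving this wait-free
    from m registers, which this sketch does not attempt."""
    def __init__(self, m):
        self._components = [None] * m
        self._lock = threading.Lock()

    def update(self, i, value):
        # Write one component atomically.
        with self._lock:
            self._components[i] = value

    def scan(self):
        # Return an atomic copy of all m components.
        with self._lock:
            return list(self._components)
```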
Local-spin Mutual Exclusion Using Fetch-and-φ Primitives
 In Proceedings of the 23rd IEEE International Conference on Distributed Computing Systems
, 2003
Abstract

Cited by 7 (4 self)
We present a generic fetch-and-φ-based local-spin mutual exclusion algorithm, with O(1) time complexity under the remote-memory-references time measure. This algorithm is "generic" in the sense that it can be implemented using any fetch-and-φ primitive of rank 2N, where N is the number of processes. The rank of a fetch-and-φ primitive is a notion introduced herein; informally, it expresses the extent to which processes may "order themselves" using that primitive. This algorithm breaks new ground because it shows that O(1) time complexity is possible using a wide range of primitives. In addition, previously published fetch-and-φ-based algorithms either use multiple primitives or require an underlying cache-coherence mechanism for spins to be local, while ours does not. By applying our generic algorithm within an arbitration tree, one can easily construct a Θ(log_r N) algorithm using any primitive of rank r, where 2 ≤ r < N. For primitives that meet a certain additional condition, we present a Θ(log N / log log N) algorithm, which gives an asymptotic improvement in time complexity for primitives of rank o(log N). It follows from a previously presented lower bound proof that this algorithm is asymptotically time-optimal for certain primitives of constant rank.
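Fetch-and-increment is a familiar fetch-and-φ primitive with which processes can fully "order themselves", the intuition behind the rank notion above. The classic ticket lock below illustrates that ordering in Python (atomicity of fetch-and-increment emulated with a `threading.Lock`, an assumption of the sketch). Note that it spins on a shared "now serving" variable, so it does not match the local-spin O(1) RMR property of the paper's algorithms; it only shows how an ordering primitive yields FIFO mutual exclusion.

```python
import threading

class TicketLock:
    """Ticket lock from fetch-and-increment: each acquirer takes the
    next ticket and waits until 'now serving' reaches it, so entry
    order follows ticket order. The emulated atomicity below stands in
    for a hardware fetch-and-increment instruction."""
    def __init__(self):
        self._next_ticket = 0
        self._serving = 0
        self._atomic = threading.Lock()

    def _fetch_and_increment(self):
        with self._atomic:
            t = self._next_ticket
            self._next_ticket += 1
            return t

    def acquire(self):
        my_ticket = self._fetch_and_increment()
        while self._serving != my_ticket:
            pass                     # spin until our ticket is served

    def release(self):
        # Only the lock holder writes _serving, so a plain increment
        # is safe here.
        self._serving += 1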
Appendix A
Abstract
[Flattened table residue from the indexed document: Table 47, "Extended Reach Test Cases" (ETSI and T1.601 test loops at 1536 kbps downstream / 256 kbps upstream under −140 dBm/Hz AWGN). NOTE 1: A goal of future enhancements of this Recommendation is to make the "Extended Reach Cases" mandatory. NOTE 2: Performance levels do not reflect the effect of customer premise wiring, which is expected to reduce data rate. Draft Recommendation G.992.2, Annex D, D.1 System Performance for North America: all test loops specified in this section shall be used for G.992.2, and testing shall conform to the following: no power cutback on upstream transmitter; margin = 4 dB; BER = 10⁻⁷; background noise = −140 dBm/Hz; rates, except where noted, ...]