Results 1 – 10 of 34
Shared-memory mutual exclusion: Major research trends since 1986
 Distributed Computing, 2003
Cited by 47 (7 self)
* Exclusion: At most one process executes its critical section at any time.
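The exclusion property quoted above can be demonstrated with a minimal test-and-set spin lock. This is an illustrative sketch only, not any of the surveyed algorithms; the `TASLock` name is hypothetical, and a Python `Lock` stands in for an atomic test-and-set bit:

```python
import threading

class TASLock:
    """Minimal test-and-set spin lock (illustrative sketch)."""
    def __init__(self):
        self._bit = threading.Lock()  # emulates one atomic test-and-set bit

    def acquire(self):
        # busy-wait until the test-and-set succeeds
        while not self._bit.acquire(blocking=False):
            pass

    def release(self):
        self._bit.release()

lock = TASLock()
in_cs = 0      # threads currently in their critical section
max_in_cs = 0  # maximum occupancy ever observed

def worker():
    global in_cs, max_in_cs
    for _ in range(1000):
        lock.acquire()
        in_cs += 1                        # critical section begins
        max_in_cs = max(max_in_cs, in_cs)
        in_cs -= 1                        # critical section ends
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_cs)  # 1: at most one process in its critical section at any time
```

Note that this lock spins on a single shared location, which is exactly the remote-memory-reference traffic the local-spin algorithms below are designed to avoid.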
Adaptive mutual exclusion with local spinning
 In Proceedings of the 14th International Symposium on Distributed Computing, 2000
Cited by 43 (12 self)
We present an adaptive algorithm for N-process mutual exclusion under read/write atomicity in which all busy waiting is by local spinning. In our algorithm, each process p performs O(k) remote memory references to enter and exit its critical section, where k is the maximum "point contention" experienced by p. The space complexity of our algorithm is Θ(N), which is clearly optimal. Our algorithm is the first mutual exclusion algorithm under read/write atomicity that is adaptive when time complexity is measured by counting remote memory references. All previous so-called adaptive mutual exclusion algorithms employ busy-waiting loops that can generate an unbounded number of remote memory references. Thus, they have unbounded time complexity under this measure.
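"Local spinning" means each waiter busy-waits only on a memory location of its own, so waiting generates no remote traffic. A classic way to get this is an MCS-style queue lock, sketched below. This is an assumption-laden illustration: MCS relies on fetch-and-store/compare-and-swap (emulated here with a small Python lock), whereas the cited algorithm achieves local spinning with reads and writes only.

```python
import threading, time

class MCSNode:
    def __init__(self):
        self.locked = False
        self.next = None

class MCSLock:
    """MCS-style queue lock: every waiter spins only on its OWN node's flag."""
    def __init__(self):
        self._tail = None
        self._atomic = threading.Lock()  # emulates the atomic tail operations

    def acquire(self, node):
        node.locked = True
        node.next = None
        with self._atomic:               # atomic fetch-and-store on the tail
            pred, self._tail = self._tail, node
        if pred is not None:
            pred.next = node
            while node.locked:           # local spin: our own flag only
                time.sleep(0)            # yield so the demo runs quickly

    def release(self, node):
        if node.next is None:
            with self._atomic:           # atomic compare-and-swap on the tail
                if self._tail is node:
                    self._tail = None
                    return
            while node.next is None:     # a successor is mid-enqueue; wait
                time.sleep(0)
        node.next.locked = False         # hand the lock to the successor

lock = MCSLock()
counter = 0

def worker():
    global counter
    node = MCSNode()
    for _ in range(200):
        lock.acquire(node)
        counter += 1                     # critical section
        lock.release(node)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 600 = 3 threads x 200 protected increments
```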
A Simple Local-Spin Group Mutual Exclusion Algorithm
 In Proceedings of the 18th Annual ACM Symposium on Principles of Distributed Computing, 1999
Cited by 25 (2 self)
This paper presents a new solution to the group mutual exclusion problem, recently posed by Joung. In this problem, processes repeatedly request access to various "sessions". It is required that distinct processes are not in different sessions concurrently, that multiple processes may be in the same session concurrently, and that each process that tries to enter a session is eventually able to do so. This problem is a generalization of the mutual exclusion and readers-writers problems. Our algorithm and its correctness proof are substantially simpler than Joung's. This simplicity is achieved by building upon known solutions to the more specific mutual exclusion problem. Our algorithm also has various advantages over Joung's, depending on the choice of mutual exclusion algorithm used. These advantages include admitting a process to its session in constant time in the absence of contention, spinning locally in Cache-Coherent (CC) and Non-Uniform Memory Access (NUMA) systems, an...
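The group mutual exclusion semantics described above can be captured in a few lines. The `GroupMutex` class below is a hypothetical sketch of the problem's safety property only, not Joung's algorithm or the paper's: it blocks on a condition variable rather than spinning, and it is not starvation-free.

```python
import threading

class GroupMutex:
    """Sketch of group mutual exclusion semantics: any number of processes may
    share the currently open session, but two distinct sessions never overlap."""
    def __init__(self):
        self._cond = threading.Condition()
        self._session = None  # session currently open, or None
        self._inside = 0      # processes currently inside that session

    def enter(self, session):
        with self._cond:
            while self._inside > 0 and self._session != session:
                self._cond.wait()
            self._session = session
            self._inside += 1

    def leave(self):
        with self._cond:
            self._inside -= 1
            if self._inside == 0:
                self._session = None
                self._cond.notify_all()

gm = GroupMutex()
guard = threading.Lock()
active = []   # sessions currently in progress
ok = True     # stays True iff no two distinct sessions ever overlapped

def worker(session):
    global ok
    for _ in range(200):
        gm.enter(session)
        with guard:
            active.append(session)
            if len(set(active)) > 1:
                ok = False
        with guard:
            active.remove(session)
        gm.leave()

threads = [threading.Thread(target=worker, args=(s,)) for s in "AABB"]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(ok)  # True: sessions A and B never ran concurrently
```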
A time complexity bound for adaptive mutual exclusion
 In Proceedings of the 15th International Symposium on Distributed Computing, 2001
Using Local-Spin k-Exclusion Algorithms to Improve Wait-Free Object Implementations
1997
Cited by 15 (7 self)
We present the first shared-memory algorithms for k-exclusion in which all process blocking is achieved through the use of "local-spin" busy waiting. Such algorithms are designed to reduce interconnect traffic, which is important for good performance. Our k-exclusion algorithms are starvation-free, and are designed to be fast in the absence of contention and to exhibit scalable performance as contention rises. In contrast, all previous starvation-free k-exclusion algorithms require unrealistic operations or generate excessive interconnect traffic under contention. We also show that efficient, starvation-free k-exclusion algorithms can be used to reduce the time and space overhead associated with existing wait-free shared object implementations, while still providing some resilience to delays and failures. The resulting "hybrid" object implementations combine the advantages of local-spin spin locks, which perform well in the absence of process delays (caused, for example, by preemptio...
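k-exclusion relaxes mutual exclusion to allow up to k processes in their critical sections at once. A counting semaphore, as sketched below, captures just this safety property; it is only an illustration, since the paper's contribution (local-spin busy waiting with starvation-freedom and resilience to failures) is precisely what a plain blocking semaphore does not provide.

```python
import threading

K = 3  # at most K processes may occupy critical sections simultaneously
sem = threading.BoundedSemaphore(K)  # blocking stand-in, not the paper's algorithm
guard = threading.Lock()
inside = 0
max_inside = 0  # maximum simultaneous occupancy observed

def worker():
    global inside, max_inside
    for _ in range(200):
        with sem:                    # entry protocol: claim one of K "slots"
            with guard:
                inside += 1
                max_inside = max(max_inside, inside)
            # ... critical section work ...
            with guard:
                inside -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_inside <= K)  # True: occupancy never exceeded K
```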
Nonatomic Mutual Exclusion with Local Spinning (Extended Abstract)
2002
Cited by 13 (3 self)
We present an N-process local-spin mutual exclusion algorithm, based on nonatomic reads and writes, in which each process performs Θ(log N) remote memory references to enter and exit its critical section. This algorithm is derived from Yang and Anderson's atomic tree-based local-spin algorithm in a way that preserves its time complexity. No atomic read/write algorithm with better asymptotic worst-case time complexity (under the remote-memory-references measure) is currently known. This suggests that atomic memory is not fundamentally required if one is interested in worst-case time complexity. The same cannot be said if one is interested in fast-path algorithms (in which contention-free time complexity is required to be O(1)) or adaptive algorithms (in which time complexity is required to be proportional to the number of contending processes). We show that such algorithms fundamentally require memory accesses to be atomic. In particular, we show that for any N-process nonatomic algorithm, there exists a single-process execution in which the lone competing process executes Ω(log N / log log N) remote operations to enter its critical section. Moreover, these operations must access Ω(√(log N / log log N)) distinct variables, which implies that fast and adaptive algorithms are impossible even if caching techniques are used to avoid accessing the processors-to-memory interconnection network.
A fair distributed mutual exclusion algorithm
 IEEE Transactions on Parallel and Distributed Systems, 2000
Cited by 12 (0 self)
This paper presents a fair decentralized mutual exclusion algorithm for distributed systems in which processes communicate by asynchronous message passing. The algorithm requires between N-1 and 2(N-1) messages per critical section access, where N is the number of processes in the system. The exact message complexity can be expressed as a deterministic function of concurrency in the computation. The algorithm does not introduce any other overheads over Lamport's and Ricart-Agrawala's algorithms, which require 3(N-1) and 2(N-1) messages, respectively, per critical section access and are the only other decentralized algorithms that allow mutual exclusion access in the order of the timestamps of requests. Index Terms: Algorithm, concurrency, distributed system, fairness, mutual exclusion, synchronization.
Tight RMR lower bounds for mutual exclusion and other problems
 In Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC '08), 2008
Cited by 11 (5 self)
We investigate the remote memory references (RMRs) complexity of deterministic processes that communicate by reading and writing shared memory in asynchronous cache-coherent and distributed shared-memory multiprocessors. We define a class of algorithms that we call order encoding. By applying information-theoretic arguments, we prove that every order encoding algorithm, shared by n processes, has an execution that incurs Ω(n log n) RMRs. From this we derive the same lower bound for the mutual exclusion, bounded counter, and store/collect synchronization problems. The bounds we obtain for these problems are tight. It follows from the results of [10] that our lower bounds hold also for algorithms that can use comparison primitives and load-linked/store-conditional in addition to reads and writes. Our mutual exclusion lower bound proves a long-standing conjecture of Anderson and Kim.
Constant-RMR Implementations of CAS and Other Synchronization Primitives Using Read and Write Operations (Extended Abstract)
 PODC '07, 2007
Cited by 10 (4 self)
We consider asynchronous multiprocessors where processes communicate only by reading or writing shared memory. We show how to implement consensus, all comparison primitives (such as CAS and TAS), and load-linked/store-conditional using only a constant number of remote memory references (RMRs), in both the cache-coherent and the distributed-shared-memory models of such multiprocessors. Our implementations are blocking, rather than wait-free: they ensure progress provided all processes that invoke the implemented primitive are live. Our results imply that any algorithm using read and write operations, comparison primitives, and load-linked/store-conditional can be simulated by an algorithm that uses read and write operations only, with at most a constant blowup in RMR complexity.
The Complexity of Renaming
Cited by 9 (8 self)
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size subexponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues, and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.