Results 1–10 of 26
Shared-memory mutual exclusion: Major research trends since 1986
 Distributed Computing
, 2003
Cited by 47 (7 self)
* Exclusion: At most one process executes its critical section at any time.
An Improved Lower Bound for the Time Complexity of Mutual Exclusion (Extended Abstract)
 In Proceedings of the 20th Annual ACM Symposium on Principles of Distributed Computing
, 2001
Cited by 41 (12 self)
We establish a lower bound of Ω(log N / log log N) remote memory references for N-process mutual exclusion algorithms based on reads, writes, or comparison primitives such as test-and-set and compare-and-swap. Our bound improves an earlier lower bound of Ω(log log N / log log log N) established by Cypher. Our lower bound is of importance for two reasons. First, it almost matches the O(log N) time complexity of the best-known algorithms based on reads, writes, or comparison primitives. Second, our lower bound suggests that it is likely that, from an asymptotic standpoint, comparison primitives are no better than reads and writes when implementing local-spin mutual exclusion algorithms. Thus, comparison primitives may not be the best choice to provide in hardware if one is interested in scalable synchronization.
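To make the cost model concrete, here is a minimal Python emulation of a test-and-set spinlock, one of the comparison primitives the abstract discusses. This is an illustrative sketch only, not the paper's construction: Python has no hardware test&set, so a private `threading.Lock` stands in for the atomic read-modify-write, and the `rmr_count` field (an assumption added for illustration) counts the shared-flag accesses that the RMR metric charges for.

```python
import threading

class TestAndSetLock:
    """Illustrative test-and-set spinlock (not the paper's algorithm)."""

    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # emulates atomicity of test&set
        self.rmr_count = 0               # accesses to the shared flag

    def _test_and_set(self):
        # Atomically read the old value and set the flag to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        # Every iteration of this loop touches the shared flag, so under
        # contention a spinning process generates one remote memory
        # reference per failed attempt -- the cost the lower bound counts.
        while True:
            self.rmr_count += 1
            if not self._test_and_set():
                return

    def release(self):
        self._flag = False
```

In a solo execution the loop exits after a single test&set, which is why comparison primitives give O(1) contention-free acquisition even though, as the abstract argues, they cannot asymptotically beat reads and writes under contention.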
A Time Complexity Lower Bound for Randomized Implementations of Some Shared Objects
 In Symposium on Principles of Distributed Computing
, 1998
Cited by 22 (1 self)
Many recent wait-free implementations are based on a shared memory that supports a pair of synchronization operations, known as LL and SC. In this paper, we establish an intrinsic performance limitation of these operations: even the simple wake-up problem [16], which requires some process to detect that all n processes are up, cannot be solved unless some process performs Ω(log n) shared-memory operations. Using this basic result, we derive an Ω(log n) lower bound on the worst-case shared-access time complexity of n-process implementations of several types of objects, including fetch&increment, fetch&multiply, fetch&and, queue, and stack. (The worst-case shared-access time complexity of an implementation is the number of shared-memory operations that a process performs, in the worst case, in order to complete a single operation on the implementation.) Our lower bound is strong in several ways: it holds even if (1) shared memory has an infinite number of words, each of unbounded size, (2) sh...
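The LL/SC pair the abstract refers to works as follows: SC(v) writes v only if no other write to the word has occurred since this process's LL, and reports success or failure. A toy Python emulation (class `LLSCWord` and the version-counter mechanism are assumptions for illustration, not hardware semantics or the paper's model) shows the canonical retry-loop implementation of one of the objects the lower bound covers, fetch&increment:

```python
import threading

class LLSCWord:
    """Toy load-linked/store-conditional word (illustrative only)."""

    def __init__(self, value=0):
        self._value = value
        self._version = 0      # bumped on every successful write
        self._links = {}       # thread id -> version observed at LL
        self._guard = threading.Lock()

    def ll(self):
        # Load-linked: read the value and remember the current version.
        with self._guard:
            self._links[threading.get_ident()] = self._version
            return self._value

    def sc(self, new_value):
        # Store-conditional: succeed only if nobody wrote since our LL.
        with self._guard:
            if self._links.get(threading.get_ident()) != self._version:
                return False
            self._value = new_value
            self._version += 1
            return True

def fetch_and_increment(word):
    # Classic LL/SC retry loop: re-read and re-try until the SC lands.
    while True:
        old = word.ll()
        if word.sc(old + 1):
            return old
```

Each retry costs additional shared-memory operations, and the paper's point is that for objects like this counter, some process must pay Ω(log n) such operations in the worst case no matter how cleverly the loop is arranged.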
Fast and Scalable Mutual Exclusion
 In Proceedings of the 13th International Symposium on Distributed Computing
, 1999
Cited by 17 (3 self)
We present an N-process algorithm for mutual exclusion under read/write atomicity that has O(1) time complexity in the absence of contention and Θ(log N) time complexity under contention, where "time" is measured by counting remote memory references. This is the first such algorithm to achieve these time complexity bounds. Our algorithm is obtained by combining a new "fast-path" mechanism with an arbitration-tree algorithm presented previously by Yang and Anderson.

Recent work on mutual exclusion [3] has focused on the design of "scalable" algorithms that minimize the impact of the processor-to-memory bottleneck through the use of local spinning. A mutual exclusion algorithm is scalable if its performance degrades only slightly as the number of contending processes increases. In local-spin mutual exclusion algorithms, good scalability is achieved by requiring all busy-waiting loops to be read-only loops in which only locally-accessible shared variables ar...
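The local-spinning discipline described above can be sketched in a few lines: each process busy-waits only on a flag of its own, and another process grants entry with a single write to that flag. This is an illustrative fragment (class `LocalSpinFlags` is an assumed name, and `threading.Event` stands in for the read-only spin loop `while not spin[i]: pass`), not the authors' algorithm:

```python
import threading

class LocalSpinFlags:
    """Minimal illustration of local spinning (not the paper's algorithm)."""

    def __init__(self, n):
        # One spin flag per process; process i only ever waits on spin[i],
        # which can live in memory local to process i.
        self.spin = [threading.Event() for _ in range(n)]

    def wait(self, i):
        # Stand-in for a read-only busy-wait loop on a local variable.
        # In the RMR model this wait costs O(1), because re-reading a
        # locally cached/local-memory flag generates no remote traffic.
        self.spin[i].wait()
        self.spin[i].clear()   # reset for the next round

    def grant(self, i):
        # A single remote write releases process i from its spin loop.
        self.spin[i].set()
```

The contrast with the test-and-set style of spinning is that here the waiting process's loop generates no remote memory references at all; only the one granting write is remote, which is what makes such algorithms scale under contention.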
Linear lower bounds on real-world implementations of concurrent objects
 In Proceedings of the 46th Annual Symposium on Foundations of Computer Science (FOCS)
, 2005
Cited by 15 (9 self)
This paper proves Ω(n) lower bounds on the time to perform a single instance of an operation in any implementation of a large class of data structures shared by n processes. For standard data structures such as counters, stacks, and queues, the bound is tight. The implementations considered may apply any deterministic primitives to a base object. No bounds are assumed on either the number of base objects or their size. Time is measured as the number of steps a process performs on base objects and the number of stalls it incurs as a result of contention with other processes.
Tight RMR lower bounds for mutual exclusion and other problems
 In Proceedings of the 40th annual ACM symposium on Theory of computing, STOC ’08
, 2008
Cited by 11 (5 self)
We investigate the remote memory references (RMRs) complexity of deterministic processes that communicate by reading and writing shared memory in asynchronous cache-coherent and distributed shared-memory multiprocessors. We define a class of algorithms that we call order encoding. By applying information-theoretic arguments, we prove that every order encoding algorithm, shared by n processes, has an execution that incurs Ω(n log n) RMRs. From this we derive the same lower bound for the mutual exclusion, bounded counter and store/collect synchronization problems. The bounds we obtain for these problems are tight. It follows from the results of [10] that our lower bounds hold also for algorithms that can use comparison primitives and load-linked/store-conditional in addition to reads and writes. Our mutual exclusion lower bound proves a long-standing conjecture of Anderson and Kim.
Constant-RMR Implementations of CAS and Other Synchronization Primitives Using Read and Write Operations (Extended Abstract)
 PODC'07
, 2007
Cited by 10 (4 self)
We consider asynchronous multiprocessors where processes communicate only by reading or writing shared memory. We show how to implement consensus, all comparison primitives (such as CAS and TAS), and load-linked/store-conditional using only a constant number of remote memory references (RMRs), in both the cache-coherent and the distributed-shared-memory models of such multiprocessors. Our implementations are blocking, rather than wait-free: they ensure progress provided all processes that invoke the implemented primitive are live. Our results imply that any algorithm using read and write operations, comparison primitives, and load-linked/store-conditional, can be simulated by an algorithm that uses read and write operations only, with at most a constant blowup in RMR complexity.
Lamport on Mutual Exclusion: 27 Years of Planting Seeds
 In 20th ACM Symposium on Principles of Distributed Computing
, 2001
Cited by 9 (0 self)
Mutual exclusion is a topic that Leslie Lamport has returned to many times throughout his career. This article, which is being written in celebration of Lamport's sixtieth birthday, is an attempt to survey some of his many contributions to research on this topic.
A New Fast-Path Mechanism for Mutual Exclusion
 Distributed Computing
, 1999
Cited by 8 (5 self)
In 1993, Yang and Anderson presented an N-process algorithm for mutual exclusion under read/write atomicity that has Θ(log N) time complexity, where "time" is measured by counting remote memory references. In this algorithm, instances of a two-process mutual exclusion algorithm are embedded within a binary arbitration tree. In the two-process algorithm that was used, all busy-waiting is done by "local spinning." Performance studies presented by Yang and Anderson showed that their N-process algorithm exhibits scalable performance under heavy contention. One drawback of using an arbitration tree, however, is that each process is required to perform Θ(log N) remote memory operations even when there is no contention. To remedy this problem, Yang and Anderson presented a variant of their algorithm that includes a "fast-path" mechanism that allows the arbitration tree to be bypassed in the absence of contention. This algorithm has the desirable property that contention-fre...
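The general shape of a fast-path test can be illustrated with a splitter in the style of Lamport's fast mutual exclusion: two shared variables suffice to detect whether a process is running alone. This sketch (class `FastPathSplitter` is an assumed name) shows only the contention-detection idea, not the Yang-Anderson mechanism itself; a process that fails the test would fall back to the slow path, e.g. an arbitration tree.

```python
class FastPathSplitter:
    """Splitter-style fast-path test (illustrative sketch only)."""

    def __init__(self):
        self.x = None     # last process to announce itself
        self.y = False    # set once any process has passed the gate

    def try_fast_path(self, pid):
        # A process running alone performs O(1) shared accesses here
        # and wins; any interleaving with another process makes at
        # least one of the checks fail, diverting to the slow path.
        self.x = pid
        if self.y:
            return False          # someone is already past the gate
        self.y = True
        if self.x != pid:
            return False          # another process raced with us
        return True               # solo execution: take the fast path
```

The splitter guarantees that at most one process can return True in any execution, while a process that runs with no contention always does, which is exactly the property a fast-path bypass needs: O(1) contention-free cost without compromising exclusion.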
An Ω(n log n) Lower Bound on the Cost of Mutual Exclusion
Cited by 7 (1 self)
We prove an Ω(n log n) lower bound on the number of non-busy-waiting memory accesses by any deterministic algorithm solving n-process mutual exclusion that communicates via shared registers. The cost of the algorithm is measured in the state-change cost model, a variation of the cache-coherent model. Our bound is tight in this model. We introduce a novel information-theoretic proof technique. We first establish a lower bound on the information needed by processes to solve mutual exclusion. Then we relate the amount of information processes can acquire through shared memory accesses to the cost they incur. We believe our proof technique is flexible and intuitive, and may be applied to a variety of other problems and system models.