Results 1–10 of 14
Tight RMR lower bounds for mutual exclusion and other problems
 In Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC '08), 2008
Cited by 23 (6 self)

We investigate the remote memory reference (RMR) complexity of deterministic processes that communicate by reading and writing shared memory in asynchronous cache-coherent and distributed shared-memory multiprocessors. We define a class of algorithms that we call order encoding. By applying information-theoretic arguments, we prove that every order encoding algorithm, shared by n processes, has an execution that incurs Ω(n log n) RMRs. From this we derive the same lower bound for the mutual exclusion, bounded counter and store/collect synchronization problems. The bounds we obtain for these problems are tight. It follows from the results of [10] that our lower bounds hold also for algorithms that can use comparison primitives and load-linked/store-conditional in addition to reads and writes. Our mutual exclusion lower bound proves a long-standing conjecture of Anderson and Kim.
The Complexity of Renaming
Cited by 15 (10 self)

We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size subexponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
Sublogarithmic test-and-set against a weak adversary
 In Distributed Computing: 25th International Symposium, DISC 2011
Cited by 13 (7 self)

A randomized implementation is given of a test-and-set register with O(log log n) individual step complexity and O(n) total step complexity against an oblivious adversary. The implementation is linearizable and multi-shot, and shows an exponential complexity improvement over previous solutions designed to work against a strong adversary.
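For readers unfamiliar with the object being implemented, the sequential behavior of a multi-shot test-and-set register can be sketched as follows. This is a specification sketch only, using a Python lock for atomicity; the paper's contribution is achieving this behavior from reads and writes with randomization, which this sketch does not attempt.

```python
import threading

class TestAndSet:
    """Sequential specification sketch of a multi-shot test-and-set
    register: the first test_and_set() after (re)initialization
    returns 0 (the caller "wins"); every later call returns 1, until
    reset() re-arms the object. Atomicity here comes from a lock,
    not from reads/writes as in the paper's construction."""

    def __init__(self):
        self._lock = threading.Lock()
        self._set = False

    def test_and_set(self):
        # Atomically read the old value and set the register.
        with self._lock:
            old = self._set
            self._set = True
            return 1 if old else 0

    def reset(self):
        # Re-arm the register for another "round" (multi-shot use).
        with self._lock:
            self._set = False
```

A one-shot test-and-set is the special case without `reset()`; linearizability means concurrent calls behave as if executed one at a time in some order consistent with real time.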
Linearizable implementations do not suffice for randomized distributed computation
 In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing (STOC '11), 2011
Closing the complexity gap between FCFS mutual exclusion and mutual exclusion
 Distributed Computing, 2010
Cited by 11 (1 self)

First-Come-First-Served (FCFS) mutual exclusion (ME) is the problem of ensuring that processes attempting to concurrently access a shared resource do so one by one, in a fair order. In this paper, we close the complexity gap between FCFS ME and ME in the asynchronous shared memory model where processes communicate using atomic reads and writes only, and do not fail. Our main result is the first known FCFS ME algorithm that makes O(log N) remote memory references (RMRs) per passage and uses only atomic reads and writes. Our algorithm is also adaptive to point contention. More precisely, the number of RMRs a process makes per passage in our algorithm is Θ(min(k, log N)), where k is the point contention. Our algorithm matches known RMR complexity lower bounds for the class of ME algorithms that use reads and writes only, and beats the RMR complexity of prior algorithms in this class that have the FCFS property.
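The read/write model these results assume is classically illustrated by Peterson's two-process mutual exclusion algorithm. The sketch below is that classic algorithm, not the paper's O(log N) construction; it shows what "mutual exclusion from reads and writes only" means, and each re-read in the busy-wait loop is the kind of operation the RMR metric charges for.

```python
import threading

class PetersonLock:
    """Peterson's classic two-process mutual exclusion algorithm,
    using only reads and writes of shared variables. Shown purely
    as an illustration of the read/write model; it is not the
    paper's FCFS algorithm. Under CPython's GIL, simple attribute
    reads/writes behave sequentially consistently, which Peterson's
    algorithm requires."""

    def __init__(self):
        self.flag = [False, False]  # flag[i]: process i wants in
        self.turn = 0               # tie-breaker

    def lock(self, i):
        j = 1 - i
        self.flag[i] = True
        self.turn = j
        # Busy-wait: each re-read of flag[j]/turn is a remote
        # memory reference in the CC/DSM cost models.
        while self.flag[j] and self.turn == j:
            pass

    def unlock(self, i):
        self.flag[i] = False
```

A quick two-thread exercise of the lock: both threads increment a shared counter inside the critical section, and mutual exclusion guarantees no increments are lost.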
An O(1) RMRs leader election algorithm
 In Proc. ACM PODC 2006
Cited by 9 (4 self)

The leader election problem is a fundamental coordination problem. We present leader election algorithms for multiprocessor systems where processes communicate by reading and writing shared memory asynchronously, and do not fail. In particular, we consider the cache-coherent (CC) and distributed shared memory (DSM) models of such systems. We present leader election algorithms that perform a constant number of remote memory references (RMRs) in the worst case. Our algorithms use splitter-like objects [6, 9] in a novel way, by organizing active processes into teams that share work. As there is an Ω(log n) lower bound on the RMR complexity of mutual exclusion for n processes using reads and writes only [10], our result separates the mutual exclusion and leader election problems in terms of RMR complexity in both the CC and DSM models. Our result also implies that any algorithm using reads, writes and one-time test-and-set objects can be simulated by an algorithm using reads and writes with only a constant blowup of the RMR complexity; proving this is easy in the CC model, but presents subtle challenges in the DSM model.
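The splitter objects referenced above ([6, 9] in the abstract) have a standard read/write implementation in the style of Moir and Anderson's renaming work. The sketch below shows that standard object, not the paper's team construction: every caller gets one of "stop", "right", or "down"; at most one process ever gets "stop"; and a process that runs the splitter alone always gets "stop".

```python
class Splitter:
    """Standard read/write splitter sketch (Moir/Anderson style).
    split(pid) returns "stop", "right", or "down". Guarantees
    (in the asynchronous read/write model): at most one caller
    returns "stop"; a caller running solo returns "stop"; not all
    concurrent callers can return the same direction. One-shot:
    once the door closes, latecomers are diverted."""

    def __init__(self):
        self.door = True   # shared "door" bit, initially open
        self.last = None   # shared register holding the last writer

    def split(self, pid):
        self.last = pid
        if not self.door:
            return "right"   # door already closed by someone else
        self.door = False    # close the door behind us
        if self.last == pid:
            return "stop"    # no one overwrote `last`: we won
        return "down"        # someone interfered; go down
```

In renaming and leader-election constructions, splitters are typically arranged in a grid or tree, and a process keeps moving "right"/"down" until some splitter returns "stop".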
Randomized mutual exclusion with sublogarithmic RMR complexity
Cited by 9 (1 self)

Mutual exclusion is a fundamental distributed coordination problem. Shared-memory mutual exclusion research focuses on local-spin algorithms and uses the remote memory references (RMRs) metric. Attiya, Hendler, and Woelfel (40th STOC, 2008) established an Ω(log N) lower bound on the number of RMRs incurred by processes as they enter and exit the critical section, where N is the number of processes in the system. This matches the upper bound of Yang and Anderson (Distrib. Comput. 9(1):51–60, 1995). The upper and lower bounds apply for algorithms that only use read and write operations. The lower bound of Attiya et al., however, only holds for deterministic algorithms. The question of whether randomized mutual exclusion algorithms, using reads and writes only, can achieve sublogarithmic expected RMR complexity remained open. We answer this question in the affirmative by presenting starvation-free randomized mutual exclusion algorithms for the cache-coherent (CC) and the distributed shared memory (DSM) models that have sublogarithmic expected RMR complexity against the strong adversary. More specifically, each process incurs an expected number of O(log N / log log N) RMRs per passage through the entry and exit sections, while in the worst case the number of RMRs is O(log N). P. Woelfel was supported by NSERC.
Adaptive Randomized Mutual Exclusion in Sub-Logarithmic Expected Time
Cited by 8 (0 self)

Mutual exclusion is a fundamental distributed coordination problem. Shared-memory mutual exclusion research focuses on local-spin algorithms and uses the remote memory references (RMRs) metric. A mutual exclusion algorithm is adaptive to point contention if its RMR complexity is a function of the maximum number of processes concurrently executing their entry, critical, or exit section. In the best prior deterministic adaptive mutual exclusion algorithm, presented by Kim and Anderson [22], a process performs O(min(k, log N)) RMRs as it enters and exits its critical section, where k is the point contention and N is the number of processes in the system. Kim and Anderson also proved that a deterministic algorithm with o(k) RMR complexity does not exist [21]. However, they describe a randomized mutual exclusion algorithm that has O(log k) expected RMR complexity against an oblivious adversary. All these results apply for algorithms that use only atomic read and write operations. We present a randomized adaptive mutual exclusion algorithm with O(log k / log log k) expected amortized RMR complexity, even against a strong adversary, for the cache-coherent shared memory read/write model. Using techniques similar to those used in [17], our algorithm can be adapted for the distributed shared memory read/write model. This establishes that sublogarithmic adaptive mutual exclusion, using reads and writes only, is possible.
Mutual Exclusion with O(log² log n) Amortized Work
Cited by 5 (0 self)

This paper presents a new algorithm for mutual exclusion in which each passage through the critical section costs amortized O(log² log n) RMRs with high probability. The algorithm operates in a standard asynchronous, local-spinning, shared-memory model with an oblivious adversary. It guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting.
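Approximate counting is commonly illustrated by Morris-style probabilistic counters. The sketch below shows that standard textbook technique only to make the notion concrete; it is not the paper's specific construction, and the connection the paper exploits may differ. A stored exponent x represents roughly 2^x − 1 increments, so the counter needs only O(log log n) bits.

```python
import random

def morris_increment(x):
    """One Morris-counter increment step (standard technique,
    shown for illustration only). The stored exponent x stands
    for roughly 2**x - 1 true increments; it is bumped with
    probability 2**(-x), so large counts are tracked cheaply."""
    if random.random() < 2.0 ** (-x):
        return x + 1
    return x

def morris_estimate(x):
    """Unbiased estimate of the true count from the exponent x."""
    return 2 ** x - 1
```

Note that with x = 0 the bump probability is 2^0 = 1, so the very first increment always succeeds; randomness only kicks in once the exponent grows.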
Randomized Mutual Exclusion in O(log N / log log N) RMRs [Extended Abstract]
Cited by 4 (1 self)

Mutual exclusion is a fundamental distributed coordination problem. Shared-memory mutual exclusion research focuses on local-spin algorithms and uses the remote memory references (RMRs) metric. A recent proof [9] established an Ω(log N) lower bound on the number of RMRs incurred by processes as they enter and exit the critical section, matching an upper bound by Yang and Anderson [18]. Both these bounds apply for algorithms that only use read and write operations. The lower bound of [9] only holds for deterministic algorithms, however; the question of whether randomized mutual exclusion algorithms, using reads and writes only, can achieve sublogarithmic expected RMR complexity remained open. This paper answers this question in the affirmative. We present two strong-adversary [8] randomized local-spin mutual exclusion algorithms. In both algorithms, processes incur O(log N / log log N) expected RMRs per passage in every execution. Our first algorithm has suboptimal worst-case RMR complexity of O((log N / log log N)²). Our second algorithm is a variant of the first that can be combined with a deterministic algorithm, such as [18], to obtain O(log N) worst-case RMR complexity. The combined algorithm thus achieves sublogarithmic expected RMR complexity while maintaining optimal worst-case RMR complexity. Our upper bounds apply for both the cache coherent (CC) and the distributed shared memory (DSM) models.