Results 1–7 of 7
The Communication Requirements of Mutual Exclusion
In Proceedings of the Seventh Annual Symposium on Parallel Algorithms and Architectures, 1995
Abstract

Cited by 38 (0 self)
This paper examines the amount of communication that is required for performing mutual exclusion. It is assumed that n processors communicate via accesses to a shared memory that is physically distributed among the processors. We consider the possibility of creating a scalable mutual exclusion protocol that requires only a constant amount of communication per access to a critical section. We present two main results. First, we show that there does not exist a scalable mutual exclusion protocol that uses only read and write operations. This result solves an open problem posed by Yang and Anderson. Second, we prove that the same result holds even if test-and-set, compare-and-swap, load-and-reserve and store-conditional operations are allowed in addition to read and write operations. Our results hold even if an amortized analysis of communication costs is used, an arbitrary amount of memory is available, and the processors have coherent caches. In contrast, a mutual exclusion protocol is ...
The wakeup problem
SIAM Journal on Computing, 1996
Abstract

Cited by 24 (5 self)
We study a new problem, the wakeup problem, that seems to be fundamental in distributed computing. We present efficient solutions to the problem and show how these solutions can be used to solve the consensus problem, the leader election problem, and other related problems. The main question we try to answer is, how much memory is needed to solve the wakeup problem? We assume a model that captures important properties of real systems that have been largely ignored by previous work on cooperative problems.
Time/Contention Tradeoffs for Multiprocessor Synchronization
Information and Computation, 1996
Abstract

Cited by 23 (7 self)
We establish tradeoffs between time complexity and write- and access-contention for solutions to the mutual exclusion problem. The write-contention (access-contention) of a concurrent program is the number of processes that may be simultaneously enabled to write (access by reading and/or writing) the same shared variable. Our notion of time complexity distinguishes between local and remote accesses of shared memory. We show that, for any N-process mutual exclusion algorithm, if write-contention is w, and if at most v remote variables can be accessed by a single atomic operation, then there exists an execution involving only one process in which that process executes Ω(log_vw N) remote operations for entry into its critical section. We further show that, among these operations, Ω(√(log_vw N)) distinct remote variables are accessed. For algorithms with access-contention c, we show that the latter bound can be improved to Ω(log_vc N). The last two of thes ...
Linear lower bounds on realworld implementations of concurrent objects
In Proceedings of the 46th Annual Symposium on Foundations of Computer Science (FOCS), 2005
Abstract

Cited by 15 (9 self)
This paper proves Ω(n) lower bounds on the time to perform a single instance of an operation in any implementation of a large class of data structures shared by n processes. For standard data structures such as counters, stacks, and queues, the bound is tight. The implementations considered may apply any deterministic primitives to a base object. No bounds are assumed on either the number of base objects or their size. Time is measured as the number of steps a process performs on base objects and the number of stalls it incurs as a result of contention with other processes.
Knowledge, Timed Precedence and Clocks
In Proceedings of the 13th Annual ACM Symposium on Principles of Distributed Computing, 1995
Abstract

Cited by 10 (0 self)
This paper introduces a framework for knowledge-based analysis of issues of timing and clocks in systems with real-time constraints. We define the notion of timed precedence, a generalization of Lamport's potential causality that is suitable for reasoning about timing in real-time systems. Knowledge about timed precedence is the key element in some of the work on optimal clock synchronization. We argue that the state of distributed knowledge, the natural candidate for use in such an analysis, is not appropriate for capturing various aspects of timing in real-time systems. We define an alternative notion, called inherent knowledge, which we find more appropriate. Finally, we illustrate how knowledge about the timed precedence of events can allow high-level reasoning about issues such as clock synchronization. Clocks play a limited role in truly asynchronous systems, in which we do not make any assumptions about the time it takes messages to be delivered, the rates at whic ...
Operationvalency and the cost of coordination
In Proceedings of the 22nd Annual ACM Symposium on Principles of Distributed Computing (PODC), 2003
Abstract

Cited by 6 (3 self)
This paper introduces operation-valency, a generalization of the valency proof technique originated by Fischer, Lynch, and Paterson. By focusing on critical events that influence the return values of individual operations rather than on critical events that influence a protocol's single return value, the new technique allows us to derive a collection of realistic lower bounds for lock-free implementations of concurrent objects such as linearizable queues, stacks, sets, hash tables, shared counters, approximate agreement, and more. By realistic we mean that they follow the real-world model introduced by Dwork, Herlihy, and Waarts, counting both memory references and memory stalls due to contention, and that they allow the combined use of read, write, and read-modify-write operations available on current machines. By using the operation-valency technique, we derive an Ω(√n) lower bound on non-cached shared memory accesses in the worst-case time complexity of lock-free implementations of objects in Influence(n), a wide class of concurrent objects including all of those mentioned above, in which an individual operation can be influenced by all others. We also prove the existence of a fundamental relationship between the space complexity, latency, contention, and "influence level" of any lock-free object implementation. Our results are broad in that they hold for implementations combining read/write memory and any collection of read-modify-write operations, and in that they apply even if shared memory words have unbounded size.
On the Inherent Weakness of Conditional Synchronization Primitives
In Proceedings of the 23rd Annual ACM Symposium on Principles of Distributed Computing, 2004
Abstract

Cited by 5 (2 self)
The “wait-free hierarchy” classifies multiprocessor synchronization primitives according to their power to solve consensus. The classification is based on assigning a number n to each synchronization primitive, where n is the maximal number of processes for which deterministic wait-free consensus can be solved using instances of the primitive and read/write registers. Conditional synchronization primitives, such as compare-and-swap and load-linked/store-conditional, can implement deterministic wait-free consensus for any number of processes (they have consensus number ∞), and are thus considered to be among the strongest synchronization primitives. To some extent because of that, compare-and-swap and load-linked/store-conditional have become the synchronization primitives of choice, and have been implemented in hardware in many multiprocessor architectures. This paper shows that, though they are strong in the context of consensus, conditional synchronization primitives are not efficient in terms of memory space for implementing many key objects. Our results hold for starvation-free implementations of mutual exclusion, and for wait-free implementations of a large class of concurrent objects, that we call Visible(n). Roughly, Visible(n) is a class that includes all objects that support some operation that must perform a “visible” ...