Hundreds of Impossibility Results for Distributed Computing
Distributed Computing, 2003
Cited by 44 (4 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space, and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
The Complexity of Renaming
Cited by 9 (8 self)
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size subexponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues, and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
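To make the renaming problem concrete, here is a minimal sketch (not from the paper) of one-shot renaming built from test-and-set bits, with the atomicity simulated by Python locks: each process scans an array of slots and takes the first one it wins. The names `TestAndSet` and `rename` are illustrative. In the worst case a process loses k − 1 races before winning, which is the Θ(k)-steps regime the lower bound above speaks to.

```python
import threading

class TestAndSet:
    """One-shot test-and-set bit: the first caller wins (True), the rest lose."""
    def __init__(self):
        self._lock = threading.Lock()   # simulates hardware atomicity
        self._taken = False

    def test_and_set(self):
        with self._lock:
            if self._taken:
                return False
            self._taken = True
            return True

def rename(slots):
    """Scan the slots left to right; the index of the first slot won is the
    process's name. Worst case: lose k - 1 races, then win -- Theta(k) steps."""
    for name, slot in enumerate(slots):
        if slot.test_and_set():
            return name

# usage: 4 concurrent processes pick distinct names from {0, 1, 2, 3}
slots = [TestAndSet() for _ in range(4)]
names, names_lock = [], threading.Lock()

def participant():
    name = rename(slots)
    with names_lock:
        names.append(name)

workers = [threading.Thread(target=participant) for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
# names now holds a permutation of 0..3
```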
Lower Bounds in Distributed Computing
2000
Cited by 8 (2 self)
This paper discusses results that say what cannot be computed in certain environments or when insufficient resources are available. A comprehensive survey would require an entire book. As in Nancy Lynch's excellent 1989 paper, "A Hundred Impossibility Proofs for Distributed Computing" [86], we shall restrict ourselves to some of the results we like best or think are most important. Our aim is to give you the flavour of the results and some of the techniques that have been used. We shall also mention some interesting open problems and provide an extensive list of references. The focus will be on results from the past decade.
A Polylog Time Wait-Free Construction for Closed Objects
1998
Cited by 8 (3 self)
A (wait-free) universal construction is attractive because, no matter what types of wait-free objects are needed by applications, they can be implemented simply by instantiating the universal construction with the appropriate types. However, the worst-case time complexity of every existing n-process universal construction is Ω(n): that is, in any implementation obtained by instantiating a universal construction, in the worst case a process performs Ω(n) computation in order to complete a single operation on the implemented object. In fact, a lower bound of Ω(n) has been proved for the worst-case local time complexity of any oblivious universal construction [12]. Since universal constructions with sublinear time complexity do not seem possible, it is natural to explore "semi-universal" constructions that can efficiently implement large classes of objects (as opposed to all objects). We present such a construction in this paper. Our construction implements a large class of objects, that ...
An Ω(n log n) Lower Bound on the Cost of Mutual Exclusion
Cited by 7 (1 self)
We prove an Ω(n log n) lower bound on the number of non-busy-waiting memory accesses performed by any deterministic algorithm that solves n-process mutual exclusion and communicates via shared registers. The cost of the algorithm is measured in the state change cost model, a variation of the cache-coherent model. Our bound is tight in this model. We introduce a novel information-theoretic proof technique. We first establish a lower bound on the information needed by processes to solve mutual exclusion. Then we relate the amount of information processes can acquire through shared memory accesses to the cost they incur. We believe our proof technique is flexible and intuitive, and may be applied to a variety of other problems and system models.
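For context, the classic example of deterministic mutual exclusion from shared read/write registers is Peterson's two-process algorithm; the sketch below (illustrative, not from the paper) shows the register reads and writes that such algorithms perform, with the busy-wait loop marked separately since the bound above counts only non-busy-waiting accesses. CPython's global interpreter lock makes these plain loads and stores effectively atomic; on real hardware, memory fences would be required.

```python
import threading

# Peterson's two-process mutual exclusion from shared registers only.
flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # tie-breaker register
counter = 0             # shared state touched only inside the critical section

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                         # entry protocol: two writes...
        turn = other
        while flag[other] and turn == other:   # ...then busy-wait (these reads
            pass                               # are not counted by the model)
        counter += 1                           # critical section
        flag[i] = False                        # exit protocol: one write

threads = [threading.Thread(target=process, args=(i, 500)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
# mutual exclusion held, so no increment was lost: counter == 1000
```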
Operation-valency and the cost of coordination
In Proceedings of the 22nd Annual ACM Symposium on Principles of Distributed Computing (PODC), 2003
Cited by 6 (3 self)
This paper introduces operation-valency, a generalization of the valency proof technique originated by Fischer, Lynch, and Paterson. By focusing on critical events that influence the return values of individual operations rather than on critical events that influence a protocol's single return value, the new technique allows us to derive a collection of realistic lower bounds for lock-free implementations of concurrent objects such as linearizable queues, stacks, sets, hash tables, shared counters, approximate agreement, and more. By realistic we mean that they follow the real-world model introduced by Dwork, Herlihy, and Waarts, counting both memory references and memory stalls due to contention, and that they allow the combined use of read, write, and read-modify-write operations available on current machines. By using the operation-valency technique, we derive an Ω(√n) lower bound on the number of non-cached shared memory accesses in the worst-case time complexity of lock-free implementations of objects in Influence(n), a wide class of concurrent objects including all of those mentioned above, in which an individual operation can be influenced by all others. We also prove the existence of a fundamental relationship between the space complexity, latency, contention, and "influence level" of any lock-free object implementation. Our results are broad in that they hold for implementations combining read/write memory and any collection of read-modify-write operations, and in that they apply even if shared memory words have unbounded size.
Time and space lower bounds for implementations using CAS
In DISC, 2005
Cited by 6 (1 self)
This paper presents lower bounds on the time and space complexity of implementations that use k-compare-and-swap (k-CAS) synchronization primitives. We prove that the use of k-CAS primitives can improve neither the time nor the space complexity of implementations of widely-used concurrent objects, such as counter, stack, queue, and collect. Surprisingly, the use of k-CAS may even increase the space complexity required by such implementations. We prove that the worst-case average number of steps performed by processes for any n-process implementation of a counter, stack, or queue object is Ω(log_{k+1} n), even if the implementation can use j-CAS for j ≤ k. This bound holds even if a k-CAS operation is allowed to read the k values of the objects it accesses and return these values to the calling process. This bound is tight. We also consider more realistic non-reading k-CAS primitives. An operation of a non-reading k-CAS primitive is only allowed to return a success/failure indication. For implementations of the collect object that use such primitives, we prove that the worst-case average number of steps performed by processes is Ω(log_2 n), regardless of the value of k. This implies a round complexity lower bound of Ω(log_2 n) for such implementations. As there is an O(log_2 n) round complexity implementation of collect that uses only reads and writes, these results establish that non-reading k-CAS is no stronger than read and write for collect implementation round complexity. We also prove that k-CAS does not improve the space complexity of implementing many objects (including counter, stack, queue, and single-writer snapshot). An implementation has to use at least n base objects even if k-CAS is allowed, and if all operations (other than read) swap exactly k base objects, then the space complexity must be at least k · n.
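A standard example of the kind of implementation these bounds cover is a counter built from single-word CAS (k = 1). The sketch below (illustrative, not from the paper; `CASRegister` is a hypothetical name) simulates CAS with a lock and shows the retry loop whose shared-memory steps such lower bounds count.

```python
import threading

class CASRegister:
    """A register with atomic compare-and-swap (k = 1), simulated with a
    lock; hardware CAS behaves the same way at this level of abstraction."""
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self._value = value

    def read(self):
        return self._value

    def cas(self, expected, new):
        """Non-reading CAS: returns only a success/failure indication."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def fetch_and_increment(reg):
    """The textbook CAS retry loop. Each read and each (possibly failed)
    CAS is one of the steps counted by step-complexity lower bounds."""
    while True:
        old = reg.read()
        if reg.cas(old, old + 1):
            return old

# usage: 8 processes each perform 100 increments; none are lost
reg = CASRegister()
threads = [threading.Thread(
    target=lambda: [fetch_and_increment(reg) for _ in range(100)])
    for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
# reg.read() == 800
```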
On the Inherent Weakness of Conditional Synchronization Primitives
In Proceedings of the 23rd Annual ACM Symposium on Principles of Distributed Computing, 2004
Cited by 5 (2 self)
The “wait-free hierarchy” classifies multiprocessor synchronization primitives according to their power to solve consensus. The classification is based on assigning a number n to each synchronization primitive, where n is the maximal number of processes for which deterministic wait-free consensus can be solved using instances of the primitive and read/write registers. Conditional synchronization primitives, such as compare-and-swap and load-linked/store-conditional, can implement deterministic wait-free consensus for any number of processes (they have consensus number ∞), and are thus considered to be among the strongest synchronization primitives. To some extent because of that, compare-and-swap and load-linked/store-conditional have become the synchronization primitives of choice, and have been implemented in hardware in many multiprocessor architectures. This paper shows that, though they are strong in the context of consensus, conditional synchronization primitives are not efficient in terms of memory space for implementing many key objects. Our results hold for starvation-free implementations of mutual exclusion, and for wait-free implementations of a large class of concurrent objects that we call Visible(n). Roughly, Visible(n) is a class that includes all objects that support some operation that must perform a “visible” ...
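To illustrate why conditional primitives have consensus number ∞, here is a textbook-style sketch (not from the paper) of wait-free consensus for any number of processes from a single compare-and-swap register, with the atomicity simulated by a Python lock: the first CAS out of the initial state installs the decision, all later CASes fail, and every process returns the installed value.

```python
import threading

class CASRegister:
    """A register with atomic compare-and-swap, simulated with a lock."""
    def __init__(self, value=None):
        self._lock = threading.Lock()
        self._value = value

    def read(self):
        return self._value

    def cas(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def decide(reg, my_value):
    """Wait-free consensus: try to install my proposal with one conditional
    write from the initial state, then return whatever value is installed.
    Exactly one CAS ever succeeds, so all processes agree."""
    reg.cas(None, my_value)
    return reg.read()

# usage: 5 processes propose 1..5; all must decide the same proposed value
reg = CASRegister(None)
decisions, dec_lock = [], threading.Lock()

def propose(v):
    d = decide(reg, v)
    with dec_lock:
        decisions.append(d)

threads = [threading.Thread(target=propose, args=(v,)) for v in range(1, 6)]
for t in threads: t.start()
for t in threads: t.join()
# agreement: len(set(decisions)) == 1; validity: the decision was proposed
```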
Optimal-Time Adaptive Strong Renaming, with Applications to Counting (Extended Abstract)
In PODC 2011, San Jose, USA
Cited by 5 (3 self)
We give two new randomized algorithms for strong renaming, both of which work against an adaptive adversary in asynchronous shared memory. The first uses repeated sampling over a sequence of arrays of decreasing size to assign unique names to each of n processes with step complexity O(log³ n). The second transforms any sorting network into a strong adaptive renaming protocol, with an expected cost equal to the depth of the sorting network. Using an AKS sorting network, this gives a strong adaptive renaming algorithm with step complexity O(log k), where k is the contention in the current execution. We show this to be optimal based on a classic lower bound of Jayanti. We also show that any such strong renaming protocol can be used to build a monotone-consistent counter with logarithmic step complexity (at the cost of adding a max register) or a linearizable fetch-and-increment register (at the cost of increasing the step complexity by a logarithmic factor).
Solo-Valency and the Cost of Coordination
2007
Cited by 1 (1 self)
This paper introduces solo-valency, a variation on the valency proof technique originated by Fischer, Lynch, and Paterson. The new technique focuses on critical events that influence the responses of solo runs by individual operations, rather than on critical events that influence a protocol’s single decision value. It allows us to derive Ω(√n) lower bounds on the time to perform an operation for lock-free implementations of concurrent objects such as linearizable queues, stacks, sets, hash tables, counters, approximate agreement, and more. Time is measured as the number of distinct base objects accessed and the number of stalls caused by contention in accessing memory, incurred by a process as it performs a single operation. We introduce the influence level metric that quantifies the extent to which the response of a solo execution of one process can be changed by other processes. We then prove the existence of a relationship between the space complexity, latency, contention, and influence level of all lock-free object implementations. Our results are broad in that they hold for implementations that may use any collection of read-modify-write operations in addition to read and write, and in that they apply even if base objects have unbounded size.