Atomic Snapshots of Shared Memory
, 1993
Abstract

Cited by 169 (44 self)
This paper introduces a general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety. This paper presents three wait-free implementations of atomic snapshot memory. The first implementation in this paper uses unbounded (integer) fields in these registers, and is particularly easy to understand. The second implementation uses bounded registers. Its correctness proof follows the ideas of the unbounded implementation. Both constructions implement a single-writer snapshot memory, in which each word may be updated by only one process, from single-writer, n-reader registers. The third algorithm implements a multi-writer snapshot memory from atomic n-writer, n-reader registers, again echoing key ideas from the earlier constructions. All operations require Θ(n²) reads and writes to the component shared registers in the worst case. Categories and Subject Descriptors: ...
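The double-collect idea behind the unbounded single-writer construction can be sketched roughly as follows. This is a hypothetical, single-threaded Python illustration of the logic only, with all names invented here; a real implementation operates on atomic registers under true concurrency, where the embedded-view trick is what guarantees wait-freedom.

```python
# Sketch of a single-writer snapshot with unbounded sequence numbers.
# Sequential model for illustration; not an actual concurrent algorithm.

class SnapshotMemory:
    def __init__(self, n):
        # each register holds (value, sequence number, embedded view)
        self.n = n
        self.regs = [(None, 0, None) for _ in range(n)]

    def update(self, i, value):
        # an updater embeds a fresh scan in its register, so a concurrent
        # scanner that sees this process move twice can borrow the view
        view = self.scan()
        _, seq, _ = self.regs[i]
        self.regs[i] = (value, seq + 1, view)

    def collect(self):
        # read all registers one by one (not atomic in a real system)
        return list(self.regs)

    def scan(self):
        moved = set()
        a = self.collect()
        while True:
            b = self.collect()
            if all(a[j][1] == b[j][1] for j in range(self.n)):
                # two identical collects: a consistent snapshot
                return [r[0] for r in b]
            for j in range(self.n):
                if a[j][1] != b[j][1]:
                    if j in moved:
                        # process j moved twice since our scan began,
                        # so its embedded view is a valid snapshot
                        return b[j][2]
                    moved.add(j)
            a = b
```

In the sequential model the double collect always succeeds immediately; the `moved` bookkeeping is where the concurrent argument (and the Θ(n²) worst case) comes from.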
The Topological Structure of Asynchronous Computability
 JOURNAL OF THE ACM
, 1996
Abstract

Cited by 114 (11 self)
We give necessary and sufficient combinatorial conditions characterizing the tasks that can be solved by asynchronous processes, of which all but one can fail, that communicate by reading and writing a shared memory. We introduce a new formalism for tasks, based on notions from classical algebraic and combinatorial topology, in which a task's possible input and output values are each associated with high-dimensional geometric structures called simplicial complexes. We characterize computability in terms of the topological properties of these complexes. This characterization has a surprising geometric interpretation: a task is solvable if and only if the complex representing the task's allowable inputs can be mapped to the complex representing the task's allowable outputs by a function satisfying certain simple regularity properties. Our formalism thus replaces the "operational" notion of a wait-free decision task, expressed in terms of interleaved computations unfolding ...
The asynchronous computability theorem for t-resilient tasks
 In Proceedings of the 1993 ACM Symposium on Theory of Computing
, 1993
Abstract

Cited by 94 (14 self)
We give necessary and sufficient combinatorial conditions characterizing the computational tasks that can be solved by N asynchronous processes, up to t of which can fail by halting. The range of possible input and output values for an asynchronous task can be associated with a high-dimensional geometric structure called a simplicial complex. Our main theorem characterizes computability in terms of the topological properties of this complex. Most notably, a given task is computable only if it can be associated with a complex that is simply connected with trivial homology groups. In other words, the complex has "no holes!" Applications of this characterization include the first impossibility results for several long-standing open problems in distributed computing, such as the "renaming" problem of Attiya et al., the "k-set agreement" problem of Chaudhuri, and a generalization of the approximate agreement problem.
Hundreds of Impossibility Results for Distributed Computing
 Distributed Computing
, 2003
Abstract

Cited by 44 (4 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
A simple constructive computability theorem for wait-free computation
 In: Proceedings of the 1994 ACM Symposium on Theory of Computing 243–252
, 1994
Time-Lapse Snapshots
 Proceedings of Israel Symposium on the Theory of Computing and Systems
, 1994
Abstract

Cited by 28 (8 self)
A snapshot scan algorithm takes an "instantaneous" picture of a region of shared memory that may be updated by concurrent processes. Many complex shared memory algorithms can be greatly simplified by structuring them around the snapshot scan abstraction. Unfortunately, the substantial decrease in conceptual complexity is quite often counterbalanced by an increase in computational complexity. In this paper, we introduce the notion of a weak snapshot scan, a slightly weaker primitive that has a more efficient implementation. We propose the following methodology for using this abstraction: first, design and verify an algorithm using the more powerful snapshot scan, and second, replace the more powerful but less efficient snapshot with the weaker but more efficient snapshot, and show that the weaker abstraction nevertheless suffices to ensure the correctness of the enclosing algorithm. We give two examples of algorithms whose performance can be enhanced while retaining a simple m...
Message-Optimal Protocols for Byzantine Agreement
 MATHEMATICAL SYSTEMS THEORY
, 1991
Abstract

Cited by 28 (2 self)
It is often important for the correct processes in a distributed system to reach agreement, despite the presence of some faulty processes. Byzantine agreement (BA) is a paradigm problem that attempts to isolate the key features of reaching agreement. We focus here on the number of messages required to reach BA, with particular emphasis on the number of messages required in the failure-free runs, since these are the ones that occur most often in practice. The number of messages required is sensitive to the types of failures considered. In earlier work, Amdur et al. [1990] established tight upper and lower bounds on the worst and average-case number of messages required in failure-free runs for crash failures. We provide tight upper and lower bounds for all remaining types of failures that have been considered in the literature on the BA problem: receiving omission, sending omission and general omission failures, as well as arbitrary failures with or without message authentication. We ...
Universal Operations: Unary Versus Binary
, 1996
Abstract

Cited by 27 (2 self)
1 Introduction
2 Related Work
3 Preliminaries
3.1 The Asynchronous Shared-Memory Model
3.2 Sensitivity
4 The Left/Right Algorithm
4.1 The General Scheme
4.2 The Left/Right Algorithm
4.2.1 Overview
4.2.2 The Code
4.2.3 Correctness of the Algorithm
4.2.4 Analysis of the Algorithm
4.3 Inherently Asymmetric Data Structures
5 The Decision Algorithm
5.1 Monotone Paths
5.1.1 One Phase ...
A Time Complexity Lower Bound for Randomized Implementations of Some Shared Objects
 In Symposium on Principles of Distributed Computing
, 1998
Abstract

Cited by 22 (1 self)
Many recent wait-free implementations are based on a shared memory that supports a pair of synchronization operations, known as LL and SC. In this paper, we establish an intrinsic performance limitation of these operations: even the simple wakeup problem [16], which requires some process to detect that all n processes are up, cannot be solved unless some process performs Ω(log n) shared-memory operations. Using this basic result, we derive an Ω(log n) lower bound on the worst-case shared-access time complexity of n-process implementations of several types of objects, including fetch&increment, fetch&multiply, fetch&and, queue, and stack. (The worst-case shared-access time complexity of an implementation is the number of shared-memory operations that a process performs, in the worst case, in order to complete a single operation on the implementation.) Our lower bound is strong in several ways: it holds even if (1) shared memory has an infinite number of words, each of unbounded size, (2) sh...
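For readers unfamiliar with the LL/SC pair this abstract refers to, the semantics can be pinned down in a few lines. This is a toy, single-threaded Python model with invented names, not how hardware implements it: a version counter stands in for the link, and SC fails exactly when any write has occurred since the caller's LL.

```python
# Toy sequential model of a load-linked / store-conditional register.

class LLSCRegister:
    def __init__(self, value=0):
        self.value = value
        self.version = 0            # bumped on every successful write

    def ll(self, pid, links):
        # load-linked: remember which version this process observed
        links[pid] = self.version
        return self.value

    def sc(self, pid, links, new_value):
        # store-conditional: fails if anyone wrote since our LL
        if links.get(pid) != self.version:
            return False
        self.value = new_value
        self.version += 1
        return True
```

The failure case is what the lower bound exploits: an adversarial scheduler can keep invalidating links, forcing processes to repeat shared-memory operations.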
DCAS-Based Concurrent Deques
, 2000
Abstract

Cited by 21 (6 self)
The computer industry is currently examining the use of strong synchronization operations such as double compare-and-swap (DCAS) as a means of supporting nonblocking synchronization on tomorrow's multiprocessor machines. However, before such a strong primitive will be incorporated into hardware design, its utility needs to be proven by developing a body of effective nonblocking data structures using DCAS. As part of this effort, we present two new linearizable nonblocking implementations of concurrent deques using the DCAS operation. The first uses an array representation, and improves on former algorithms by allowing uninterrupted concurrent access to both ends of the deque while correctly handling the difficult boundary cases when the deque is empty or full. The second uses a linked-list representation, and is the first nonblocking unbounded-memory deque implementation. It too allows uninterrupted concurrent access to both ends of the deque.
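The DCAS operation these deques build on atomically updates two memory locations only if both still hold their expected values. A hypothetical software model, assuming a lock merely as a stand-in for hardware atomicity (real DCAS would be a single machine instruction), looks like:

```python
import threading

# Software model of DCAS: atomically compare two locations against
# expected values and, only if both match, install both new values.
_dcas_lock = threading.Lock()

def dcas(mem, i, j, expect_i, expect_j, new_i, new_j):
    with _dcas_lock:  # models the all-or-nothing hardware step
        if mem[i] == expect_i and mem[j] == expect_j:
            mem[i], mem[j] = new_i, new_j
            return True
        return False
```

The appeal for deques is that one DCAS can move a boundary index and write the adjacent cell in a single atomic step, which is exactly the pair of locations that single-word CAS cannot cover at the two ends of the structure.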