## Are Wait-Free Algorithms Fast? (1991)


### Download Links

- [theory.lcs.mit.edu]
- [www.math.tau.ac.il]
- DBLP

### Other Repositories/Bibliography

Citations: 39 (11 self)

### BibTeX

    @MISC{Attiya91arewait-free,
      author = {Hagit Attiya and Nancy Lynch and Nir Shavit},
      title = {Are Wait-Free Algorithms Fast?},
      year = {1991}
    }


### Abstract

The time complexity of wait-free algorithms in "normal" executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of Ω(log n) on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant factor of the lower bound.
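The successive-averaging idea behind the O(log n) upper bound can be illustrated with a toy, purely sequential sketch (the function name and structure are ours, not the paper's; asynchrony and failures are ignored):

```python
def approximate_agreement(inputs, eps):
    """Toy sequential sketch of convergence by successive averaging.

    Each round, every process moves halfway toward the midpoint of the
    minimum and maximum values, which halves the spread (diameter).
    This only illustrates why the number of averaging rounds grows like
    log(diameter / eps); it is not the paper's wait-free algorithm.
    """
    values = list(inputs)
    rounds = 0
    while max(values) - min(values) > eps:
        mid = (min(values) + max(values)) / 2
        values = [(v + mid) / 2 for v in values]  # diameter halves
        rounds += 1
    return values, rounds
```

Since the diameter halves each round and outputs stay inside the range of the inputs, roughly ceil(log2(diameter/ε)) rounds suffice.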

### Citations

1520 | Impossibility of distributed consensus with one faulty process
- Fischer, Lynch, et al.
- 1985
Citation Context: ...ts must all be within a given distance ε of each other, and must be included within the range of the inputs. This problem, a weaker variant of the well-studied problem of distributed consensus (e.g., [22, 31]), is closely related to the important problem of synchronizing local clocks in a distributed system. Approximate agreement can be achieved very easily if waiting is allowed, by having a designated pr...

1314 | The Byzantine Generals Problem
- Lamport, Shostak, et al.
- 1982
Citation Context: ...ts must all be within a given distance ε of each other, and must be included within the range of the inputs. This problem, a weaker variant of the well-studied problem of distributed consensus (e.g., [22, 31]), is closely related to the important problem of synchronizing local clocks in a distributed system. Approximate agreement can be achieved very easily if waiting is allowed, by having a designated pr...

763 | Wait-Free Synchronization
- Herlihy
- 1991
Citation Context: ...a mechanism for computing in the face of variable speeds and failures: a wait-free algorithm guarantees that each nonfaulty process terminates regardless of the speed and failure of other processes ([24, 29]). The design of wait-free algorithms has been a very active area of research recently (see, e.g., [1, 2, 4, 15, 24, 29, 30, 33, 43, 44, 46, 49]). Because wait-free algorithms guarantee that fast pr...

331 | Proving the correctness of multiprocess programs
- Lamport
- 1977
Citation Context: ...thms may fail to terminate in this case, the comparison should only be made in executions in which no process fails, i.e., in failure-free executions. The time measure we use is the one introduced in [27, 28], and used to evaluate the time complexity of asynchronous algorithms, in, e.g., [3, 13, 35, 36, 45]. To summarize, we are interested in measuring the time cost imposed by the wait-free property, as m...

185 | Hints for Computer System Design
- Lampson
- 1983
Citation Context: ...time complexity in "normal executions," i.e., executions where no failures occur and processes run at approximately the same pace, while building in safety provisions to protect against failures (cf. [32]). Our results indicate that, in the asynchronous shared-memory setting, there are problems for which building in such safety provisions must result in performance degradation in the normal executions...

174 | Atomic Snapshots of Shared Memory - Afek, Attiya, et al. - 1990

158 | Memory Requirements for Agreement Among Unreliable Asynchronous Processes
- Loui, Abu-Amara
- 1987
Citation Context: ...ure research. 2 Model of Computation and Time Measure. In this section we describe the systems and the time measure we will consider. Our definitions are standard and are similar to the ones in, e.g., [3, 24, 29, 30, 34, 35]. A system consists of n processes p_0, ..., p_{n-1}. Each process is a deterministic state machine, with a possibly infinite number of states. We associate with each process a set of local st...

134 | Knowledge and common knowledge in a Byzantine environment I: crash failures, Theoretical Aspects of Reasoning about Knowledge
- Dwork, Moses
- 1986
Citation Context: ...executions. This situation contrasts with that occurring, for example, in synchronous systems that solve the distributed consensus problem. In that setting, there are early-stopping algorithms (e.g., [17, 19, 41]) that tolerate failures, yet still terminate in constant time when no failures occur. [Footnote: The lower bound is attained in an execution where processes run synchronously and no process fails.] The exact...

133 | Fast randomized consensus using shared memory
- Aspnes, Herlihy
- 1990
Citation Context: ...each nonfaulty process terminates regardless of the speed and failure of other processes ([24, 29]). The design of wait-free algorithms has been a very active area of research recently (see, e.g., [1, 2, 4, 15, 24, 29, 30, 33, 43, 44, 46, 49]). Because wait-free algorithms guarantee that fast processes terminate without waiting for slow processes, wait-free algorithms seem to be generally thought of as fast. However, while it is obvious f...

126 | Sharing memory robustly in message-passing systems
- Attiya, Bar-Noy, et al.
- 1995
Citation Context: ...we have shown that log n is a lower bound on the time complexity of any wait-free approximate agreement algorithm, while there exists an O(1) time non-wait-free algorithm. Using the emulators of [5], our algorithms can be translated into algorithms that work in message-passing systems. The algorithms have the same time complexity (in complete networks) and are resilient to the failure of a major...

110 | On interprocess communication, Part I: Basic formalism
- Lamport
- 1986
Citation Context: ...a mechanism for computing in the face of variable speeds and failures: a wait-free algorithm guarantees that each nonfaulty process terminates regardless of the speed and failure of other processes ([24, 29]). The design of wait-free algorithms has been a very active area of research recently (see, e.g., [1, 2, 4, 15, 24, 29, 30, 33, 43, 44, 46, 49]). Because wait-free algorithms guarantee that fast pr...

109 | Composite Registers
- Anderson
- 1993
Citation Context: ...each nonfaulty process terminates regardless of the speed and failure of other processes ([24, 29]). The design of wait-free algorithms has been a very active area of research recently (see, e.g., [1, 2, 4, 15, 24, 29, 30, 33, 43, 44, 46, 49]). Because wait-free algorithms guarantee that fast processes terminate without waiting for slow processes, wait-free algorithms seem to be generally thought of as fast. However, while it is obvious f...

107 | Reaching approximate agreement in the presence of faults
- Dolev, Lynch, et al.
- 1986
Citation Context: ...computation time in the most normal (failure-free) case. In this paper, we address the general question by considering a specific problem---the approximate agreement problem studied, for example, in [16, 20, 21, 37]; we study this problem in the context of a particular shared-memory primitive---single-writer multi-reader atomic registers. In this problem, each process starts with a real-valued input, and (provid...

106 | Atomic Shared Register Access by Asynchronous Hardware
- Vitanyi, Awerbuch
- 1986
Citation Context: ...each nonfaulty process terminates regardless of the speed and failure of other processes ([24, 29]). The design of wait-free algorithms has been a very active area of research recently (see, e.g., [1, 2, 4, 15, 24, 29, 30, 33, 43, 44, 46, 49]). Because wait-free algorithms guarantee that fast processes terminate without waiting for slow processes, wait-free algorithms seem to be generally thought of as fast. However, while it is obvious f...

94 | Programming simultaneous actions using common knowledge - Moses, Tuttle

87 | Concurrent Reading While Writing
- Peterson
- 1983

83 | Impossibility and Universality Results for Wait-Free Synchronization - Herlihy - 1988

75 | The APRAM: Incorporating Asynchrony into the PRAM Model
- Cole, Zajicek
- 1989
Citation Context: ...in which no process fails, i.e., in failure-free executions. The time measure we use is the one introduced in [27, 28], and used to evaluate the time complexity of asynchronous algorithms, in, e.g., [3, 13, 35, 36, 45]. To summarize, we are interested in measuring the time cost imposed by the wait-free property, as measured in terms of extra computation time in the most normal (failure-free) case. In this paper, we...

68 | Efficient robust parallel computations
- Kedem, Palem, et al.
- 1990
Citation Context: ...s addressed the issue of adapting the usual synchronous shared-memory PRAM model to better reflect implementation issues, by reducing synchrony ([13, 14, 23, 38, 42]) or by requiring fault-tolerance ([25, 26]). To the best of our knowledge, the impact of the combination of asynchrony and fault-tolerance (as exemplified by the wait-free model) on the time complexity of shared-memory algorithms has not prev...

68 | Economical Solutions for the Critical Section Problem in a Distributed System
- Peterson, Fischer
- 1977
Citation Context: ...in which no process fails, i.e., in failure-free executions. The time measure we use is the one introduced in [27, 28], and used to evaluate the time complexity of asynchronous algorithms, in, e.g., [3, 13, 35, 36, 45]. To summarize, we are interested in measuring the time cost imposed by the wait-free property, as measured in terms of extra computation time in the most normal (failure-free) case. In this paper, we...

64 | Efficient parallel algorithms can be made robust. Distributed Computing 5(4), 201–217 (1992). A preliminary version appears
- Kanellakis, Shvartsman
- 1989
Citation Context: ...s addressed the issue of adapting the usual synchronous shared-memory PRAM model to better reflect implementation issues, by reducing synchrony ([13, 14, 23, 38, 42]) or by requiring fault-tolerance ([25, 26]). To the best of our knowledge, the impact of the combination of asynchrony and fault-tolerance (as exemplified by the wait-free model) on the time complexity of shared-memory algorithms has not prev...

61 | Concurrent reading while writing II: The multi-writer case
- Peterson, Burns
- 1987

60 | On interprocess communication, Part II: Algorithms
- Lamport
- 1986

53 | The Complexity of Parallel Computations
- Wyllie
- 1979
Citation Context: ...ry, and only on them, there exists an optimally fast wait-free solution. Our algorithm, presented in Figure 7, is a wait-free variation of the pointer-jumping technique used in PRAM algorithms (e.g., [50]). Think of the registers R_i, i in {1..n}, as being arranged in a circle (hence indices are modulo n). To achieve logarithmic time complexity, a process writes in the register R_i not only its value,...
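The doubling idea behind pointer jumping in this context can be sketched as follows (names are ours; this is ordinary synchronous doubling on a ring, not the paper's wait-free variant):

```python
def collect_by_doubling(values):
    """Sketch of pointer jumping / doubling on a ring of n slots.

    known[i] holds the set of indices whose values slot i has learned.
    In round k, slot i merges the knowledge of slot (i + 2^k) mod n, so
    the amount of information each slot holds doubles per round and
    every slot knows all n values after ceil(log2 n) rounds.
    """
    n = len(values)
    known = [{i} for i in range(n)]
    jump, rounds = 1, 0
    while jump < n:
        known = [known[i] | known[(i + jump) % n] for i in range(n)]
        jump *= 2
        rounds += 1
    return known, rounds
```

The logarithmic round count is why a wait-free variation of this technique can match the Ω(log n) lower bound up to a constant factor.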

50 | On Describing the Behavior and Implementation of Distributed Systems
- Lynch, Fischer
- 1981
Citation Context: ...in which no process fails, i.e., in failure-free executions. The time measure we use is the one introduced in [27, 28], and used to evaluate the time complexity of asynchronous algorithms, in, e.g., [3, 13, 35, 36, 45]. To summarize, we are interested in measuring the time cost imposed by the wait-free property, as measured in terms of extra computation time in the most normal (failure-free) case. In this paper, we...

48 | Inexact agreement: Accuracy, precision, and graceful degradation
- Mahaney, Schneider
- 1985
Citation Context: ...computation time in the most normal (failure-free) case. In this paper, we address the general question by considering a specific problem---the approximate agreement problem studied, for example, in [16, 20, 21, 37]; we study this problem in the context of a particular shared-memory primitive---single-writer multi-reader atomic registers. In this problem, each process starts with a real-valued input, and (provid...

42 | Optimal time randomized consensus - making resilient algorithms fast in practice
- Saks, Shavit, et al.
- 1991
Citation Context: ...hms for all problems that have wait-free solutions? Since the preliminary presentation of our work, first steps have been made towards answering this question in the context of randomized computation [47]. Based on the alternated interleaving method presented in Section 6.2, Saks, Shavit and Woll [47] are able to show that any decision problem that has a wait-free or expected wait-free solution algo...

39 | Crash Recovery in a Distributed Database System
- Skeen
- 1982
Citation Context: ...it is easy to see that the time complexity of this algorithm is constant---independent of n, the range of inputs and ε. [Footnote: Wait-free is the shared-memory analogue of the non-blocking property for synchronous transaction systems (cf. [11, 48]).] On the other hand, there is a relatively simple wait-free algorithm for this problem, which we describe in Section 3, and which is based on successive averaging o...

38 | Asynchronous PRAMs are (almost) as good as synchronous PRAMs
- Martel, Subramonian, et al.
- 1990
Citation Context: ...owledge, the impact of the combination of asynchrony and fault-tolerance (as exemplified by the wait-free model) on the time complexity of shared-memory algorithms has not previously been studied. In [39], Martel, Subramonian and Park present efficient fault-tolerant asynchronous PRAM algorithms. Their algorithms optimize work rather than time and employ randomization. Another major difference is that...

37 | The expected advantage of asynchrony
- Cole, Zajicek
- 1990
Citation Context: ...xecutions, as blocking protocols ([11]). Recent work has addressed the issue of adapting the usual synchronous shared-memory PRAM model to better reflect implementation issues, by reducing synchrony ([13, 14, 23, 38, 42]) or by requiring fault-tolerance ([25, 26]). To the best of our knowledge, the impact of the combination of asynchrony and fault-tolerance (as exemplified by the wait-free model) on the time complexi...

35 | A combinatorial characterization of the distributed tasks that are solvable in the presence of one faulty processor
- Biran, Moran, et al.
- 1988
Citation Context: ...other problems. We believe, for example, that the O(1) time algorithm for two-process approximate agreement can be generalized to any decision problem of size 2, using the characterization result of [9]. It is interesting to explore whether similar results can be proved for problems that require repeated coordination (e.g., ℓ-exclusion). Finally, there remains the fundamental unanswered question rai...

35 | Asynchronous shared memory parallel computation
- Nishimura
- 1990
Citation Context: ...xecutions, as blocking protocols ([11]). Recent work has addressed the issue of adapting the usual synchronous shared-memory PRAM model to better reflect implementation issues, by reducing synchrony ([13, 14, 23, 38, 42]) or by requiring fault-tolerance ([25, 26]). To the best of our knowledge, the impact of the combination of asynchrony and fault-tolerance (as exemplified by the wait-free model) on the time complexi...

34 | Work-optimal asynchronous algorithms for shared memory parallel computers
- Martel, Park, et al.
- 1992
Citation Context: ...xecutions, as blocking protocols ([11]). Recent work has addressed the issue of adapting the usual synchronous shared-memory PRAM model to better reflect implementation issues, by reducing synchrony ([13, 14, 23, 38, 42]) or by requiring fault-tolerance ([25, 26]). To the best of our knowledge, the impact of the combination of asynchrony and fault-tolerance (as exemplified by the wait-free model) on the time complexi...

31 | Time Bounds for Real-Time Process Control in the Presence of Timing Uncertainty
- Attiya, Lynch
- 1989
Citation Context: ...if X is nonempty then diam(X) is the length of the interval range(X). If X is nonempty, then mid(X) = (min_{x in X} x + max_{x in X} x)/2. These definitions can also be formalized in the timed automaton model ([40, 7]). [Footnote: Except, possibly, for the last segment.] [Figure: wait-approx code. Process p_0: function wait-approx(x) returns real; begin 1: V_0 := x; 2: return x; end. Process p_1: function wait-approx(x) returns real; begin 1: repeat until V...]
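The wait-approx pseudocode embedded in this context can be reconstructed as a toy two-process sketch (our reconstruction, not the paper's exact code; Python threads and an event stand in for shared registers):

```python
import threading

# Toy reconstruction of the constant-time, non-wait-free scheme in the
# quoted figure: a designated process p0 publishes its input, and p1
# simply waits for it. If p0 crashed before writing, p1 would block
# forever, which is exactly why this algorithm is not wait-free.

V0 = None
ready = threading.Event()

def wait_approx_p0(x):
    """Designated process: publish the input and decide on it."""
    global V0
    V0 = x
    ready.set()
    return x

def wait_approx_p1(x):
    """Other process: wait for p0's value and adopt it (waiting!)."""
    ready.wait()
    return V0  # exact, hence approximate, agreement on p0's input
```

Both processes decide on p0's input, so their outputs trivially agree and lie within the range of the inputs, in constant time for p0.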

28 | Bounded polynomial randomized consensus
- Attiya, Dolev, et al.
- 1989
Citation Context: ...ed as a parameter, it is assumed that all processes have exactly the same value of ε. [Footnote: Though one can devise algorithms that do not require a process to maintain suggestions for all past phases (cf. [6]), we have chosen to maintain all suggestions in order to simplify the exposition and proofs.] [Figure: shared var S: snapshot object [1..n] of array [1..] of real; function wait-free-approx(x, ε) returns r...]

25 | On the minimal synchrony needed for distributed consensus
- Dolev, Dwork, et al.
- 1987
Citation Context: ...re and elsewhere, we let ī denote the index of the other process, i.e., ī = 1 - i. Due to the asynchrony in the system, it is impossible to have processes agree on one of the input values (see [18, 22, 34]). Thus, our algorithm has them gradually converge from the input values x_0 and x_1 to values that are only ε apart. A process p_i repeatedly does the following: it writes its value v_i (initially th...

25 | Asymptotically optimal algorithms for approximate agreement
- Fekete
- 1990
Citation Context: ...computation time in the most normal (failure-free) case. In this paper, we address the general question by considering a specific problem---the approximate agreement problem studied, for example, in [16, 20, 21, 37]; we study this problem in the context of a particular shared-memory primitive---single-writer multi-reader atomic registers. In this problem, each process starts with a real-valued input, and (provid...

24 | Efficiency of synchronous versus asynchronous distributed systems
- Arjomandi, Fischer, et al.
- 1983

24 | 'Eventual' Is Earlier than 'Immediate'
- Dolev, Reischuk, et al.
- 1982
Citation Context: ...executions. This situation contrasts with that occurring, for example, in synchronous systems that solve the distributed consensus problem. In that setting, there are early-stopping algorithms (e.g., [17, 19, 41]) that tolerate failures, yet still terminate in constant time when no failures occur. [Footnote: The lower bound is attained in an execution where processes run synchronously and no process fails.] The exact...

13 | The inherent cost of nonblocking commitment
- Dwork, Skeen
- 1983
Citation Context: ...it is easy to see that the time complexity of this algorithm is constant---independent of n, the range of inputs and ε. [Footnote: Wait-free is the shared-memory analogue of the non-blocking property for synchronous transaction systems (cf. [11, 48]).] On the other hand, there is a relatively simple wait-free algorithm for this problem, which we describe in Section 3, and which is based on successive averaging o...

13 | Asynchronous approximate agreement
- Fekete
- 1994

9 | Towards a non-atomic era: ℓ-exclusion as a test case
- Dolev, Gafni, et al.
- 1988

8 | The synchronization of independent processes
- Lamport
- 1976
Citation Context: ...thms may fail to terminate in this case, the comparison should only be made in executions in which no process fails, i.e., in failure-free executions. The time measure we use is the one introduced in [27, 28], and used to evaluate the time complexity of asynchronous algorithms, in, e.g., [3, 13, 35, 36, 45]. To summarize, we are interested in measuring the time cost imposed by the wait-free property, as m...

7 | Simultaneity is harder than agreement
- Coan, Dwork
- 1991
Citation Context: ...attained in an execution where processes run synchronously and no process fails. ...no failures occur. The exact cost imposed by fault-tolerance on normal executions has been studied, for example, in [10, 19, 41]. For synchronous message-passing systems, it has been shown that non-blocking protocols take twice as much time, in failure-free executions, as blocking protocols ([11]). Recent work has addressed th...

6 | Towards Better Shared Memory Programming Models
- Gibbons
- 1988
Citation Context: ...range of inputs or on ε (Section 5). The algorithm uses a novel method of overcoming the uncertainty that is inherent in an asynchronous environment, without resorting to synchronization points (cf. [23]) or other waiting mechanisms (cf. [13]): this method involves ensuring that the two processes base their decisions on information that is approximately, but not exactly, the same. Next, using a power...

4 | Upper and Lower Time Bounds for Parallel RAMs Without Simultaneous Writes
- Cook, Dwork, et al.
- 1986
Citation Context: ...ss p_i collects all the values, computing the function can be done locally in constant time. Since Ω(log n) is a lower bound on the time for the information collection problem (see, e.g., [12]), this implies that for problems whose output depends on all the initial values in memory, and only on them, there exists an optimally fast wait-free solution. Our algorithm, presented in Figure 7, i...

3 | Lecture notes for 6.852
- Lynch, Goldman
- 1989

3 | Time Constrained Automata (manuscript)
- Merritt, Modugno, et al.
- 1988
Citation Context: ...if X is nonempty then diam(X) is the length of the interval range(X). If X is nonempty, then mid(X) = (min_{x in X} x + max_{x in X} x)/2. These definitions can also be formalized in the timed automaton model ([40, 7]). [Footnote: Except, possibly, for the last segment.] [Figure: wait-approx code. Process p_0: function wait-approx(x) returns real; begin 1: V_0 := x; 2: return x; end. Process p_1: function wait-approx(x) returns real; begin 1: repeat until V...]

2 | How to share concurrent wait-free variables (ICALP)
- Li, Tromp, et al.
- 1989

2 | On the Correctness of Atomic Multi-Writer Registers (MIT/LCS/TM-364)
- Schaffer
- 1988