Results 1–10 of 52
Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
, 1981
Abstract

Cited by 89 (2 self)
In this paper we implement several basic operating system primitives by using a "replace-add" operation, which can supersede the standard "test-and-set", and which appears to be a universal primitive for efficiently coordinating large numbers of independently acting sequential processors. We also present a hardware implementation of replace-add that permits multiple replace-adds to be processed nearly as efficiently as loads and stores. Moreover, the crucial special case of concurrent replace-adds updating the same variable is handled particularly well: if every PE simultaneously directs a replace-add at the same variable, all these requests are satisfied in the time required to process just one request.
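The replace-add primitive can be modeled in software; a minimal Python sketch follows (the class name `ReplaceAdd` and the lock-based simulation are illustrative only — the paper's point is a hardware implementation that needs no lock):

```python
import threading

class ReplaceAdd:
    """Software model of the replace-add primitive: atomically add
    `delta` to a shared variable and return the NEW value (unlike
    fetch-and-add, which returns the old one). The lock merely stands
    in for the hardware atomicity described in the paper."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def replace_add(self, delta):
        with self._lock:
            self._value += delta
            return self._value

# Example: several processors claim unique queue slots with a single
# replace-add each -- no test-and-set retry loop is needed.
tail = ReplaceAdd(0)
slots = []
slots_lock = threading.Lock()

def claim_slot():
    s = tail.replace_add(1)   # each caller receives a distinct index
    with slots_lock:
        slots.append(s)

threads = [threading.Thread(target=claim_slot) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every simulated processor obtained a distinct slot in 1..8
```

Note the contrast with test-and-set: a failed test-and-set must retry, whereas every concurrent replace-add succeeds and yields a distinct result in one operation.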
Constructing Two-Writer Atomic Registers
, 1987
Abstract

Cited by 72 (0 self)
In this paper, we construct a 2-writer, n-reader atomic memory register from two 1-writer, (n+1)-reader atomic memory registers. There are no restrictions on the size of the constructed register. The simulation requires only a single extra bit per real register, and can survive the failure of any set of readers and writers. This construction is part of a systematic investigation of register simulations by several researchers.
The Mutual Exclusion Problem, Part II: Statement and Solutions
, 2000
Abstract

Cited by 55 (3 self)
The theory developed in Part I is used to state the mutual exclusion problem and several additional fairness and failure-tolerance requirements. Four "distributed" N-process solutions are given, ranging from a solution requiring only one communication bit per process that permits individual starvation, to one requiring about N! communication bits per process that satisfies every reasonable fairness and failure-tolerance requirement that we can conceive of.
On Describing the Behavior and Implementation of Distributed Systems
, 1981
Abstract

Cited by 48 (14 self)
A simple, basic, and general model for describing both the (input/output) behavior and the implementation of distributed systems is presented. An important feature of the model is the separation of the machinery used to describe the implementation from that used to describe the behavior. This feature makes the model potentially useful for design specification of systems and of subsystems.
The Mutual Exclusion Problem, Part I: A Theory of Interprocess Communication
, 2000
Abstract

Cited by 47 (4 self)
A novel formal theory of concurrent systems is introduced that does not assume any atomic operations. The execution of a concurrent program is modeled as an abstract set of operation executions with two temporal ordering relations: "precedence" and "can causally affect". A primitive interprocess communication mechanism is then defined. In Part II, the mutual exclusion problem is expressed precisely in terms of this model, and solutions using the communication mechanism are given.
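The model's two relations can be pictured concretely; a toy Python sketch (the names and the example execution are ours, not the paper's notation) checks one of the model's basic axioms, that precedence implies causal influence:

```python
# A system execution as a finite set of operation executions with two
# temporal ordering relations, here encoded as sets of ordered pairs:
#   precedes:   (A, B) means A finished before B started
#   can_affect: (A, B) means A can causally affect B
ops = {"A", "B", "C"}
precedes = {("A", "B")}
can_affect = {("A", "B"), ("A", "C"), ("C", "B")}

# Axiom: whenever A precedes B, A can causally affect B.
# (can_affect may hold between overlapping operations too, which is
# exactly what makes the model useful without atomicity assumptions.)
assert precedes <= can_affect
```

The point of keeping the two relations separate is that operation executions may overlap in time: neither precedes the other, yet each may still causally affect the other.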
A Fast, Scalable Mutual Exclusion Algorithm
 Distributed Computing
, 1994
Abstract

Cited by 45 (12 self)
This paper is concerned with synchronization under read/write atomicity in shared-memory multiprocessors. We present a new algorithm for N-process mutual exclusion that requires only read and write operations and that has O(log N) time complexity, where "time" is measured by counting remote memory references. The time complexity of this algorithm is better than that of all prior solutions to the mutual exclusion problem that are based upon atomic read and write instructions; in fact, the time complexity of most prior solutions is unbounded. Performance studies are presented that show that our mutual exclusion algorithm exhibits scalable performance under heavy contention. In fact, its performance rivals that of the fastest queue-based spin locks based on strong primitives such as compare-and-swap and fetch-and-add. We also present a modified version of our algorithm that generates only O(1) memory references in the absence of contention. Keywords: fast mutual exclusion, local spinning, ...
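Read/write-only mutual exclusion can be illustrated with the classic two-process Peterson lock; a minimal Python sketch (this is not the paper's O(log N) construction, but the kind of two-process read/write building block that tree-based N-process algorithms compose; the busy-wait relies on Python threads behaving sequentially consistently):

```python
import threading

# Two-process mutual exclusion using only reads and writes
# (Peterson's algorithm).
flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # tie-breaker: who yields on a conflict
counter = 0             # shared state protected by the lock

def worker(i, iterations=500):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True
        turn = other                      # politely let the other go first
        while flag[other] and turn == other:
            pass                          # busy-wait (only reads here)
        counter += 1                      # critical section
        flag[i] = False                   # exit protocol

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
# no lost updates despite using no atomic read-modify-write primitive
```

On real hardware with relaxed memory ordering this sketch would additionally need fences; read/write atomicity, the paper's setting, assumes individual reads and writes take effect atomically.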
Are Wait-Free Algorithms Fast?
, 1991
Abstract

Cited by 42 (12 self)
The time complexity of wait-free algorithms in "normal" executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.
Using Mappings to Prove Timing Properties
, 1989
Abstract

Cited by 35 (8 self)
A new technique for proving timing properties for timing-based algorithms is described; it is an extension of the mapping techniques previously used in proofs of safety properties for asynchronous concurrent systems. The key to the method is a way of representing a system with timing constraints as an automaton whose state includes predictive timing information. Timing assumptions and timing requirements for the system are both represented in this way. A multivalued mapping from the "assumptions automaton" to the "requirements automaton" is then used to show that the given system satisfies the requirements. One type of mapping is based on a collection of "progress functions" providing measures of progress toward timing goals. The technique is illustrated with two examples, a simple resource manager and a two-process race system.
Resource Allocation With Immunity To Limited Process Failure
, 1979
Abstract

Cited by 32 (7 self)
Upper and lower bounds are proved for the shared space requirements for solution of several problems involving resource allocation among asynchronous processes. Controlling the degradation of performance when a limited number of processes fail is of particular interest.