Results 1-10 of 70
Atomic Snapshots of Shared Memory
, 1993
Abstract

Cited by 169 (44 self)
This paper introduces a general formulation of atomic snapshot memory, a shared memory partitioned into words written (updated) by individual processes, or instantaneously read (scanned) in its entirety. This paper presents three wait-free implementations of atomic snapshot memory. The first implementation in this paper uses unbounded (integer) fields in these registers, and is particularly easy to understand. The second implementation uses bounded registers. Its correctness proof follows the ideas of the unbounded implementation. Both constructions implement a single-writer snapshot memory, in which each word may be updated by only one process, from single-writer, n-reader registers. The third algorithm implements a multi-writer snapshot memory from atomic n-writer, n-reader registers, again echoing key ideas from the earlier constructions. All operations require Θ(n²) reads and writes to the component shared registers in the worst case. Categories and Subject Descriptors:...
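The scan/update interface described in this abstract can be illustrated with the classic "double collect" idea behind such snapshot constructions. The sketch below is a sequential simulation under assumed names (SnapshotMemory, update, scan are illustrative, not the paper's); the real wait-free algorithms additionally embed scans inside updates to bound the number of retries, which this sketch omits.

```python
# Illustrative sketch of the "double collect" scan: collect all
# registers twice; if nothing changed between the two collects,
# the collect is a valid snapshot. A sequential simulation only --
# the retry loop matters solely under concurrent updates.

class SnapshotMemory:
    def __init__(self, n):
        # Each register holds (value, sequence number); the sequence
        # number lets a scanner detect an intervening update even if
        # the value happens to be rewritten unchanged.
        self.regs = [(None, 0)] * n

    def update(self, i, value):
        _, seq = self.regs[i]
        self.regs[i] = (value, seq + 1)

    def collect(self):
        return list(self.regs)

    def scan(self):
        # Repeat until two successive collects are identical.
        while True:
            first = self.collect()
            second = self.collect()
            if first == second:
                return [value for value, _ in second]

mem = SnapshotMemory(3)
mem.update(0, "a")
mem.update(2, "c")
print(mem.scan())  # ['a', None, 'c']
```

Without the embedded-scan trick, a scanner can starve when updates keep arriving; bounding that retry loop is exactly what makes the paper's constructions wait-free.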
Total order broadcast and multicast algorithms: Taxonomy and survey
 ACM COMPUTING SURVEYS
, 2004
Causal Memory: Definitions, Implementation and Programming
, 1994
Abstract

Cited by 95 (9 self)
The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory. College of Computing, Georgia Institute of Technology, Atlanta, Georgia 30332-0280. This ...
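The causal ordering this abstract relies on is commonly tracked with vector clocks in message-passing systems. The sketch below shows that mechanism under assumed names (happens_before, local_event, on_receive are illustrative); it is not the paper's implementation, only the standard causality test: event a causally precedes b iff a's vector is componentwise ≤ b's and differs somewhere.

```python
# Sketch of causality tracking with vector clocks. Each process i
# keeps a vector of counters; it increments entry i on a local event
# and takes a componentwise max (then increments) on receipt.

def happens_before(v1, v2):
    # v1 causally precedes v2 iff componentwise <= and not equal.
    return all(a <= b for a, b in zip(v1, v2)) and v1 != v2

def local_event(clock, i):
    clock = list(clock)
    clock[i] += 1
    return clock

def on_receive(clock, msg_clock, i):
    # Merge: componentwise max of the two clocks, then tick our entry.
    clock = [max(a, b) for a, b in zip(clock, msg_clock)]
    clock[i] += 1
    return clock

# Process 0 performs a write, sends it to process 1, which then writes:
w0 = local_event([0, 0], 0)      # [1, 0]
w1 = on_receive([0, 0], w0, 1)   # [1, 1] -- causally after w0
print(happens_before(w0, w1))    # True

# A write process 1 does without seeing w0 is concurrent with it:
w2 = local_event([0, 0], 1)      # [0, 1]
print(happens_before(w0, w2), happens_before(w2, w0))  # False False
```

Causal memory only constrains the order of related pairs like (w0, w1); concurrent pairs like (w0, w2) may be applied in different orders at different processes, which is the extra concurrency the abstract mentions.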
Modelling Knowledge and Action in Distributed Systems
 Distributed Computing
, 1988
Abstract

Cited by 85 (28 self)
We present a formal model that captures the subtle interaction between knowledge and action in distributed systems. We view a distributed system as a set of runs, where a run is a function from time to global states and a global state is a tuple consisting of an environment state and a local state for each process in the system. This model is a generalization of those used in many previous papers. Actions in this model are associated with functions from global states to global states. A protocol is a function from local states to actions. We extend the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge. Knowledge-based protocols provide a natural way of describing how actions should take place in a distributed system. Finally, we show how the notion of one protocol implementing another can be captured in our model. Some material in this paper appeared in preliminary form in [HF85]. An abridge...
Are Wait-Free Algorithms Fast?
, 1991
Abstract

Cited by 42 (12 self)
The time complexity of wait-free algorithms in "normal" executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.
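The approximate agreement problem in this abstract asks processes with different input values to end up within ε of each other. The averaging idea at its core can be sketched as below; this is an illustrative failure-free round schedule (each process averages with one neighbour on a ring), not the paper's algorithm, and the names are mine.

```python
# Sketch of approximate agreement by repeated averaging: each round,
# every process replaces its value with the average of its own value
# and its clockwise neighbour's. In this failure-free schedule the
# spread (max - min) contracts geometrically until it is within
# epsilon, at which point the processes have "approximately agreed".

def one_round(values):
    n = len(values)
    return [(values[i] + values[(i + 1) % n]) / 2 for i in range(n)]

def approximate_agreement(values, epsilon):
    rounds = 0
    while max(values) - min(values) > epsilon:
        values = one_round(values)
        rounds += 1
    return values, rounds

vals, rounds = approximate_agreement([0.0, 8.0, 4.0], epsilon=0.5)
print(max(vals) - min(vals) <= 0.5)  # True
```

Validity is preserved because each new value is a convex combination of old ones, so all values stay within the original input range; the paper's contribution is achieving this contraction wait-free in O(log n) time despite failures.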
Timing and Causality in Process Algebra
 Acta Informatica
, 1992
Abstract

Cited by 27 (0 self)
There has been considerable controversy in concurrency theory between the 'interleaving' and 'true concurrency' schools. The former school advocates associating a transition system with a process which captures concurrent execution via the interleaving of occurrences; the latter adopts more complex semantic structures to avoid reducing concurrency to interleaving. In this paper we show that the two approaches are not irreconcilable. We define a timed process algebra where occurrences are associated with intervals of time, and give it a transition system semantics. This semantics has many of the advantages of the interleaving approach; the algebra admits an expansion theorem, and bisimulation semantics can be used as usual. Our transition systems, however, incorporate timing information, and this enables us to express concurrency: merely adding timing appropriately generalises transition systems to asynchronous transition systems, showing that time gives a link between true concurrenc...
Reading Many Variables in One Atomic Operation: Solutions With Linear or Sublinear Complexity
 IN PROCEEDINGS OF THE 5TH INTERNATIONAL WORKSHOP ON DISTRIBUTED ALGORITHMS
, 1991
Abstract

Cited by 25 (1 self)
We address the problem of reading more than one variable (component) X1, ..., Xc, all in one atomic operation, by only one process, called the reader, while each of these variables is being written by a set of writers. All operations (i.e., both reads and writes) are assumed to be totally asynchronous and wait-free. For this problem, only algorithms that require at best quadratic time and space complexity can be derived from the existing literature (the time complexity of a construction is the number of sub-operations of a high-level operation, and its space complexity is the number of atomic shared variables it needs). In this paper, we provide a deterministic protocol which has linear (in the number of processes) space complexity, linear time complexity for a read operation, and constant time complexity for a write. Our solution does not make use of time-stamps. Rather, it is the memory location where a write writes that differentiates it from the other writes. Also, ...
Optimal Multi-Writer Multi-Reader Atomic Register
 In Proceedings of the 11th ACM Symposium on Principles of Distributed Computing
, 1992
Abstract

Cited by 23 (0 self)
This paper addresses the wide gap in space complexity of atomic, multi-writer, multi-reader register implementations. While the space complexity of all previous implementations is linear, the lower bounds are logarithmic. We present two implementations which close this gap: the first implementation uses multi-reader physical registers while the second uses single-reader physical registers. Both implementations are optimal with respect to the two most important complexity criteria: their space complexity is logarithmic and their time complexity is linear. 1991 Mathematics Subject Classification: 68M10, 68Q22, 68Q25. CR Categories: B.3.2, B.4.3, D.4.1, D.4.4. Keywords and Phrases: Shared Register, Concurrent Reading and Writing, Atomicity, Multi-writer Register. Note: This work is partially supported by NWO through NFI Project ALADDIN under Contract number NF 62376. A preliminary version of this paper was presented in the 11th Annual ACM Symposium on Principles of Distributed Computing, August 1992, Vancouver, Canada.
On Interprocess Communication and the Implementation of Multi-Writer Atomic Registers
, 1995
Abstract

Cited by 17 (7 self)
Two protocols for implementing n-writer m-reader atomic registers with 1-writer m-reader atomic registers are described. In order to give complete proofs, a theory of interprocess communication is presented first. The correctness of a protocol that implements an atomic register is proved here in two stages: (1) a formulation of higher-level specifications and a proof that the protocol satisfies these specifications.
Bounded Concurrent Time-Stamping
 SIAM JOURNAL ON COMPUTING
, 1997
Abstract

Cited by 16 (1 self)
We introduce concurrent time-stamping, a paradigm that allows processes to temporally order concurrent events in an asynchronous shared-memory system. Concurrent time-stamp systems are powerful tools for concurrency control, serving as the basis for solutions to coordination problems such as mutual exclusion, l-exclusion, randomized consensus, and multi-writer multi-reader atomic registers. Unfortunately, all previously known methods for implementing concurrent time-stamp systems have been theoretically unsatisfying since they require unbounded-size time-stamps, in other words, unbounded-size memory. This work presents the first bounded implementation of a concurrent time-stamp system, providing a modular unbounded-to-bounded transformation of the simple unbounded solutions to problems such as those mentioned above. It allows solutions to two formerly open problems, the bounded-probabilistic consensus problem of Abrahamson and the fifo l-exclusion problem of Fischer, Lynch, Burns and...
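The "simple unbounded solution" this abstract starts from can be sketched as follows. This is the standard unbounded labeling scheme under assumed names (UnboundedTimestamps, label, precedes are illustrative): to label a new event, read all current labels and write one larger; break ties by process id. The paper's contribution, replacing these ever-growing integers with bounded labels, is not reproduced here.

```python
# Sketch of an *unbounded* concurrent time-stamp scheme: labels are
# integers that grow without bound, which is exactly the theoretical
# drawback the bounded construction removes.

class UnboundedTimestamps:
    def __init__(self, n):
        self.labels = [0] * n  # one shared label register per process

    def label(self, pid):
        # Read all labels, write max + 1 into our own register, and
        # return (label, pid); the pid breaks ties between processes
        # that read the same maximum concurrently.
        self.labels[pid] = max(self.labels) + 1
        return (self.labels[pid], pid)

    def precedes(self, ts1, ts2):
        # Lexicographic order on (label, pid) totally orders events.
        return ts1 < ts2

ts = UnboundedTimestamps(2)
a = ts.label(0)   # (1, 0)
b = ts.label(1)   # (2, 1)
print(ts.precedes(a, b))  # True
```

Because labels only grow, the ordering is consistent with real time, but the registers need unboundedly many bits; the bounded construction achieves the same ordering guarantees with labels drawn from a finite domain.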