Results 1-10 of 72
Composite Registers
 Distributed Computing
, 1993
Abstract

Cited by 111 (7 self)
We introduce a shared data object, called a composite register, that generalizes the notion of an atomic register. A composite register is an array-like shared data object that is partitioned into a number of components. An operation of a composite register either writes a value to a single component or reads the values of all components. A composite register reduces to an ordinary atomic register when there is only one component. In this paper, we show that multi-reader, single-writer atomic registers can be used to implement a composite register in which there is only one writer per component. In a related paper, we show how to use the composite register construction of this paper to implement a composite register with multiple writers per component. These two constructions show that it is possible to implement a shared memory that can be read in its entirety in a single snapshot operation, without using mutual exclusion. Keywords: atomicity, atomic register, composite register, conc...
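The write-one / read-all semantics described above can be pinned down with a small interface sketch. The class below is illustrative only, and the names are mine, not the paper's: it uses a lock purely to make the semantics obvious, whereas the point of the paper's construction is to achieve the same behavior wait-free, without mutual exclusion.

```python
import threading

class CompositeRegister:
    """Sketch of composite-register *semantics* only (lock-based stand-in,
    not the paper's wait-free construction)."""

    def __init__(self, num_components, initial=0):
        self._components = [initial] * num_components
        self._lock = threading.Lock()  # stands in for the wait-free protocol

    def write(self, i, value):
        # An operation writes a value to a single component...
        with self._lock:
            self._components[i] = value

    def scan(self):
        # ...or reads the values of all components in one snapshot.
        with self._lock:
            return tuple(self._components)

reg = CompositeRegister(3)
reg.write(0, 7)
reg.write(2, 9)
print(reg.scan())  # (7, 0, 9)
```

With one component, `write`/`scan` degenerate to the write/read of an ordinary atomic register, matching the reduction the abstract mentions.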
The Weakest Failure Detectors to Solve Certain Fundamental Problems in Distributed Computing (Extended Abstract)
, 2004
Abstract

Cited by 45 (11 self)
Carole Delporte-Gallet cd@liafa.jussieu.fr Hugues Fauconnier hf@liafa.jussieu.fr Rachid Guerraoui rachid.guerraoui@epfl.ch Vassos Hadzilacos vassos@cs.toronto.edu Petr Kouznetsov petr.kouznetsov@epfl.ch Sam Toueg sam@cs.toronto.edu ABSTRACT We determine the weakest failure detectors to solve several fundamental problems in distributed message-passing systems, for all environments, i.e., regardless of the number and timing of crashes. The problems that we consider are: implementing an atomic register, solving consensus, solving quittable consensus (a variant of consensus in which processes have the option to decide 'quit' if a failure occurs), and solving non-blocking atomic commit.
Bounded concurrent time-stamp systems are constructible
 In Proc. 21st ACM Symp. on Theory of Computing. ACM SIGACT, ACM
, 1989
Abstract

Cited by 45 (6 self)
Concurrent time stamping is at the heart of solutions to some of the most fundamental problems in distributed computing. Based on concurrent time-stamp systems, elegant and simple solutions to core problems such as fcfs mutual exclusion, construction of a multi-reader multi-writer atomic register, probabilistic consensus, ... were developed. Unfortunately, the only known implementation of a concurrent time-stamp system has been theoretically unsatisfying, since it requires unbounded-size timestamps, in other words, unbounded memory. Not knowing if bounded concurrent time-stamp systems are at all constructible, researchers were led to constructing complicated problem-specific solutions to replace the simple unbounded ones. In this work, for the first time, a bounded implementation of a concurrent time-stamp system is presented. It provides a modular unbounded-to-bounded transformation of the simple unbounded solutions to problems such as the above. It allows solutions to two formerly open problems: the bounded-probabilistic-consensus problem of Abrahamson [A88] and the fifo-ℓ-exclusion problem
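The "simple unbounded solutions" the abstract contrasts with are easy to state: each labeling operation takes an integer label larger than any label seen so far, and events are ordered by (label, process id). A minimal single-process-at-a-time sketch of that idea follows; the names are mine, and the paper's contribution is precisely to replace the unbounded integers with bounded labels.

```python
class UnboundedTimestampSystem:
    """Sketch of the *unbounded* concurrent time-stamp idea: labels grow
    without bound, which is exactly what the paper's bounded construction
    avoids."""

    def __init__(self):
        self._labels = {}  # process id -> most recent label

    def label(self, pid):
        # Take a label strictly larger than any yet seen (unbounded growth).
        new = max(self._labels.values(), default=0) + 1
        self._labels[pid] = new
        return new

    def order(self):
        # Total order on the processes' most recent labelings,
        # breaking ties by process id.
        return sorted(self._labels, key=lambda p: (self._labels[p], p))

ts = UnboundedTimestampSystem()
ts.label("p")
ts.label("q")
print(ts.order())  # ['p', 'q']
```

Repeated relabeling makes the unbounded-memory problem visible: every call returns a strictly larger integer, so the label space never stops growing.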
How to Share Concurrent Wait-Free Variables
, 1995
Abstract

Cited by 44 (8 self)
Sharing data between multiple asynchronous users, each of which can atomically read and write the data, is a feature which may help to increase the amount of parallelism in distributed systems. An algorithm implementing this feature is presented. The main construction of an n-user atomic variable directly from single-writer, single-reader atomic variables uses O(n) control bits and O(n) accesses per Read/Write, running in O(1) parallel time.
A Theory of Competitive Analysis for Distributed Algorithms
, 1994
Abstract

Cited by 32 (5 self)
We introduce a theory of competitive analysis for distributed algorithms. The first steps in this direction were made in the seminal papers of Bartal, Fiat, and Rabani [17], and of Awerbuch, Kutten, and Peleg [15], in the context of data management and job scheduling. In these papers, as well as in other subsequent work [14, 4, 18], the cost of a distributed algorithm is compared to the cost of an optimal global-control algorithm. Here we introduce a more refined notion of competitiveness for distributed algorithms, one that reflects the performance of distributed algorithms more accurately. In particular, our theory allows one to compare the cost of a distributed online algorithm to the cost of an optimal distributed algorithm. We demonstrate our method by studying the ...
The Elusive Atomic Register
, 1992
Abstract

Cited by 32 (4 self)
We present a construction of a single-writer, multiple-reader atomic register from single-writer, single-reader atomic registers. The complexity of our construction is asymptotically optimal; O(M² + MN) shared single-writer, single-reader safe bits are required to construct a single-writer, M-reader, N-bit atomic register.
Time-Lapse Snapshots
 Proceedings of Israel Symposium on the Theory of Computing and Systems
, 1994
Abstract

Cited by 28 (7 self)
A snapshot scan algorithm takes an "instantaneous" picture of a region of shared memory that may be updated by concurrent processes. Many complex shared memory algorithms can be greatly simplified by structuring them around the snapshot scan abstraction. Unfortunately, the substantial decrease in conceptual complexity is quite often counterbalanced by an increase in computational complexity. In this paper, we introduce the notion of a weak snapshot scan, a slightly weaker primitive that has a more efficient implementation. We propose the following methodology for using this abstraction: first, design and verify an algorithm using the more powerful snapshot scan, and second, replace the more powerful but less efficient snapshot with the weaker but more efficient snapshot, and show that the weaker abstraction nevertheless suffices to ensure the correctness of the enclosing algorithm. We give two examples of algorithms whose performance can be enhanced while retaining a simple m...
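A classic building block behind snapshot scans is the double collect, sketched below; this is the standard textbook technique, not necessarily the weak-snapshot algorithm of this paper, and the function name is mine. Note that returning on two equal collects is only sound when stored values carry version stamps so that equal collects imply no interleaved update; with no concurrent writers, as in this single-threaded sketch, the first pass already succeeds.

```python
def double_collect_scan(read_component, n):
    """Repeatedly read all n components twice; if the two collects agree,
    return that collect as the snapshot (assumes versioned values in a
    genuinely concurrent setting)."""
    while True:
        first = [read_component(i) for i in range(n)]
        second = [read_component(i) for i in range(n)]
        if first == second:
            return first

memory = [10, 20, 30]
print(double_collect_scan(lambda i: memory[i], 3))  # [10, 20, 30]
```

The computational cost the abstract alludes to is visible here: a scan may retry indefinitely under heavy update traffic, which is the kind of overhead the weaker snapshot primitive is designed to reduce.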
Concurrent Timestamping Made Simple
 Proceedings of Israel Symposium on Theory of Computing and Systems
, 1995
Abstract

Cited by 25 (1 self)
Concurrent Timestamp Systems (ctss) allow processes to temporally order concurrent events in an asynchronous shared memory system, a powerful tool for concurrency control, serving as the basis for solutions to coordination problems such as mutual exclusion, ℓ-exclusion, randomized consensus, and multi-writer multi-reader atomic registers. Solutions to these problems all use an "unbounded number" based concurrent timestamp system (uctss), a construction which is as simple to use as it is to understand. A bounded "black-box" replacement of uctss would imply equally simple bounded solutions to most of these extensively researched problems. Unfortunately, while all known applications use uctss, all existing solution algorithms are only proven to implement the Dolev-Shavit ctss axioms, which have been widely criticized as "hard-to-use." While it is easy to show that a uctss implements the ctss axioms, there is no proof that a system meeting the ctss axioms implements uctss. Thus, the pro...
A Transformation of Self-Stabilizing Serial Model Programs for Asynchronous Parallel Computing Environments
, 1998
Abstract

Cited by 24 (4 self)
In 1974, Dijkstra presented the notion of self-stabilization in the context of distributed computing [5]. A system is self-stabilizing (SS) with respect to a set of legitimate states if regardless of its initial state, the system is guaranteed to arrive at a legitimate state in a finite number of execution steps and will never leave legitimate states after that. Thus, an SS system need not be initialized and is able to recover from transient failures by itself. Many SS programs have been developed for different models with various assumptions about their execution environments. These assumptions include the semantics of concurrency and communication primitives. Among these models, a serial model (C-daemon model) has the strongest assumptions. In the serial model, an atomic execution step consists of (1) a read substep, which reads the states of its neighbor processes; followed by (2) a write substep, which modifies its own state (based on the neighbors' current states and its own state). The communication and concurrency semantics are such that each process can always see the current states of its neighbors, and only one process at a time executes an atomic step.
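The serial-model step described above is exactly the setting of Dijkstra's K-state token ring from [5], which makes a convenient concrete example: under a central daemon that repeatedly lets one privileged machine take a read-then-write atomic step, the ring converges from any initial state to configurations with exactly one privilege. A small sketch, with function names of my own choosing:

```python
def privileged(states):
    # Machine 0 holds a privilege when its state equals its left
    # neighbour's; every other machine i when its state differs from
    # machine i-1's. At least one machine is always privileged.
    privs = [0] if states[0] == states[-1] else []
    privs += [i for i in range(1, len(states)) if states[i] != states[i - 1]]
    return privs

def step(states, K):
    # One serial-model atomic step: the daemon picks one privileged
    # machine (here, the lowest index), which reads its left neighbour
    # and rewrites its own state.
    i = privileged(states)[0]
    if i == 0:
        states[0] = (states[0] + 1) % K
    else:
        states[i] = states[i - 1]

states = [2, 1, 3, 1]          # arbitrary (illegitimate) initial state
for _ in range(10):
    step(states, 5)            # K = 5, more states than machines
print(len(privileged(states)))  # 1: exactly one privilege circulates
```

After stabilization the single privilege keeps circulating around the ring, which is why the legitimate states also solve mutual exclusion.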
Optimal Multi-Writer Multi-Reader Atomic Register
 In Proceedings of the 11th ACM Symposium on Principles of Distributed Computing
, 1992
Abstract

Cited by 22 (0 self)
This paper addresses the wide gap in space complexity of atomic, multi-writer, multi-reader register implementations. While the space complexity of all previous implementations is linear, the lower bounds are logarithmic. We present two implementations which close this gap: the first implementation uses multi-reader physical registers while the second uses single-reader physical registers. Both implementations are optimal with respect to the two most important complexity criteria: their space complexity is logarithmic and their time complexity is linear. 1991 Mathematics Subject Classification: 68M10, 68Q22, 68Q25. CR Categories: B.3.2, B.4.3, D.4.1, D.4.4. Keywords and Phrases: Shared Register, Concurrent Reading and Writing, Atomicity, Multi-writer Register. Note: This work is partially supported by NWO through NFI Project ALADDIN under Contract number NF 62376. A preliminary version of this paper was presented in the 11th Annual ACM Symposium on Principles of Distributed Computing, August 1992, Vancouver, Canada.