Results 1-10 of 15
How to Share Concurrent Wait-Free Variables
, 1995
Cited by 44 (11 self)
Sharing data between multiple asynchronous users, each of which can atomically read and write the data, is a feature which may help to increase the amount of parallelism in distributed systems. An algorithm implementing this feature is presented. The main construction of an n-user atomic variable directly from single-writer, single-reader atomic variables uses O(n) control bits and O(n) accesses per Read/Write, running in O(1) parallel time.
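As a hypothetical illustration of the interface such a construction provides, here is the classic unbounded multi-writer register built from one single-writer cell per user, labelled with (timestamp, id) pairs. The paper's contribution is a bounded construction using O(n) control bits; this sequential Python sketch does not attempt to reproduce it, and all names are illustrative.

```python
# Sketch: an *unbounded* multi-writer atomic variable from single-writer
# cells. Each cell holds (timestamp, writer_id, value); the largest label
# wins. This illustrates the interface only, not the paper's bounded
# algorithm, and it is a sequential simulation, not a concurrent one.

class MultiWriterRegister:
    def __init__(self, n):
        # One single-writer cell per user.
        self.cells = [(0, i, None) for i in range(n)]

    def read(self, i):
        # Collect all cells; the value with the largest (timestamp, id)
        # label is the current value. Labels are unique, so max() never
        # falls through to comparing values.
        ts, wid, val = max(self.cells)
        return val

    def write(self, i, value):
        # Read all cells, pick a label later than any seen, write own cell.
        max_ts = max(ts for ts, _, _ in self.cells)
        self.cells[i] = (max_ts + 1, i, value)

r = MultiWriterRegister(3)
r.write(0, "a")
r.write(2, "b")
assert r.read(1) == "b"   # the later write is the one every reader sees
```

The timestamps grow without bound, which is exactly the cost a bounded construction like the one described above avoids.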
Time-Lapse Snapshots
 Proceedings of Israel Symposium on the Theory of Computing and Systems
, 1994
Cited by 28 (9 self)
A snapshot scan algorithm takes an "instantaneous" picture of a region of shared memory that may be updated by concurrent processes. Many complex shared memory algorithms can be greatly simplified by structuring them around the snapshot scan abstraction. Unfortunately, the substantial decrease in conceptual complexity is quite often counterbalanced by an increase in computational complexity. In this paper, we introduce the notion of a weak snapshot scan, a slightly weaker primitive that has a more efficient implementation. We propose the following methodology for using this abstraction: first, design and verify an algorithm using the more powerful snapshot scan, and second, replace the more powerful but less efficient snapshot with the weaker but more efficient snapshot, and show that the weaker abstraction nevertheless suffices to ensure the correctness of the enclosing algorithm. We give two examples of algorithms whose performance can be enhanced while retaining a simple m...
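The snapshot scan this paper weakens can be sketched with the standard "double collect" idea: keep re-reading all segments until two consecutive collects agree, at which point the collect is a consistent picture. A sequential Python sketch with illustrative names; sequence numbers stand in for the handshaking a truly wait-free implementation needs:

```python
# Sketch of a "double collect" snapshot scan. Each segment is a
# (seqno, value) pair; updates bump the seqno so that a scan can tell
# whether anything changed between its two collects. Sequential simulation
# only; names are illustrative, not from the paper.

def collect(segments):
    # One pass reading every segment.
    return [tuple(seg) for seg in segments]

def snapshot_scan(segments):
    previous = collect(segments)
    while True:
        current = collect(segments)
        if current == previous:            # nothing changed in between:
            return [value for _, value in current]   # a valid snapshot
        previous = current

def update(segments, i, value):
    seqno, _ = segments[i]
    segments[i] = (seqno + 1, value)       # bump seqno so scans notice

segments = [(0, None)] * 3
update(segments, 0, 10)
update(segments, 2, 30)
assert snapshot_scan(segments) == [10, None, 30]
```

In a real concurrent execution a scan can be starved by updates forever, which is one reason weaker but cheaper primitives like the weak snapshot scan above are attractive.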
Spreading Rumors Rapidly Despite an Adversary
 J. ALGORITHMS
, 1998
Cited by 15 (4 self)
In the collect problem [32], n processors in a shared-memory system must each learn the values of n registers. We give a randomized algorithm that solves the collect problem in O(n log^3 n) total read and write operations with high probability, even if timing is under the control of a content-oblivious adversary (a slight weakening of the usual adaptive adversary). This improves on both the trivial upper bound of O(n^2) steps and the best previously known bound of O(n^{3/2} log n) steps, and is close to the lower bound of Ω(n log n) steps. Furthermore, we show how this algorithm can be used to obtain a multi-use cooperative collect protocol that is O(log^3 n)-competitive in the latency model of Ajtai et al. [3] and O(n^{1/2} log^{3/2} n)-competitive in the throughput model of Aspnes and Waarts [10]; in both cases the competitive ratios are within a polylogarithmic factor of optimal.
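For contrast, the trivial upper bound the abstract improves on is just every process reading every register, n^2 reads in total. A sketch that counts the reads (names are illustrative):

```python
# Sketch of the *trivial* collect: each of the n processes reads all n
# registers, for n * n = O(n^2) total read operations. Sequential
# simulation; names are illustrative.

def trivial_collect(registers, n):
    reads = 0
    views = []
    for _ in range(n):              # each of the n processes...
        view = []
        for r in registers:         # ...reads every register
            view.append(r)
            reads += 1
        views.append(view)
    return views, reads

registers = list(range(8))
views, reads = trivial_collect(registers, 8)
assert reads == 8 * 8               # exactly n^2 reads
assert all(v == registers for v in views)
```

The randomized algorithm described above lets processes share their views instead of each reading everything, cutting the total work to O(n log^3 n) with high probability.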
Bounded Concurrent Timestamp Systems Using Vector Clocks
 J. ACM
, 2002
Cited by 14 (2 self)
Shared registers are basic objects used as communication mediums in asynchronous concurrent computation. A concurrent timestamp system is a higher-typed communication object, and has been shown to be a powerful tool to solve many concurrency control problems. It has turned out to be possible to construct such higher-typed objects from primitive lower-typed ones. The next step is to find efficient constructions. We propose a very efficient wait-free construction of bounded concurrent timestamp systems from 1-writer shared registers. This finalizes, corrects, and extends a preliminary bounded multi-writer construction proposed by the second author in 1986. That work partially initiated the current interest in wait-free concurrent objects, and introduced a notion of discrete vector clocks in distributed algorithms.
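The vector clocks mentioned here follow the standard pattern: one counter per process, bumped on a local event and merged component-wise on communication. A minimal sketch of that standard notion (not the paper's bounded construction):

```python
# Sketch of standard vector clocks: a list of per-process counters.
# tick() records a local event, merge() combines knowledge, and
# happened_before() tests the causal order the clocks encode.
# Illustrative only; the paper's contribution is a *bounded* system.

def tick(clock, i):
    clock = clock[:]            # copy, then bump own entry
    clock[i] += 1
    return clock

def merge(a, b):
    # Component-wise maximum: everything either side has seen.
    return [max(x, y) for x, y in zip(a, b)]

def happened_before(a, b):
    # a causally precedes b iff a <= b pointwise and a != b.
    return all(x <= y for x, y in zip(a, b)) and a != b

c0 = tick([0, 0, 0], 0)              # event at process 0 -> [1, 0, 0]
c1 = tick(merge([0, 0, 0], c0), 1)   # process 1 receives c0, then ticks
assert happened_before(c0, c1)
assert not happened_before(c1, c0)
```

Unbounded counters make the ordering easy; the difficulty the paper addresses is preserving it with a bounded label domain.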
Modular Competitiveness for Distributed Algorithms
 In Proc. 28th ACM Symp. on Theory of Computing (STOC)
, 2000
Cited by 13 (2 self)
We define a novel measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al. [3], which measures how quickly an algorithm can finish tasks that start at specified times. An important property of the throughput measure is that it is modular: we define a notion of relative competitiveness with the property that a k-relatively competitive implementation of an object T using a subroutine U, combined with an l-competitive implementation of U, gives a kl-competitive algorithm for ...
Efficient Bounded Timestamping Using Traceable Use Abstraction: Is Writer's Guessing Better Than Reader's
, 1993
Cited by 7 (1 self)
Traceable use is a helpful abstraction for recycling values in bounded wait-free systems. Several researchers have demonstrated the power of the traceable use abstraction in constructing concurrent timestamping systems, snapshot variables, and bounded round numbers. In this paper, we present an efficient implementation technique for the traceable use abstraction, which is then used to develop a new construction of concurrent timestamping systems. This new construction is much simpler, and in fact better, than the other traceable-use-based construction in the literature. The new implementation shows that sometimes the writer's guessing is better than the reader's explicit telling.
A Bounded First-In, First-Enabled Solution to the ℓ-Exclusion Problem
 ACM Transactions on Programming Languages and Systems
, 1990
Cited by 4 (0 self)
This paper presents a solution to the first-come, first-enabled ℓ-exclusion problem of [?]. Unlike the solution in [?], this solution does not use powerful read-modify-write synchronization primitives, and requires only bounded shared memory. Use of the concurrent timestamp system of [?] is key in solving the problem within bounded shared memory. Categories and Subject Descriptors: D.4.1 [Operating Systems]: Process Management - Mutual
The space complexity of unbounded timestamps
 In: Proc. 21st International Symposium on Distributed Computing
, 2007
Cited by 3 (0 self)
The timestamp problem captures a fundamental aspect of asynchronous distributed computing. It allows processes to label events throughout the system with timestamps that provide information about the real-time ordering of those events. We consider the space complexity of wait-free implementations of timestamps from shared read-write registers in a system of n processes. We prove an Ω(√n) lower bound on the number of registers required. If the timestamps are elements of a nowhere dense set, for example the integers, we prove a stronger, and tight, lower bound of n. However, if timestamps are not from a nowhere dense set, this bound can be beaten; we give an algorithm that uses n - 1 (single-writer) registers. We also consider the special case of anonymous algorithms, where processes do not have unique identifiers. We prove anonymous timestamp algorithms require n registers. We give an algorithm to prove that this lower bound is tight. This is the first anonymous algorithm that uses a finite number of registers. Although this algorithm is wait-free, its step complexity is not bounded. We also present an algorithm that uses O(n^2) registers and has bounded step complexity.
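The natural unbounded algorithm this line of work measures itself against uses one single-writer register per process: read everything, write max + 1 into your own. A sequential sketch, with illustrative names:

```python
# Sketch of the simple unbounded timestamp algorithm: n single-writer
# registers, one per process. A new timestamp is one more than the largest
# value visible anywhere. Labels grow without bound, which is what the
# space lower bounds above are about. Sequential simulation only.

def get_timestamp(registers, i):
    ts = max(registers) + 1     # strictly later than anything yet seen
    registers[i] = ts           # publish in process i's own register
    return ts

registers = [0, 0, 0]
t1 = get_timestamp(registers, 0)
t2 = get_timestamp(registers, 1)
assert t1 < t2                  # a later request gets a larger timestamp
```

Because the integers are nowhere dense, this scheme needs all n registers by the paper's tight lower bound; the n - 1 register algorithm mentioned above must use a denser label set.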
Self-Stabilizing Timestamps
 Theoretical Computer Science
, 2001
Cited by 2 (1 self)
The problem of implementing self-stabilizing timestamps with bounded values is investigated and a solution is found, which is applied to the ℓ-exclusion problem and to the Multi-writer Atomic Register problem. Thus we get self-stabilizing solutions to these two well-known problems. A new type of weak timestamps is identified here, and some evidence is brought to show its usefulness.

1 Preface. Messages are often timestamped. In a fax, the timestamp includes the date and exact time of the day, and in a book only the publication year, but in all cases this information guides the reader in choosing and processing the data. The antiquarian may choose the oldest book, and the student the newest edition, but in the general timestamp protocol the reader (called scanner) returns all the messages in their issuing order. [1] Timestamps may appear in conjunction with messages ("timestamped messages"), but timestamps may also appear alone in pure form, for example as numbers distributed to customers waiting for a certain service. We shall deal here with timestamped messages, which are clearly more general, since by setting their data field to the null value pure timestamps can be derived. Two well-known problems will accompany our discussion: the ℓ-exclusion problem to illustrate pure timestamps, and the multiple-writer atomic register problem to illustrate timestamped messages. Another distinction is between unbounded timestamps (such as the natural numbers) and bounded timestamps, which should achieve the same effect but with a bounded set of values. The classical Bakery Algorithm of Lamport (a ...

[1] That the use of timestamps is old and natural is illustrated by ancient ostracons (about 800 BC) from Samaria which show how offerings of wine and oil were marked by date, place of ori...
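The "numbers distributed to customers" example of a pure timestamp is essentially the doorway step of Lamport's Bakery Algorithm: take a ticket one larger than any you see, and break ties by process id. A sequential sketch (names illustrative):

```python
# Sketch of bakery-style pure timestamps: each process takes a ticket
# number one larger than the maximum it observes; (number, id) pairs give
# a total order, with process ids breaking ties. Sequential simulation of
# the doorway step only, not the full mutual-exclusion protocol.

def take_ticket(tickets, i):
    tickets[i] = 1 + max(tickets)      # the bakery "doorway" step
    return (tickets[i], i)             # (number, id): id breaks ties

tickets = [0, 0, 0]
a = take_ticket(tickets, 0)
b = take_ticket(tickets, 1)
assert a < b                           # service order follows ticket order
```

These numbers are unbounded, which is exactly the problem that bounded and self-stabilizing timestamp constructions like the one above set out to solve.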
A Modular Measure of Competitive Performance for Distributed Algorithms
, 1995
Cited by 1 (1 self)
We define a novel measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai, Aspnes, Dwork, and Waarts [4], which measures how quickly an algorithm can finish tasks that start at specified times. An advantage of the throughput measure is that it is modular: we define a notion of relative competitiveness with the property that a k-relatively competitive implementation of an object T using a subroutine U, combined with an l-competitive implementation of U, gives a kl-competitive algorithm for T. We prove the throughput-competitiveness of an algorithm for a fundamental building block of many well-known distributed algorithms. This permits a straightforward construction of competitive versions of these algorithms; to our knowledge these are the first examples of algorithms obtained through a general method for modular construc...