Results 1-10 of 19
How to Share Concurrent Wait-Free Variables
, 1995
Cited by 44 (11 self)
Abstract:
Sharing data between multiple asynchronous users, each of which can atomically read and write the data, is a feature which may help to increase the amount of parallelism in distributed systems. An algorithm implementing this feature is presented. The main construction of an n-user atomic variable directly from single-writer, single-reader atomic variables uses O(n) control bits and O(n) accesses per Read/Write, running in O(1) parallel time.
Time-Lapse Snapshots
 Proceedings of Israel Symposium on the Theory of Computing and Systems
, 1994
Cited by 28 (8 self)
Abstract:
A snapshot scan algorithm takes an "instantaneous" picture of a region of shared memory that may be updated by concurrent processes. Many complex shared memory algorithms can be greatly simplified by structuring them around the snapshot scan abstraction. Unfortunately, the substantial decrease in conceptual complexity is quite often counterbalanced by an increase in computational complexity. In this paper, we introduce the notion of a weak snapshot scan, a slightly weaker primitive that has a more efficient implementation. We propose the following methodology for using this abstraction: first, design and verify an algorithm using the more powerful snapshot scan, and second, replace the more powerful but less efficient snapshot with the weaker but more efficient snapshot, and show that the weaker abstraction nevertheless suffices to ensure the correctness of the enclosing algorithm. We give two examples of algorithms whose performance can be enhanced while retaining a simple m...
Bounded Concurrent Time-Stamping
 SIAM JOURNAL ON COMPUTING
, 1997
Cited by 16 (1 self)
Abstract:
We introduce concurrent time-stamping, a paradigm that allows processes to temporally order concurrent events in an asynchronous shared-memory system. Concurrent time-stamp systems are powerful tools for concurrency control, serving as the basis for solutions to coordination problems such as mutual exclusion, l-exclusion, randomized consensus, and multi-writer multi-reader atomic registers. Unfortunately, all previously known methods for implementing concurrent time-stamp systems have been theoretically unsatisfying since they require unbounded-size time stamps, in other words, unbounded-size memory. This work presents the first bounded implementation of a concurrent time-stamp system, providing a modular unbounded-to-bounded transformation of the simple unbounded solutions to problems such as those mentioned above. It allows solutions to two formerly open problems, the bounded-probabilistic consensus problem of Abrahamson and the fifo-l-exclusion problem of Fischer, Lynch, Burns and...
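To make concrete what the "simple unbounded solutions" above look like, here is a minimal single-threaded sketch of the standard unbounded labeling scheme that such a transformation makes bounded: each process stamps an event with a label one larger than the maximum label it can read, breaking ties by process id. Class and method names are illustrative, not from the paper.

```python
# Sketch of an *unbounded* concurrent time-stamp scheme (illustrative only):
# labels grow without bound, which is exactly the drawback the paper removes.

class UnboundedTimestampSystem:
    def __init__(self, n):
        self.labels = [0] * n  # one single-writer register per process

    def label(self, pid):
        """Assign process `pid` a new time stamp: collect all labels, pick max + 1."""
        new = max(self.labels) + 1
        self.labels[pid] = new
        return (new, pid)

    @staticmethod
    def precedes(ts1, ts2):
        """Order two time stamps by label, breaking ties by process id."""
        return ts1 < ts2  # tuple comparison: (label, pid)

ts = UnboundedTimestampSystem(3)
a = ts.label(0)  # (1, 0)
b = ts.label(1)  # (2, 1)
assert UnboundedTimestampSystem.precedes(a, b)
```

The ordering relies on Python's lexicographic tuple comparison, so a process id never overrides a strictly larger label.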
Spreading Rumors Rapidly Despite an Adversary
 J. ALGORITHMS
, 1998
Cited by 15 (4 self)
Abstract:
In the collect problem [32], n processors in a shared-memory system must each learn the values of n registers. We give a randomized algorithm that solves the collect problem in O(n log^3 n) total read and write operations with high probability, even if timing is under the control of a content-oblivious adversary (a slight weakening of the usual adaptive adversary). This improves on both the trivial upper bound of O(n^2) steps and the best previously known bound of O(n^{3/2} log n) steps, and is close to the lower bound of Ω(n log n) steps. Furthermore, we show how this algorithm can be used to obtain a multi-use cooperative collect protocol that is O(log^3 n)-competitive in the latency model of Ajtai et al. [3] and O(n^{1/2} log^{3/2} n)-competitive in the throughput model of Aspnes and Waarts [10]; in both cases the competitive ratios are within a polylogarithmic factor of optimal.
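The trivial O(n^2) upper bound mentioned above comes from each process reading every register itself. A small sketch of that baseline (names are illustrative; the paper's randomized algorithm beats this by letting processes share work):

```python
# Baseline for the collect problem: each of the n processes reads all n
# registers on its own, for n * n = n^2 total read operations.

def trivial_collect(registers):
    """One process's collect: read every register once, O(n) reads."""
    return [r for r in registers]  # one read per register

n = 4
registers = [10, 20, 30, 40]  # shared single-writer registers, one per process
views = [trivial_collect(registers) for _ in range(n)]  # n independent collects
# n processes x n reads each = 16 total reads for n = 4
assert all(v == [10, 20, 30, 40] for v in views)
```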
Bounded Concurrent Timestamp Systems Using Vector Clocks
 J. ACM
, 2002
Cited by 14 (2 self)
Abstract:
Shared registers are basic objects used as communication mediums in asynchronous concurrent computation. A concurrent timestamp system is a higher typed communication object, and has been shown to be a powerful tool to solve many concurrency control problems. It has turned out to be possible to construct such higher typed objects from primitive lower typed ones. The next step is to find efficient constructions. We propose a very efficient wait-free construction of bounded concurrent timestamp systems from 1-writer shared registers. This finalizes, corrects, and extends a preliminary bounded multi-writer construction proposed by the second author in 1986. That work partially initiated the current interest in wait-free concurrent objects, and introduced a notion of discrete vector clocks in distributed algorithms.
Modular Competitiveness for Distributed Algorithms
 In Proc. 28th ACM Symp. on Theory of Computing (STOC)
, 2000
Cited by 13 (2 self)
Abstract:
We define a novel measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al. [3], which measures how quickly an algorithm can finish tasks that start at specified times. An important property of the throughput measure is that it is modular: we define a notion of relative competitiveness with the property that a k-relatively competitive implementation of an object T using a subroutine U, combined with an l-competitive implementation of U, gives a kl-competitive algorithm for ...
Using Simulation Techniques to Prove Timing Properties
, 1995
Cited by 12 (1 self)
Abstract:
This thesis presents a methodology based on simulations and invariants for proving timing properties of real-time, distributed systems. This methodology is used to prove tight time bounds for two systems, a leader election protocol for a ring of processes, and Fischer's timing-based mutual exclusion algorithm. A framework for verifying these proofs using the Larch tools is also developed, and the proof for Fischer's algorithm is checked within this framework. Many formal methods have been developed for proving the correctness of untimed distributed systems. However, real-time systems often have subtle timing dependencies that are difficult to analyze and reason about. Furthermore, for many real-time systems, correctness is insufficient; it is important to satisfy certain performance requirements. It is necessary, therefore, to extend the formal models and techniques to the timed setting. We use a timed automaton model, together with simulations which establish that one automaton impl...
Towards a practical snapshot algorithm
 Theoretical Computer Science
, 1995
Cited by 9 (2 self)
Abstract:
An atomic snapshot memory is an implementation of a multiple location shared memory that can be atomically read in its entirety without having to prevent concurrent writing. The design of wait-free implementations of atomic snapshot memories has been the subject of extensive theoretical research in recent years. This paper introduces the coordinated-collect algorithm, a novel wait-free atomic snapshot construction which we believe is a first step in taking snapshots from theory to practice. Unlike former algorithms, it uses currently available multiprocessor synchronization operations to provide an algorithm that has only O(1) update complexity and O(n) scan complexity, with very small constants. Empirical evidence collected on a simulated distributed shared-memory multiprocessor shows that coordinated-collect outperforms all known wait-free, lock-free, and locking algorithms in terms of overall throughput and latency.
Space-Optimal Multi-Writer Snapshot Objects Are Slow
 In Proceedings of the 21st Annual ACM Symposium on Principles of Distributed Computing
, 2002
Cited by 7 (4 self)
Abstract:
We consider the problem of wait-free implementation of a multi-writer snapshot object with m >= 2 components shared by n > m processes. It is known that this can be done using m multi-writer registers. We give a matching lower bound, slightly improving the previous space lower bound. The main focus of the paper, however, is on time complexity. The best known upper bound on the number of steps a process has to take to perform one operation of the snapshot is O(n). When m is much smaller than n, an implementation whose time complexity is a function of m rather than n would be better. We show that this cannot be achieved for any space-optimal implementation: we prove that Ω(n) steps are required to perform a SCAN operation in the worst case, even if m = 2. This significantly improves previous Ω(min(m, n)) lower bounds. Our proof also yields insight into the structure of any space-optimal implementation, showing that processes simulating the snapshot operations must access the registers in a very constrained way.
A Tight Time Lower Bound for Space-Optimal Implementations of Multi-Writer Snapshots
 In Proceedings of the 35th ACM Symposium on Theory of Computing
, 2003
Cited by 7 (4 self)
Abstract:
A snapshot object consists of a collection of m > 1 components, each capable of storing a value, shared by n processes in an asynchronous sharedmemory distributed system. It supports two operations: a process can UPDATE any individual component or atomically SCAN the entire collection to obtain the values of all the components. It is possible to implement a snapshot object using m registers so that each operation takes O(mn) time.
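The UPDATE/SCAN interface described above can be illustrated with a minimal single-threaded sketch based on the classic "double collect" idea: read all m components twice, and if nothing changed in between, the first collect is a valid snapshot. Updates attach a sequence number so changes are detectable. This is only an interface illustration (and double collect alone is lock-free, not wait-free); it is not the O(mn) register-based algorithm the abstract refers to. All names are illustrative.

```python
# Sketch of a snapshot object: UPDATE a component, or SCAN all of them
# atomically via double collect (two identical collects => valid snapshot).

class SnapshotObject:
    def __init__(self, m):
        self.components = [(0, None)] * m  # (sequence number, value) pairs

    def update(self, i, value):
        """UPDATE component i, bumping its sequence number to mark the change."""
        seq, _ = self.components[i]
        self.components[i] = (seq + 1, value)

    def scan(self):
        """SCAN: collect twice; if the collects agree, no update intervened."""
        while True:
            first = list(self.components)   # first collect
            second = list(self.components)  # second collect
            if first == second:
                return [v for _, v in first]

s = SnapshotObject(3)
s.update(0, "a")
s.update(2, "c")
assert s.scan() == ["a", None, "c"]
```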