Results 1 - 10 of 13
Concurrent Timestamping Made Simple
 Proceedings of Israel Symposium on Theory of Computing and Systems
, 1995
Abstract

Cited by 26 (1 self)
Concurrent Timestamp Systems (ctss) allow processes to temporally order concurrent events in an asynchronous shared memory system, a powerful tool for concurrency control, serving as the basis for solutions to coordination problems such as mutual exclusion, l-exclusion, randomized consensus, and multi-writer multi-reader atomic registers. Solutions to these problems all use an "unbounded number" based concurrent timestamp system (uctss), a construction which is as simple to use as it is to understand. A bounded "black-box" replacement of uctss would imply equally simple bounded solutions to most of these extensively researched problems. Unfortunately, while all known applications use uctss, all existing solution algorithms are only proven to implement the Dolev-Shavit ctss axioms, which have been widely criticized as "hard to use." While it is easy to show that a uctss implements the ctss axioms, there is no proof that a system meeting the ctss axioms implements uctss. Thus, the pro...
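The "unbounded number" construction the abstract refers to can be sketched in a few lines: to label a new event, a process reads every current label and writes one greater than the maximum, breaking ties between concurrent labels by process id. The class and method names below are illustrative, and the sketch is sequential; a real uctss reads each single-writer register individually, which is exactly why concurrent labelers may tie.

```python
# Minimal sketch of an unbounded concurrent timestamp system (uctss)
# as described in the abstract. Names (Uctss, label, order) are
# hypothetical, chosen for illustration only.

class Uctss:
    def __init__(self, n):
        self.labels = [0] * n  # one single-writer register per process

    def label(self, pid):
        """Assign process pid a new label: the maximum label seen, plus one."""
        self.labels[pid] = max(self.labels) + 1
        return self.labels[pid]

    def order(self, pid_a, pid_b):
        """Order the latest events of two processes; ties break by process id."""
        return (self.labels[pid_a], pid_a) < (self.labels[pid_b], pid_b)

ts = Uctss(3)
ts.label(0)            # process 0 labels first -> 1
ts.label(1)            # process 1 labels next  -> 2
assert ts.order(0, 1)  # process 0's event precedes process 1's
```

The simplicity of this construction is the abstract's point: the labels grow without bound, and the open problem is a bounded "black-box" replacement with the same interface.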
Work-competitive scheduling for cooperative computing with dynamic groups
 SIAM JOURNAL ON COMPUTING
, 2005
Abstract

Cited by 15 (4 self)
The problem of cooperatively performing a set of t tasks in a decentralized computing environment subject to failures is one of the fundamental problems in distributed computing. The setting with partitionable networks is especially challenging, as algorithmic solutions must accommodate the possibility that groups of processors become disconnected (and, perhaps, reconnected) during the computation. The efficiency of task-performing algorithms is often assessed in terms of work: the total number of tasks, counting multiplicities, performed by all of the processors during the computation. In general, the scenario where the processors are partitioned into g disconnected components causes any task-performing algorithm to have work Ω(t · g) even if each group of processors performs no more than the optimal number of Θ(t) tasks. Given that such pessimistic lower bounds apply to any scheduling algorithm, we pursue a competitive analysis. Specifically, this paper studies a simple randomized scheduling algorithm for p asynchronous processors, connected by a dynamically changing communication medium, to complete t known tasks. The performance of this algorithm is compared against that of an omniscient offline algorithm with full knowledge of the future changes in the communication medium. The paper describes a notion of computation width, which associates a natural number with a history of changes in the communication medium, and shows both upper and lower bounds on work-competitiveness in terms of this quantity. Specifically, it is shown that the simple randomized algorithm obtains the competitive ratio (1 + cw/e), where cw is the computation width and e is the base of the natural logarithm (e = 2.7182...); this competitive ratio is then shown to be tight.
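A common form of the randomized scheduling idea, sketched here under stated assumptions (this is the generic random-permutation approach, not the paper's exact algorithm or analysis), has each processor walk the task set in its own random order, skipping any task its connected group already knows is done; the shared `done` set models knowledge within one group.

```python
import random

def random_order_schedule(pid, tasks, done, rng):
    """Perform tasks in a random order, skipping tasks already known done.
    'done' stands in for the knowledge shared inside one connected group
    of processors; disconnected groups would each hold their own set,
    which is the source of the Omega(t * g) lower bound in the abstract.
    Illustrative sketch only."""
    order = list(tasks)
    rng.shuffle(order)
    work = 0
    for t in order:
        if t not in done:
            done.add(t)   # perform task t and record it
            work += 1
    return work

rng = random.Random(1)
done = set()                                    # one connected group
w0 = random_order_schedule(0, range(5), done, rng)
w1 = random_order_schedule(1, range(5), done, rng)
assert w0 == 5 and w1 == 0  # second processor repeats no work
```

With two disconnected groups (two separate `done` sets) both processors would perform all t tasks, matching the Ω(t · g) work bound the abstract cites.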
Towards a practical snapshot algorithm
 Theoretical Computer Science
, 1995
Abstract

Cited by 9 (2 self)
An atomic snapshot memory is an implementation of a multiple location shared memory that can be atomically read in its entirety without having to prevent concurrent writing. The design of wait-free implementations of atomic snapshot memories has been the subject of extensive theoretical research in recent years. This paper introduces the coordinated-collect algorithm, a novel wait-free atomic snapshot construction which we believe is a first step in taking snapshots from theory to practice. Unlike former algorithms, it uses currently available multiprocessor synchronization operations to provide an algorithm that has only O(1) update complexity and O(n) scan complexity, with very small constants. Empirical evidence collected on a simulated distributed shared-memory multiprocessor shows that coordinated-collect outperforms all known wait-free, lock-free, and locking algorithms in terms of overall throughput and latency.
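For context, the classic "double collect" scan illustrates what an atomic snapshot must achieve (this is the textbook lock-free construction, not the paper's coordinated-collect algorithm, which uses stronger synchronization primitives to reach the stated complexities): a scanner reads all segments twice and retries until the two reads agree, with per-segment sequence numbers exposing any interleaved write.

```python
class SnapshotMemory:
    """Illustrative snapshot via repeated double collect. Each segment
    stores a (sequence number, value) pair written by a single writer;
    a scan that sees two identical collects in a row has observed a
    consistent instantaneous state. Lock-free but not wait-free: a
    scanner can retry forever under continuous writing, which is one
    problem the paper's algorithm is designed to avoid."""

    def __init__(self, n):
        self.segs = [(0, None)] * n

    def update(self, i, value):
        seq, _ = self.segs[i]
        self.segs[i] = (seq + 1, value)   # single writer per segment

    def scan(self):
        while True:
            first = list(self.segs)
            second = list(self.segs)
            if first == second:           # no write interleaved
                return [v for _, v in second]

m = SnapshotMemory(3)
m.update(0, "a")
m.update(2, "c")
assert m.scan() == ["a", None, "c"]
```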
Self-Stabilizing l-Exclusion
, 2001
Abstract

Cited by 8 (2 self)
Our work presents a self-stabilizing solution to the l-exclusion problem. This problem is a well-known generalization of the mutual-exclusion problem in which up to l, but never more than l, processes are allowed simultaneously in their critical sections. Self-stabilization means that even when transient failures occur and some processes crash, the system finally resumes its regular and correct behavior. The model of communication assumed here is that of shared memory, in which processes use single-writer multiple-reader regular registers.
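The l-exclusion property itself, at most l processes inside their critical sections at once, can be demonstrated with a counting semaphore. This is only a minimal illustration of the property: it is neither self-stabilizing nor built from the regular read/write registers the paper assumes.

```python
import threading

# Illustration of the l-exclusion property only: at most L threads are
# in the critical section simultaneously. A counting semaphore gives
# this directly; the paper instead constructs a self-stabilizing
# solution from single-writer multiple-reader regular registers.
L = 2
sem = threading.Semaphore(L)
lock = threading.Lock()
inside = 0
peak = 0

def worker():
    global inside, peak
    with sem:                  # entry section: at most L threads pass
        with lock:
            inside += 1
            peak = max(peak, inside)
        # critical section work would happen here
        with lock:
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert 1 <= peak <= L          # never more than L inside at once
```

Self-stabilization adds the further requirement, not captured here, that the property re-establishes itself from an arbitrary corrupted state.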
ComputerAssisted Verification of an Algorithm for Concurrent Timestamps
 Formal Description Techniques IX: Theory, Applications, and Tools (FORTE/PSTV'96: Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols, and Protocol Specification, Testing, and Verification
, 1996
Abstract

Cited by 7 (4 self)
A formal representation and machine-checked proof are given for the Bounded Concurrent Timestamp (BCTS) algorithm of Dolev and Shavit. The proof uses invariant assertions and a forward simulation mapping to a corresponding Unbounded Concurrent Timestamp (UCTS) algorithm, following a strategy developed by Gawlick, Lynch, and Shavit. The proof was produced interactively, using the Larch Prover. Keywords: Verification, validation and testing; tools and tool support; Larch; input/output automata; concurrent timestamps. 1 INTRODUCTION In this paper, we describe a computer-assisted verification, using the Larch Prover (Garland and Guttag, 1991), of one of the most complicated algorithms in the distributed systems theory literature: the Bounded Concurrent Timestamp (BCTS) algorithm of Dolev and Shavit (1989). This algorithm runs in the single-writer, multi-reader, read/write shared memory model. The verified algorithm is a slight simplification, due to Gawlick, Lynch, and Shavit (1992), of t...
Collective asynchronous reading with polylogarithmic worst-case overhead
 in Proceedings, 36th ACM Symposium on Theory of Computing (STOC), 2004
Abstract

Cited by 4 (2 self)
The Collect problem for an asynchronous shared-memory system has the objective for the processors to learn all values of a collection of shared registers, while minimizing the total number of read and write operations. First abstracted by Saks, Shavit, and Woll [37], Collect is among the standard problems in distributed computing. The model consists of n asynchronous processes, each with a single-writer multi-reader register of polynomial capacity. The best previously known deterministic solution performs O(n^(3/2) log n) reads and writes, and it is due to Ajtai, Aspnes, Dwork, and Waarts [3]. This paper presents a new deterministic algorithm that performs O(n log^7 n) read/write operations, thus substantially improving the best previous upper bound. Using an approach based on epidemic rumor-spreading, the novelty of the new algorithm is in using a family of expander graphs and ensuring
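The problem interface is easy to state in code. In the baseline solution sketched below (names are illustrative), a collect simply reads all n registers, so each operation costs n reads; the paper's contribution is driving the total work of many concurrent collects down to O(n log^7 n) via expander-based rumor spreading, which this sketch does not attempt.

```python
# Baseline Collect: n single-writer multi-reader registers, and a
# collect operation that reads every one of them. Class and method
# names are hypothetical, for illustration of the problem only.
class CollectBoard:
    def __init__(self, n):
        self.regs = [None] * n

    def store(self, pid, value):
        self.regs[pid] = value      # one write to pid's own register

    def collect(self):
        return list(self.regs)      # n reads per collect

b = CollectBoard(4)
b.store(1, "x")
b.store(3, "y")
assert b.collect() == [None, "x", None, "y"]
```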
Cisco Systems
Abstract

Cited by 2 (1 self)
Abstract. We present a formal specification and analysis of a fault-tolerant DHCP algorithm, used to automatically configure certain host parameters in an IP network. Our algorithm uses ideas from an algorithm presented in [5], but is considerably simpler and at the same time more structured and rigorous. We specify the assumptions and behavior of our algorithm as traces of Timed Input/Output Automata, and prove its correctness using this formalism. Our algorithm is based on a composition of independent sub-algorithms solving variants of the classical leader election and shared register problems in distributed computing. The modularity of our algorithm facilitates its understanding and analysis, and can also aid in optimizing the algorithm or proving lower bounds. Our work demonstrates that formal methods can be feasibly applied to complex real-world problems to improve and simplify their solutions.
Checking Verifications of Protocols and Distributed Systems By Computer
, 1998
Abstract

Cited by 2 (1 self)
We provide a treatise about checking proofs of distributed systems by computer using general purpose proof checkers. In particular, we present two approaches to verifying and checking the verification of the Sequential Line Interface Protocol (SLIP), one using rewriting techniques and one using the so-called cones and foci theorem. Both verifications are carried out in the setting of process algebra. Finally, we present an overview of literature containing checked proofs. Note: The research of the second author is supported by Human Capital Mobility (HCM). 1 Proof checkers Anyone trying to use a proof checker, e.g. Isabelle [67, 68], HOL [29], Coq [20], PVS [78], Boyer-Moore [14] or many others that exist today has experienced the same frustration. It is very difficult to prove even the simplest theorem. In the first place it is difficult to get acquainted with the logical language of the system. Most systems employ higher order logics that are extremely versatile and expressive. Howev...
Tight Space Self-Stabilizing Uniform l-Mutual Exclusion
, 2000
Abstract
A self-stabilizing algorithm, regardless of the initial system state, converges in finite time to
The Mailbox Problem (Extended Abstract)
Abstract
Abstract. We propose and solve a synchronization problem called the mailbox problem, motivated by the interaction between devices and the processor in a computer. In this problem, a postman delivers letters to the mailbox of a housewife and uses a flag to signal a nonempty mailbox. The wife must remove all letters delivered to the mailbox and should not walk to the mailbox if it is empty. We present algorithms and an impossibility result for this problem.