Results 1 - 7 of 7
The PCL Theorem. Transactions cannot be Parallel, Consistent and Live
- In SPAA, 2014
"... We show that it is impossible to design a transactional mem-ory system which ensures parallelism, i.e. transactions do not need to synchronize unless they access the same appli-cation objects, while ensuring very little consistency, i.e. a consistency condition, called weak adaptive consistency, in- ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
We show that it is impossible to design a transactional memory system which ensures parallelism, i.e. transactions do not need to synchronize unless they access the same application objects, while ensuring very little consistency, i.e. a consistency condition, called weak adaptive consistency, introduced here and which is weaker than snapshot isolation, processor consistency, and any other consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.), and very little liveness, i.e. that transactions eventually commit if they run solo.
Includes a review of WTTM, the Fourth Workshop on the Theory of Transactional Memory.
"... As usual, I conclude the year with an annual review of distributed computing awards and conferences. I begin by reporting on two prestigious awards- the Dijkstra Prize and the Principles of Distributed Computing Doctoral Dissertation Award. I then proceed with reviews of the main two distributed com ..."
Abstract
- Add to MetaCart
(Show Context)
As usual, I conclude the year with an annual review of distributed computing awards and conferences. I begin by reporting on two prestigious awards: the Dijkstra Prize and the Principles of Distributed Computing Doctoral Dissertation Award. I then proceed with reviews of the two main distributed computing conferences, PODC, the ACM Symposium on Principles of Distributed Computing, ...
Practical Non-blocking Unordered Lists
"... This paper introduces new lock-free and wait-free unordered linked list algorithms. The composition of these algorithms according to the fast-path-slow-path methodology, a recently devised approach to creating fast wait-free data structures, is nontrivial, suggesting limitations to the applicability ..."
Abstract
- Add to MetaCart
(Show Context)
This paper introduces new lock-free and wait-free unordered linked list algorithms. The composition of these algorithms according to the fast-path-slow-path methodology, a recently devised approach to creating fast wait-free data structures, is nontrivial, suggesting limitations to the applicability of the fast-path-slow-path methodology. The list algorithms introduced in this paper are shown to scale well across a variety of benchmarks, making them suitable for use both as standalone lists and as the foundation for wait-free stacks and non-resizable hash tables.
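For readers unfamiliar with the setting, the sketch below shows the shape of a minimal lock-free unordered list in Java: insertion prepends a node with a CAS on the head, and membership is a plain traversal. This is a generic illustration under the usual shared-memory assumptions, not the algorithms from the paper; logical deletion, the wait-free variants, and the fast-path-slow-path composition are all omitted.

    import java.util.concurrent.atomic.AtomicReference;

    // Generic lock-free unordered list sketch (illustration only, not the
    // paper's algorithm): insert prepends a node with a CAS on the head,
    // contains is a plain traversal of an immutable chain of nodes.
    public class LockFreeUnorderedList<T> {

        private static final class Node<T> {
            final T value;
            final Node<T> next;
            Node(T value, Node<T> next) { this.value = value; this.next = next; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<>(null);

        // Lock-free insert: retry the CAS until the new node is linked in.
        public void insert(T value) {
            while (true) {
                Node<T> cur = head.get();
                if (head.compareAndSet(cur, new Node<>(value, cur))) {
                    return;
                }
            }
        }

        // Membership test: a single traversal (no removal in this sketch,
        // so there are no marked nodes to skip over).
        public boolean contains(T value) {
            for (Node<T> n = head.get(); n != null; n = n.next) {
                if (n.value.equals(value)) {
                    return true;
                }
            }
            return false;
        }
    }

The hard part the paper addresses, and which this sketch sidesteps, is supporting removal and wait-free operations without giving up scalability.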
Disjoint-Access Parallelism: Impossibility, Possibility, and Cost of Transactional Memory Implementations
"... Disjoint-Access Parallelism (DAP) is considered one of the most desirable properties to maximize the scalability of Trans-actional Memory (TM). This paper investigates the possi-bility and inherent cost of implementing a DAP TM that ensures two properties that are regarded as important to maximize e ..."
Abstract
- Add to MetaCart
(Show Context)
Disjoint-Access Parallelism (DAP) is considered one of the most desirable properties to maximize the scalability of Transactional Memory (TM). This paper investigates the possibility and inherent cost of implementing a DAP TM that ensures two properties that are regarded as important to maximize efficiency in read-dominated workloads, namely having invisible and wait-free read-only transactions. We first prove that relaxing Real-Time Order (RTO) is necessary to implement such a TM. This result motivates us to introduce Witnessable Real-Time Order (WRTO), a weaker variant of RTO that demands enforcing RTO only between directly conflicting transactions. Then we show that adopting WRTO makes it possible to design a strictly DAP TM with invisible and wait-free read-only transactions, while preserving strong progressiveness for write transactions and an isolation level known in the literature as Extended Update Serializability. Finally, we shed light on the inherent inefficiency of DAP TM implementations that have invisible and wait-free read-only transactions, by establishing lower bounds on the time and space complexity of such TMs.
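To make the notion of an invisible read concrete, the sketch below shows a generic versioned-read pattern in Java: the reader writes nothing to shared memory and validates by re-reading a per-object version counter. This is only an illustration of invisibility, not the DAP TM construction from the paper; the retry loop is not wait-free, and it assumes writers are serialized.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.AtomicReference;

    // Sketch of an "invisible" read: readers leave no trace in shared memory
    // and validate by re-reading a version counter. Not the paper's TM; the
    // retry loop is only obstruction-free, and writers are assumed to be
    // serialized (a single writer, or writes protected by an external lock).
    final class VersionedObject<T> {

        private final AtomicLong version = new AtomicLong(0);   // even = stable
        private final AtomicReference<T> value = new AtomicReference<>();

        // Writer: odd version while the update is in flight, even when done.
        void write(T newValue) {
            version.incrementAndGet();   // odd: write in progress
            value.set(newValue);
            version.incrementAndGet();   // even: write complete
        }

        // Invisible read: no shared-memory writes, just re-validation.
        T read() {
            while (true) {
                long before = version.get();
                T snapshot = value.get();
                long after = version.get();
                if (before == after && (before & 1L) == 0L) {
                    return snapshot;
                }
            }
        }
    }

Making such reads both invisible and wait-free, while keeping the TM disjoint-access parallel, is exactly where the paper's lower bounds apply.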
Disjoint-Access Parallelism Does Not Entail Scalability
"... Abstract. Disjoint Access Parallelism (DAP) stipulates that operations involving disjoint sets of memory words must be able to progress indepen-dently, without interfering with each other. In this work we argue towards revising the two decade old wisdom saying that DAP is a binary condi-tion that sp ..."
Abstract
- Add to MetaCart
(Show Context)
Disjoint Access Parallelism (DAP) stipulates that operations involving disjoint sets of memory words must be able to progress independently, without interfering with each other. In this work we argue towards revising the two-decade-old wisdom saying that DAP is a binary condition that splits concurrent programs into scalable and non-scalable. We first present situations where DAP algorithms scale poorly, thus showing that not even algorithms that achieve this property provide scalability under all circumstances. Next, we show that algorithms which violate DAP can sometimes achieve the same scalability and performance as their DAP counterparts. We then show how, by violating DAP and without sacrificing scalability, we are able to circumvent three theoretical results showing that DAP is incompatible with other desirable properties of concurrent programs. Finally, we introduce a new property called generalized disjoint-access parallelism (GDAP) which estimates how much of an algorithm is DAP. Algorithms having a large DAP part scale similarly to DAP algorithms while not being subject to the same impossibility results.
Safety-Liveness Exclusion in Distributed Computing
"... The history of distributed computing is full of trade-offs between safety and liveness. For instance, one of the most celebrated results in the field, namely the impossibility of consensus in an asynchronous system basically says that we cannot devise an algorithm that deterministically ensures cons ..."
Abstract
- Add to MetaCart
(Show Context)
The history of distributed computing is full of trade-offs between safety and liveness. For instance, one of the most celebrated results in the field, namely the impossibility of consensus in an asynchronous system, basically says that we cannot devise an algorithm that deterministically ensures consensus agreement and validity (i.e., safety) on the one hand, and consensus wait-freedom (i.e., liveness) on the other hand. The motivation of this work is to study the extent to which safety and liveness properties inherently exclude each other. More specifically, we ask, given any safety property S, whether we can determine the strongest (resp. weakest) liveness property that can (resp. cannot) be achieved with S. We show that, maybe surprisingly, the answers to these safety-liveness exclusion questions are in general negative. This has several ramifications in various distributed computing contexts. In the context of consensus, for example, this means that it is impossible to determine the strongest (resp. the weakest) liveness property that can (resp. cannot) be ensured with linearizability. However, we present a way to circumvent these impossibilities and answer the safety-liveness question positively by considering a restricted form of liveness. We consider a definition that gathers generalized forms of obstruction-freedom and lock-freedom while enabling us to determine the strongest (resp. weakest) liveness property that can (resp. cannot) be implemented in the context of consensus and transactional memory.
Distributed Universality: Contention-Awareness, Wait-freedom, Object Progress, and Other Properties
- 2014
"... Abstract: A notion of a universal construction suited to distributed computing has been introduced by M. Herlihy in his celebrated paper “Wait-free synchronization ” (ACM TOPLAS, 1991). A universal construction is an algorithm that can be used to wait-free implement any object defined by a sequentia ..."
Abstract
- Add to MetaCart
(Show Context)
A notion of a universal construction suited to distributed computing has been introduced by M. Herlihy in his celebrated paper “Wait-free synchronization” (ACM TOPLAS, 1991). A universal construction is an algorithm that can be used to wait-free implement any object defined by a sequential specification. Herlihy’s paper shows that the basic system model, which supports only atomic read/write registers, has to be enriched with consensus objects to allow the design of universal constructions. The generalized notion of a k-universal construction has been introduced more recently by Gafni and Guerraoui (CONCUR, 2011). A k-universal construction is an algorithm that can be used to simultaneously implement k objects (instead of just one object), with the guarantee that at least one of the k constructed objects progresses forever. While Herlihy’s universal construction relies on atomic registers and consensus objects, a k-universal construction relies on atomic registers and k-simultaneous consensus objects (which have been shown to be computationally equivalent to k-set agreement objects in the read/write system model where any number of processes may crash). This paper significantly extends the universality results introduced by Herlihy and Gafni-Guerraoui. In particular, we present a k-universal construction which satisfies the following five desired properties, which are not satisfied by the previous k-universal construction: (1) among the k objects that are constructed, at least ℓ objects (and not just one) are guaranteed to progress forever; (2) the progress condition for processes is wait-freedom, which means that each correct process executes an infinite number of operations on each object that progresses forever; (3) if one of the k constructed objects stops progressing, it stops in the same state ...
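As a rough illustration of the idea behind such universal constructions, the sketch below (an assumption-laden simplification, not Herlihy's wait-free construction and not the k-universal construction of the paper) lets processes agree slot by slot on the next operation to apply to a sequential object and replay the agreed prefix to obtain the current state. A CAS on each log slot stands in for the consensus object, and the announce/helping machinery needed for wait-freedom is left out.

    import java.util.concurrent.atomic.AtomicReferenceArray;
    import java.util.function.UnaryOperator;

    // Simplified universal-construction sketch: processes agree, slot by slot,
    // on the next operation to apply to a sequential object of type S, then
    // replay the agreed log to compute the current state. CAS on a log slot
    // plays the role of a consensus object; without helping, this version is
    // only lock-free, not wait-free.
    public class UniversalObject<S> {

        private static final int MAX_OPS = 1024;   // fixed-size log, sketch only
        private final AtomicReferenceArray<UnaryOperator<S>> log =
                new AtomicReferenceArray<>(MAX_OPS);
        private final S initialState;

        public UniversalObject(S initialState) { this.initialState = initialState; }

        // Apply an operation: win the "consensus" on the first free slot,
        // then replay the log up to and including that slot.
        public S apply(UnaryOperator<S> op) {
            for (int slot = 0; slot < MAX_OPS; slot++) {
                if (log.get(slot) == null && log.compareAndSet(slot, null, op)) {
                    return replay(slot);
                }
            }
            throw new IllegalStateException("log full");
        }

        private S replay(int upTo) {
            S state = initialState;
            for (int i = 0; i <= upTo; i++) {
                state = log.get(i).apply(state);
            }
            return state;
        }
    }

For example, a shared counter is obtained by instantiating UniversalObject<Integer> with 0 and submitting operations such as x -> x + 1; building k such objects at once, with at least ℓ of them progressing forever, is the subject of the paper.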