Results 1–7 of 7
The PCL Theorem. Transactions cannot be Parallel, Consistent and Live
In SPAA, 2014
Abstract

Cited by 2 (0 self)
We show that it is impossible to design a transactional memory system which ensures parallelism, i.e. transactions do not need to synchronize unless they access the same application objects, while ensuring very little consistency, i.e. a consistency condition, called weak adaptive consistency, introduced here, which is weaker than snapshot isolation, processor consistency, and any other consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.), and very little liveness, i.e. that transactions eventually commit if they run solo.
Includes a review of WTTM, the Fourth Workshop on the Theory of Transactional Memory.
Abstract
As usual, I conclude the year with an annual review of distributed computing awards and conferences. I begin by reporting on two prestigious awards: the Dijkstra Prize and the Principles of Distributed Computing Doctoral Dissertation Award. I then proceed with reviews of the two main distributed computing conferences, PODC, the ACM Symposium on Principles of Distributed Computing.
Practical Non-Blocking Unordered Lists
Abstract
This paper introduces new lock-free and wait-free unordered linked list algorithms. The composition of these algorithms according to the fast-path-slow-path methodology, a recently devised approach to creating fast wait-free data structures, is non-trivial, suggesting limitations to the applicability of the fast-path-slow-path methodology. The list algorithms introduced in this paper are shown to scale well across a variety of benchmarks, making them suitable for use both as standalone lists and as the foundation for wait-free stacks and non-resizable hash tables.
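The push-front insertion at the core of such lock-free lists can be sketched as a retry loop around a compare-and-swap (CAS) on the head pointer. This is a generic Treiber-style sketch, not the paper's algorithm; since Python has no hardware CAS on references, the `_cas_head` helper emulates one with a lock, which a real implementation would replace with a single atomic instruction.

```python
import threading

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LockFreeList:
    """Unordered list with CAS-based push-front insertion (illustrative sketch)."""
    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()  # emulation of hardware CAS only

    def _cas_head(self, expected, new):
        # Atomically: if head is still `expected`, swing it to `new`.
        with self._cas_lock:
            if self.head is expected:
                self.head = new
                return True
            return False

    def insert(self, value):
        while True:                        # retry loop typical of lock-free algorithms
            old = self.head
            node = Node(value, old)
            if self._cas_head(old, node):  # succeeds only if head was not changed
                return                     # by a concurrent insertion meanwhile

    def contains(self, value):
        n = self.head
        while n is not None:
            if n.value == value:
                return True
            n = n.next
        return False
```

The retry loop is what makes the scheme lock-free rather than wait-free: an insertion can be forced to retry by concurrent insertions, but some insertion always succeeds.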
Disjoint-Access Parallelism: Impossibility, Possibility, and Cost of Transactional Memory Implementations
Abstract
Disjoint-Access Parallelism (DAP) is considered one of the most desirable properties for maximizing the scalability of Transactional Memory (TM). This paper investigates the possibility and inherent cost of implementing a DAP TM that ensures two properties regarded as important for maximizing efficiency in read-dominated workloads, namely having invisible and wait-free read-only transactions. We first prove that relaxing Real-Time Order (RTO) is necessary to implement such a TM. This result motivates us to introduce Witnessable Real-Time Order (WRTO), a weaker variant of RTO that demands enforcing RTO only between directly conflicting transactions. We then show that adopting WRTO makes it possible to design a strictly DAP TM with invisible and wait-free read-only transactions, while preserving strong progressiveness for write transactions and an isolation level known in the literature as Extended Update Serializability. Finally, we shed light on the inherent inefficiency of DAP TM implementations that have invisible and wait-free read-only transactions by establishing lower bounds on the time and space complexity of such TMs.
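The intuition behind DAP can be illustrated with a toy transaction runner whose synchronization metadata is entirely per object: transactions over disjoint object sets then share no memory words and cannot interfere. This is an illustrative, lock-based sketch with hypothetical names, not the invisible-read TM construction studied in the paper.

```python
import threading

class TMObject:
    """An application object plus its own synchronization metadata.
    Keeping the lock inside the object (rather than in any global
    structure) is what lets disjoint transactions proceed independently."""
    def __init__(self, value=0):
        self.value = value
        self.lock = threading.Lock()

def run_transaction(objs, update):
    """Acquire locks only on the accessed objects, in a canonical order
    to avoid deadlock, then apply the update atomically."""
    ordered = sorted(objs, key=id)
    for o in ordered:
        o.lock.acquire()
    try:
        update(objs)
    finally:
        for o in ordered:
            o.lock.release()

# Two transactions over disjoint object sets: neither touches any
# memory word the other uses, so they could run fully in parallel.
a, b, c, d = TMObject(1), TMObject(2), TMObject(3), TMObject(4)
run_transaction([a, b], lambda os: setattr(os[0], "value", os[0].value + os[1].value))
run_transaction([c, d], lambda os: setattr(os[1], "value", os[0].value * os[1].value))
```

A non-DAP design would route both transactions through some shared structure, such as a global lock or global version clock, creating contention even between transactions that never touch the same application objects.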
Disjoint-Access Parallelism Does Not Entail Scalability
Abstract
Disjoint-Access Parallelism (DAP) stipulates that operations involving disjoint sets of memory words must be able to progress independently, without interfering with each other. In this work we argue for revising the two-decade-old wisdom that DAP is a binary condition splitting concurrent programs into scalable and non-scalable. We first present situations where DAP algorithms scale poorly, showing that not even algorithms that achieve this property provide scalability under all circumstances. Next, we show that algorithms which violate DAP can sometimes achieve the same scalability and performance as their DAP counterparts. We then show how, by violating DAP and without sacrificing scalability, we are able to circumvent three theoretical results showing that DAP is incompatible with other desirable properties of concurrent programs. Finally, we introduce a new property called generalized disjoint-access parallelism (GDAP), which estimates how much of an algorithm is DAP. Algorithms having a large DAP part scale similarly to DAP algorithms while not being subject to the same impossibility results.
Safety-Liveness Exclusion in Distributed Computing
Abstract
The history of distributed computing is full of tradeoffs between safety and liveness. For instance, one of the most celebrated results in the field, namely the impossibility of consensus in an asynchronous system, basically says that we cannot devise an algorithm that deterministically ensures consensus agreement and validity (i.e., safety) on the one hand, and consensus wait-freedom (i.e., liveness) on the other. The motivation of this work is to study the extent to which safety and liveness properties inherently exclude each other. More specifically, we ask, given any safety property S, whether we can determine the strongest (resp. weakest) liveness property that can (resp. cannot) be achieved with S. We show that, maybe surprisingly, the answers to these safety-liveness exclusion questions are in general negative. This has several ramifications in various distributed computing contexts. In the context of consensus, for example, this means that it is impossible to determine the strongest (resp. weakest) liveness property that can (resp. cannot) be ensured with linearizability. However, we present a way to circumvent these impossibilities and answer the safety-liveness question positively by considering a restricted form of liveness. We consider a definition that gathers generalized forms of obstruction-freedom and lock-freedom while making it possible to determine the strongest (resp. weakest) liveness property that can (resp. cannot) be implemented in the context of consensus and transactional memory.
Distributed Universality: Contention-Awareness, Wait-freedom, Object Progress, and Other Properties
2014
Abstract
A notion of a universal construction suited to distributed computing was introduced by M. Herlihy in his celebrated paper “Wait-free synchronization” (ACM TOPLAS, 1991). A universal construction is an algorithm that can be used to wait-free implement any object defined by a sequential specification. Herlihy’s paper shows that the basic system model, which supports only atomic read/write registers, has to be enriched with consensus objects to allow the design of universal constructions. The generalized notion of a k-universal construction was recently introduced by Gafni and Guerraoui (CONCUR, 2011). A k-universal construction is an algorithm that can be used to simultaneously implement k objects (instead of just one), with the guarantee that at least one of the k constructed objects progresses forever. While Herlihy’s universal construction relies on atomic registers and consensus objects, a k-universal construction relies on atomic registers and k-simultaneous consensus objects (which have been shown to be computationally equivalent to k-set agreement objects in the read/write system model where any number of processes may crash). This paper significantly extends the universality results introduced by Herlihy and Gafni-Guerraoui. In particular, we present a k-universal construction which satisfies the following five desired properties, not satisfied by the previous k-universal construction: (1) among the k objects that are constructed, at least ℓ objects (and not just one) are guaranteed to progress forever; (2) the progress condition for processes is wait-freedom, which means that each correct process executes an infinite number of operations on each object that progresses forever; (3) if one of the k constructed objects stops progressing, it stops in the same state
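The core idea of Herlihy-style universal constructions can be sketched as follows: processes use a chain of consensus objects to agree on a total order of operations, then deterministically replay the agreed prefix on a private copy of the sequential object. The `Consensus` class below is a sequential stand-in for a real consensus primitive, and the whole sketch is single-threaded and illustrative; it is not the k-universal construction of the paper.

```python
class Consensus:
    """One-shot consensus object: the first proposal wins.
    Sequential stand-in for a real consensus primitive."""
    def __init__(self):
        self.decided = None

    def propose(self, value):
        if self.decided is None:
            self.decided = value
        return self.decided

class Universal:
    """Sketch of a universal construction: agree on an operation order
    via a chain of consensus objects, then replay the agreed prefix.
    Operations are functions state -> (new_state, result)."""
    def __init__(self, initial_state):
        self._chain = [Consensus()]
        self._initial = initial_state

    def invoke(self, op):
        slot = 0
        while True:
            decided = self._chain[slot].propose(op)
            if len(self._chain) == slot + 1:
                self._chain.append(Consensus())  # keep a fresh slot available
            if decided is op:
                break          # our operation won this slot
            slot += 1          # someone else's operation won; try the next slot
        # Deterministically replay the agreed prefix on a fresh copy.
        state, result = self._initial, None
        for i in range(slot + 1):
            state, result = self._chain[i].decided(state)
        return result
```

Because every process replays the same agreed sequence from the same initial state, all copies stay consistent; this is the essence of implementing an arbitrary sequential object on top of consensus.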