Results 1-10 of 38
Shared-memory mutual exclusion: Major research trends since 1986
 Distributed Computing
, 2003
Cited by 47 (6 self)
* Exclusion: At most one process executes its critical section at any time.
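The exclusion property quoted above can be illustrated with a minimal sketch (assuming a test-and-set style spin lock; `TestAndSetLock` and the occupancy counters are illustrative names, and `threading.Lock` merely stands in for an atomic test&set bit):

```python
import threading

class TestAndSetLock:
    """Illustrative spin lock: threading.Lock models an atomic test&set bit."""
    def __init__(self):
        self._bit = threading.Lock()

    def acquire(self):
        # "test-and-set": try to grab the bit; spin while it is already set
        while not self._bit.acquire(blocking=False):
            pass

    def release(self):
        self._bit.release()

lock = TestAndSetLock()
in_cs = 0        # number of threads currently inside the critical section
max_in_cs = 0    # maximum occupancy ever observed

def worker():
    global in_cs, max_in_cs
    for _ in range(1000):
        lock.acquire()
        in_cs += 1                        # enter critical section
        max_in_cs = max(max_in_cs, in_cs)
        in_cs -= 1                        # leave critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_cs)   # 1: at most one thread was ever inside the critical section
```

The assertion that `max_in_cs` never exceeds 1 is exactly the exclusion property: no interleaving admits two processes in the critical section simultaneously.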
An Adaptive Collect Algorithm with Applications
 Distributed Computing
, 2001
Cited by 32 (10 self)
In a shared-memory distributed system, n independent asynchronous processes communicate by reading and writing to shared memory. An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number, k, of active processes in the execution; this number is unknown in advance and may change in different executions of the algorithm. Adaptive algorithms are inherently wait-free, providing fault-tolerance in the presence of an arbitrary number of crash failures and varying process speeds. A wait-free adaptive collect algorithm with O(k) step complexity is presented, together with its applications in wait-free adaptive algorithms for atomic snapshots, immediate snapshots and renaming. Keywords: contention-sensitive complexity, wait-free algorithms, asynchronous shared-memory systems, read/write registers, atomic snapshots, immediate atomic snapshots, renaming. Work supported by the fund for the promotion of research in the Technion. Department of Computer Science, The Technion, Haifa 32000, Israel: hagit@cs.technion.ac.il, leonf@cs.technion.ac.il. Computer Science Department, UCLA: eli@cs.ucla.edu.
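As background, a trivial non-adaptive store/collect object can be sketched as follows (illustrative names, not the paper's construction; `collect` here always performs n reads, which is exactly the Θ(n) cost that adaptive algorithms reduce to O(k) for k active processes):

```python
class StoreCollect:
    """Non-adaptive store/collect sketch over single-writer registers."""
    def __init__(self, n):
        self.reg = [None] * n            # reg[i]: process i's own register

    def store(self, pid, value):
        self.reg[pid] = value            # one write, to the caller's register

    def collect(self):
        # n reads, regardless of how many processes are active:
        # return (pid, value) pairs for every process that has stored
        return [(i, v) for i, v in enumerate(self.reg) if v is not None]

sc = StoreCollect(8)
sc.store(2, "a")
sc.store(5, "b")
print(sc.collect())                      # [(2, 'a'), (5, 'b')]
```

An adaptive collect must return the same information while taking a number of steps that depends only on k = 2 active processes here, not on n = 8.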
Efficient Adaptive Collect using Randomization
 Proc. of the Intl. Symp. on Distributed Computing (DISC)
, 2004
Cited by 17 (2 self)
An adaptive algorithm, whose step complexity adjusts to the number of active processes, is attractive for distributed systems with a highly variable number of processes. The cornerstone of many adaptive algorithms is an adaptive mechanism to collect up-to-date information from all participating processes. To date, all known collect algorithms either have nonlinear step complexity or they are impractical because of unrealistic memory overhead. This paper …
Computing with reads and writes in the absence of step contention
 In Proceedings of the 19th International Symposium on Distributed Computing (DISC’05)
, 2005
Cited by 17 (13 self)
Abstract. This paper studies implementations of concurrent objects that exploit the absence of step contention. These implementations use only reads and writes when a process is running solo. The other processes might be busy with other objects, swapped out, failed, or simply delayed by a contention manager. We study in this paper two classes of such implementations, according to how they handle the case of step contention. The first kind, called obstruction-free implementations, is not required to terminate in that case. The second kind, called solo-fast implementations, terminates using powerful operations (e.g., C&S). We present a generic obstruction-free object implementation that has a linear contention-free step complexity (number of reads and writes taken by a process running solo) and uses a linear number of read/write objects. We show that these complexities are asymptotically optimal, and hence generic obstruction-free implementations are inherently slow. We also prove that obstruction-free implementations cannot be gracefully degrading, namely, be non-blocking when the contention manager operates correctly, and remain (at least) obstruction-free when the contention manager misbehaves. Finally, we show that any object has a solo-fast implementation, based on a solo-fast implementation of consensus. The implementation has linear contention-free step complexity, and we conjecture solo-fast implementations must have non-constant step complexity, i.e., they are also inherently slow.
A Simple Algorithmic Characterization of Uniform Solvability (Extended Abstract)
 Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2002)
, 2002
Cited by 11 (6 self)
The Herlihy-Shavit (HS) conditions characterizing the solvability of asynchronous tasks over n processors have been a milestone in the development of the theory of distributed computing. Yet, they were of no help when researchers sought algorithms that do not depend on n. To help in this pursuit we investigate the uniform solvability of an infinite uniform sequence of tasks T_0, T_1, T_2, ..., where T_i is a task over processors p_0, p_1, ..., p_i, and T_i extends T_{i-1}. We say that such a sequence is uniformly solvable if there exist protocols to solve each T_i and the protocol for T_i extends the protocol for T_{i-1}. This paper establishes that although each T_i may be solvable, the uniform sequence is not necessarily uniformly solvable. We show this by proposing a novel uniform sequence of solvable tasks and proving that the sequence is not amenable to a uniform solution. We then extend the HS conditions for a task over n processors to uniform solvability in a natural way. The technique we use to accomplish this is to generalize the alternative algorithmic proof, by Borowsky and Gafni, of the HS conditions, by showing that the infinite uniform sequence of Immediate Snapshot tasks is uniformly solvable. A side benefit of the technique is a widely applicable methodology for the development of uniform protocols.
The Complexity of Obstruction-Free Implementations
 J. ACM
, 2009
Cited by 6 (4 self)
Obstruction-free implementations of concurrent objects are optimized for the common case where there is no step contention, and were recently advocated as a solution to the costs associated with synchronization without locks. In this paper we examine this claim, which requires precisely defining the notions of obstruction-freedom and step contention. We consider several classes of obstruction-free implementations, present corresponding generic object implementations, and prove lower bounds on their complexity. Viewed collectively, our results establish that the worst-case operation time complexity of obstruction-free implementations is high, even in the absence of step contention. We also show that lock-based implementations are not subject to some of the time-complexity lower bounds we present.
Time and space lower bounds for implementations using k-CAS
 In DISC
, 2005
Cited by 6 (1 self)
Abstract. This paper presents lower bounds on the time and space complexity of implementations that use the k compare-and-swap (k-CAS) synchronization primitives. We prove that the use of k-CAS primitives can improve neither the time nor the space complexity of implementations of widely used concurrent objects, such as counter, stack, queue, and collect. Surprisingly, the use of k-CAS may even increase the space complexity required by such implementations. We prove that the worst-case average number of steps performed by processes for any n-process implementation of a counter, stack or queue object is Ω(log_{k+1} n), even if the implementation can use j-CAS for j ≤ k. This bound holds even if a k-CAS operation is allowed to read the k values of the objects it accesses and return these values to the calling process. This bound is tight. We also consider more realistic non-reading k-CAS primitives. An operation of a non-reading k-CAS primitive is only allowed to return a success/failure indication. For implementations of the collect object that use such primitives, we prove that the worst-case average number of steps performed by processes is Ω(log_2 n), regardless of the value of k. This implies a round complexity lower bound of Ω(log_2 n) for such implementations. As there is an O(log_2 n) round complexity implementation of collect that uses only reads and writes, these results establish that non-reading k-CAS is no stronger than read and write for collect implementation round complexity. We also prove that k-CAS does not improve the space complexity of implementing many objects (including counter, stack, queue, and single-writer snapshot). An implementation has to use at least n base objects even if k-CAS is allowed, and if all operations (other than read) swap exactly k base objects, then the space complexity must be at least k · n.
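For reference, the semantics of the reading k-CAS primitive discussed above can be sketched as follows (a lock models the atomicity the primitive would provide; `SharedMemory` and its method names are illustrative, not from the paper):

```python
import threading

class SharedMemory:
    """Shared cells with read and a reading k-CAS (atomicity modeled by a lock).
    A non-reading k-CAS would return only the success bit, not the values."""
    def __init__(self, n):
        self._cells = [0] * n
        self._lock = threading.Lock()

    def read(self, i):
        return self._cells[i]

    def kcas(self, addrs, expected, new):
        # Atomically: if cells[a] == e for every pair (a, e), install the
        # new values and succeed; otherwise change nothing and fail.
        # Either way, a *reading* k-CAS returns the values it observed.
        with self._lock:
            current = [self._cells[a] for a in addrs]
            if current != list(expected):
                return False, current
            for a, v in zip(addrs, new):
                self._cells[a] = v
            return True, current

mem = SharedMemory(4)
ok, _ = mem.kcas([0, 1], [0, 0], [5, 6])
print(ok, mem.read(0), mem.read(1))      # True 5 6: both expectations matched
ok, seen = mem.kcas([0, 1], [0, 6], [7, 8])
print(ok, seen)                          # False [5, 6]: memory left unchanged
```

Ordinary CAS is the k = 1 case; the paper's Ω(log_{k+1} n) bound says that even atomically updating k locations at once buys only a constant-factor improvement in the base of the logarithm.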
Computing with infinitely many processes under assumptions on concurrency and participation
 In 14th Int. Symp. on Distributed Computing (DISC)
, 2000
Cited by 5 (0 self)
We explore four classic problems in concurrent computing (election, mutual exclusion, consensus, and naming) when the number of processes which may participate is infinite. Partial information about the number of actually participating processes and the concurrency level is shown to affect the possibility and complexity of solving these problems. We survey and generalize work carried out in models with finite bounds on the number of processes, and prove several new results. These include improved bounds for election when participation is required (even for finitely many processes, as investigated by Styer and Peterson [SP89]) and a new adaptive starvation-free mutual exclusion algorithm for unbounded concurrency. We survey results in models with shared objects stronger than atomic registers, such as test&set bits, semaphores or read-modify-write registers, and update them for the infinite process case.
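For instance, with a single test&set bit, one-shot election works no matter how many processes participate: the unique process that sees the bit's previous value as false is elected. A sketch with illustrative names (the lock only models the bit's atomicity):

```python
import threading

class TestAndSetBit:
    """Test&set bit: set() atomically sets the bit, returning its old value."""
    def __init__(self):
        self._bit = False
        self._lock = threading.Lock()

    def set(self):
        with self._lock:                  # atomic: read old value, set to True
            prev, self._bit = self._bit, True
            return prev

bit = TestAndSetBit()
winners = []

def participate(pid):
    if not bit.set():                     # saw False first => this one is elected
        winners.append(pid)

threads = [threading.Thread(target=participate, args=(i,)) for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(winners))                       # 1: exactly one process is elected
```

The construction is independent of the number of participants, which is why objects like test&set remain useful in the infinite-process models the paper studies.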
Collective asynchronous reading with polylogarithmic worst-case overhead
 in Proceedings, 36th ACM Symposium on Theory of Computing (STOC), 2004
Cited by 4 (2 self)
The Collect problem for an asynchronous shared-memory system has the objective for the processors to learn all values of a collection of shared registers, while minimizing the total number of read and write operations. First abstracted by Saks, Shavit, and Woll [37], Collect is among the standard problems in distributed computing. The model consists of n asynchronous processes, each with a single-writer multi-reader register of polynomial capacity. The best previously known deterministic solution performs O(n^{3/2} log n) reads and writes, and it is due to Ajtai, Aspnes, Dwork, and Waarts [3]. This paper presents a new deterministic algorithm that performs O(n log^7 n) read/write operations, thus substantially improving the best previous upper bound. Using an approach based on epidemic rumor-spreading, the novelty of the new algorithm is in using a family of expander graphs and ensuring …
Computing in the presence of timing failures
 In Proceedings of the International Conference on Distributed Computing Systems (ICDCS)
, 2007
Cited by 4 (1 self)
Timing failures refer to a situation where the environment in which a system operates does not behave as expected regarding the timing assumptions, that is, the timing constraints are not met. In the immense body of work on designing fault-tolerant systems, the types of failures usually considered are process failures, link failures, message loss and memory failures; and it is usually (implicitly) assumed that there are no timing failures. In this paper we investigate the ability to recover automatically from transient timing failures. We introduce and formally define the concept of algorithms that are resilient to timing failures, and demonstrate the importance of the new concept by presenting consensus and mutual exclusion algorithms, using atomic registers only, that are resilient to timing failures.