Results 1–10 of 112
The Topological Structure of Asynchronous Computability
 JOURNAL OF THE ACM
, 1996
Abstract

Cited by 150 (11 self)
We give necessary and sufficient combinatorial conditions characterizing the tasks that can be solved by asynchronous processes, of which all but one can fail, that communicate by reading and writing a shared memory. We introduce a new formalism for tasks, based on notions from classical algebraic and combinatorial topology, in which a task's possible input and output values are each associated with high-dimensional geometric structures called simplicial complexes. We characterize computability in terms of the topological properties of these complexes. This characterization has a surprising geometric interpretation: a task is solvable if and only if the complex representing the task's allowable inputs can be mapped to the complex representing the task's allowable outputs by a function satisfying certain simple regularity properties. Our formalism thus replaces the "operational" notion of a wait-free decision task, expressed in terms of interleaved computations unfolding ...
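To make the notion of mapping one complex to another concrete, here is a small illustrative sketch (not from the paper, and ignoring the paper's specific regularity conditions): a complex is modeled as a set of simplices (frozensets of vertices), and a vertex map is *simplicial* when it carries every simplex of the source to a simplex of the target.

```python
# Illustrative sketch (assumption: complexes are represented as sets of
# frozensets of vertices; every face of a simplex is listed explicitly).

def is_simplicial(vertex_map, source, target):
    """Return True if mapping each simplex of `source` through `vertex_map`
    yields a simplex of `target`."""
    for simplex in source:
        image = frozenset(vertex_map[v] for v in simplex)
        if image not in target:
            return False
    return True

# Two tiny complexes: an edge {a, b} with its vertices, and an edge {x, y}.
source = {frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})}
target = {frozenset({"x"}), frozenset({"y"}), frozenset({"x", "y"})}
points = {frozenset({"x"}), frozenset({"y"})}  # two isolated points, no edge

print(is_simplicial({"a": "x", "b": "y"}, source, target))  # True
print(is_simplicial({"a": "x", "b": "y"}, source, points))  # False: image edge missing
```

Note that a map collapsing the edge (sending both `a` and `b` to `x`) is still simplicial, since the image `{x}` is a simplex of the target; the solvability characterization in the paper rests on finer properties than this basic check.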
Sharing Memory Robustly in Message-Passing Systems
, 1990
Abstract

Cited by 144 (8 self)
Emulators that translate algorithms from the shared-memory model to two different message-passing models are presented. Both are achieved by implementing a wait-free, atomic, single-writer multi-reader register in unreliable, asynchronous networks. The two message-passing models considered are a complete network with processor failures and an arbitrary network with dynamic link failures. These results make it possible to view the shared-memory model as a higher-level language for designing algorithms in asynchronous distributed systems. Any wait-free algorithm based on atomic, single-writer multi-reader registers can be automatically emulated in message-passing systems. The overhead introduced by these emulations is polynomial in the number of processors in the systems. Immediate new results are obtained by applying the emulators to known shared-memory algorithms. These include, among others, protocols to solve the following problems in the message-passing model in the presence of processor or link failures: multi-writer multi-reader registers, concurrent time-stamp systems, ℓ-exclusion, atomic snapshots, randomized consensus, and implementation of a class of data structures.
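The core idea behind such register emulations can be sketched in a few lines. The following is a hedged, sequential illustration (replicas held in local memory, no real network or concurrency; the class name and structure are this sketch's, not the paper's): each replica stores a (timestamp, value) pair, a write delivers a freshly stamped value to a majority, and a read queries a majority, adopts the highest-timestamped value, and writes it back. Since any two majorities intersect, a read always sees the latest completed write.

```python
# Sequential sketch of a majority-quorum single-writer register (assumptions:
# in-memory replicas stand in for networked processors; random quorums model
# the fact that only some majority may be reachable at a time).
import random

class MajorityRegister:
    def __init__(self, n):
        self.n = n
        self.replicas = [(0, None)] * n  # (timestamp, value) per replica
        self.ts = 0                      # the single writer's local timestamp

    def _majority(self):
        return random.sample(range(self.n), self.n // 2 + 1)

    def write(self, value):
        self.ts += 1
        for i in self._majority():       # deliver to a majority, collect "acks"
            self.replicas[i] = (self.ts, value)

    def read(self):
        quorum = self._majority()
        ts, val = max(self.replicas[i] for i in quorum)
        for i in quorum:                 # write-back phase, needed for atomicity
            if self.replicas[i][0] < ts:
                self.replicas[i] = (ts, val)
        return val

reg = MajorityRegister(5)
reg.write("a")
reg.write("b")
print(reg.read())  # "b": any two majorities of 5 replicas intersect
```

The write-back phase in `read` mirrors the step that makes the emulated register atomic rather than merely regular: a later read can never return an older value than an earlier read already returned.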
An Asynchronous Model of Locality, Failure, and Process Mobility
 THEORETICAL COMPUTER SCIENCE
, 1997
Abstract

Cited by 116 (4 self)
We present a model of distributed computation which is based on a fragment of the pi-calculus relying on asynchronous communication. We enrich the model with the following features: the explicit distribution of processes to locations, the failure of locations and their detection, and the mobility of processes. Our contributions are twofold. At the specification level, we give a synthetic and flexible formalization of the features mentioned above. At the verification level, we provide original methods to reason about the bisimilarity of processes in the presence of failures.
Algebraic Topology And Concurrency
 Theoretical Computer Science
, 1998
Abstract

Cited by 52 (12 self)
This article is intended to provide some new insights about concurrency theory using ideas from geometry, and more specifically from algebraic topology. The aim of the paper is twofold: we justify applications of geometrical methods in concurrency through some chosen examples, and we give the mathematical foundations needed to understand the geometric phenomena that we identify. In particular, we show that the usual notion of homotopy has to be refined to take into account a partial ordering describing the way time flows. This gives rise to new and interesting mathematical problems, as well as giving common ground to computer-scientific problems that have not previously been precisely related. The organization of the paper is as follows. In Section 2 we explain to what extent we can use geometrical ideas in computer science: we list a few of the potential or well-known areas of application and try to exemplify some of the properties of concurrent (and distributed) systems we are interested in. We first explain the interest of using geometric ideas for semantical reasons. Then we take the example of concurrent databases, with the problem of finding deadlocks and with some aspects of serializability theory. More general questions about schedules can be asked as well and related to geometric considerations, even for scheduling micro-instructions (and not only coarse-grained transactions, as for databases). The final example is that of fault-tolerant protocols for distributed systems, where subtle scheduling properties come into play. In Section 3 we give the first few definitions needed for modeling the topological spaces arising from Section 2. Basically, we need to define a topological space containing all traces of executions of the concu...
Fail-Awareness in Timed Asynchronous Systems
, 2003
Abstract

Cited by 49 (15 self)
We address the impossibility of implementing synchronous fault-tolerant service specifications in asynchronous distributed systems. We introduce a method for weakening a synchronous service specification so that it becomes implementable in "timed" asynchronous systems, that is, asynchronous systems in which processes have access to local hardware clocks. The method (1) adds to a service interface an exception indicator, so that a client knows at any time whether a server is currently providing its standard "synchronous" semantics or some other specified exceptional semantics; (2) ensures that the standard behavior provided when the exception indicator does not signal an exception is "similar" to the original synchronous service behavior; and (3) requires a server to provide its standard semantics whenever the underlying communication and process services exhibit "synchronous behavior." To illustrate our method, we show how the specifications of a synchronous datagram service and an internal clock synchronization service can be transformed into fail-aware service specifications. Further illustrations of the usefulness of fail-aware services are provided by describing a railway crossing service and a fail-aware weak group membership service.
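The exception-indicator idea can be sketched concretely. The following is a loose illustration under stated assumptions (the class, method names, and the single latency bound `delta` are this sketch's inventions, not the paper's interface): a request is timed with a local clock, and the indicator is raised whenever the observed latency exceeds the assumed synchrony bound, telling the client that only the weaker exceptional semantics is currently guaranteed.

```python
# Hedged sketch of a fail-aware wrapper: a local hardware clock (no global
# time) decides whether the timing bound held for the last request.
import time

class FailAwareService:
    def __init__(self, delta):
        self.delta = delta        # assumed synchrony bound, in seconds
        self.exception = False    # the exception indicator exposed to clients

    def request(self, handler, *args):
        start = time.monotonic()  # local clock only
        result = handler(*args)
        elapsed = time.monotonic() - start
        # Standard "synchronous" semantics are claimed only if the bound held.
        self.exception = elapsed > self.delta
        return result

svc = FailAwareService(delta=1.0)
svc.request(lambda x: x + 1, 41)
print(svc.exception)  # False whenever the local call met the 1-second bound
```

A real implementation would, as the abstract notes, tie the indicator to the observed behavior of the underlying communication and process services rather than to a single call's latency.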
Hundreds of Impossibility Results for Distributed Computing
 Distributed Computing
, 2003
Abstract

Cited by 47 (5 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
The BG distributed simulation algorithm
, 1997
Abstract

Cited by 43 (18 self)
A snapshot shared memory algorithm is presented, allowing a set of f+1 processes, any f of which may exhibit stopping failures, to "simulate" a larger number n of processes, also with at most f failures. One application of this simulation algorithm is to convert an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [4, 16, 24], this transformation implies that there is no k-fault-tolerant solution to the n-process k-set-agreement problem, for any n. More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. These notions are defined, and examples of their use are provided. The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity, expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy. The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write shared memory systems is also presented.
An Adaptive Collect Algorithm with Applications
 Distributed Computing
, 2001
Abstract

Cited by 38 (10 self)
In a shared-memory distributed system, n independent asynchronous processes communicate by reading and writing to shared memory. An algorithm is adaptive (to total contention) if its step complexity depends only on the actual number, k, of active processes in the execution; this number is unknown in advance and may change in different executions of the algorithm. Adaptive algorithms are inherently wait-free, providing fault-tolerance in the presence of an arbitrary number of crash failures and varying process speeds. A wait-free adaptive collect algorithm with O(k) step complexity is presented, together with its applications in wait-free adaptive algorithms for atomic snapshots, immediate snapshots and renaming. Keywords: contention-sensitive complexity, wait-free algorithms, asynchronous shared-memory systems, read/write registers, atomic snapshots, immediate atomic snapshots, renaming.
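A classic building block in adaptive shared-memory algorithms of this kind is the splitter: a wait-free object built from two read/write registers that directs each visiting process to "stop", "right", or "down", guaranteeing that at most one process stops and that not all concurrent visitors take the same direction. The sketch below is an illustration of that standard construction, not the paper's O(k) collect itself, and it is exercised sequentially; the guarantees hold under concurrency with atomic reads and writes.

```python
# Sketch of a splitter from two read/write registers (an illustration of a
# well-known adaptive building block, run sequentially here).

class Splitter:
    def __init__(self):
        self.last = None    # register: id of the last process to pass by
        self.door = True    # register: "door", closed by the first stopper

    def visit(self, pid):
        self.last = pid
        if not self.door:
            return "right"  # someone got here first: go right
        self.door = False
        if self.last == pid:
            return "stop"   # at most one process can return "stop"
        return "down"       # overtaken after closing the door: go down

s = Splitter()
print(s.visit("p1"))  # "stop": running alone, p1 captures the splitter
print(s.visit("p2"))  # "right": the door is already closed
```

Arranging splitters in a grid or triangle gives each process a distinct cell after a number of steps proportional to the contention it actually encountered, which is the flavor of step complexity the abstract describes.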
A simple constructive computability theorem for wait-free computation
 In Proceedings of the 1994 ACM Symposium on Theory of Computing, 243–252
, 1994
Set Consensus Using Arbitrary Objects
 In Proceedings of the thirteenth annual ACM symposium on Principles of distributed computing
, 1994
Abstract

Cited by 33 (17 self)
In the (N, k)-consensus task, each process in a group starts with a private input value, communicates with the others by applying operations to shared objects, and then halts after choosing a private output value. Each process is required to choose some process's input value, and the set of values chosen should have size at most k. This problem, first proposed by Chaudhuri in 1990, has been extensively studied using asynchronous read/write memory. In this paper, we investigate this problem in a more powerful asynchronous model in which processes may communicate through objects other than read/write memory, such as test&set variables. We prove two general theorems about the solvability of set consensus using objects other than read/write registers. The first theorem addresses the question of what kinds of shared objects are needed to solve (N, k)-consensus, and the second addresses the question of what kinds of tasks can be solved by N processes using (M, j)-consensus objects, for M ≤ N...
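The two correctness conditions of the task, as described above, are easy to state as a checker over a single execution's inputs and outputs: validity (every chosen value is some process's input) and k-agreement (at most k distinct values are chosen). The function below is a small illustration of those definitions, not anything from the paper.

```python
# Checker for the (N, k)-consensus conditions on one execution:
# validity and at-most-k distinct outputs.

def satisfies_set_consensus(inputs, outputs, k):
    valid = all(out in inputs for out in outputs)   # validity
    agrees = len(set(outputs)) <= k                 # k-agreement
    return valid and agrees

# N = 4 processes attempting (4, 2)-consensus:
print(satisfies_set_consensus([5, 7, 7, 9], [5, 7, 7, 5], k=2))  # True
print(satisfies_set_consensus([5, 7, 7, 9], [5, 7, 9, 5], k=2))  # False: 3 values
```

With k = 1 this is exactly the agreement condition of classical consensus, which is the boundary case the solvability theorems in the paper generalize.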