Results 1–8 of 8
Causal Memory: Definitions, Implementation and Programming
, 1994
Abstract

Cited by 97 (9 self)
The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory. College of Computing, Georgia Institute of Technology, Atlanta, Georgia 30332-0280. This ...
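The causal-ordering guarantee this abstract describes can be illustrated with vector timestamps, one common way to track causality between writes. This is a minimal assumed sketch, not the paper's implementation; the function and process names are illustrative:

```python
# Hypothetical sketch: deciding whether two writes are causally related
# using vector timestamps. Under causal memory, all processes must agree
# on the order of causally related writes, but concurrent writes may be
# observed in different orders by different processes.

def happens_before(vt_a, vt_b):
    """True if timestamp vt_a causally precedes vt_b."""
    return all(a <= b for a, b in zip(vt_a, vt_b)) and vt_a != vt_b

def concurrent(vt_a, vt_b):
    """Neither write precedes the other: order may vary per process."""
    return not happens_before(vt_a, vt_b) and not happens_before(vt_b, vt_a)

# Three processes; write w1 at P0, then w2 at P1 after observing w1:
w1 = (1, 0, 0)
w2 = (1, 1, 0)   # includes w1's event, so w1 -> w2
w3 = (0, 0, 1)   # issued independently at P2

assert happens_before(w1, w2)   # every process orders w1 before w2
assert concurrent(w2, w3)       # w2 vs. w3 order may differ per process
```

The weakening relative to sequential consistency is exactly the `concurrent` case: no global agreement is required there, which is the source of the extra admissible executions mentioned above.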
Efficient, Strongly Consistent Implementations of Shared Memory (Extended Abstract)
, 1992
Abstract

Cited by 8 (2 self)
Marios Mavronicolas and Dan Roth, Aiken Computation Laboratory, Harvard University, Cambridge, MA 02138, USA. Abstract. We present linearizable implementations for two distributed organizations of multiprocessor shared memory. For the full caching organization, where each process keeps a local copy of the whole memory, we present a linearizable implementation of read/write memory objects that achieves essentially optimal efficiency and allows quantitative degradation of the less frequently employed operation. For the single ownership organization, where each memory object is "owned" by a single process, which is most likely to access it frequently, our linearizable implementation allows local operations to be performed much faster (almost instantaneously) than remote ones. We suggest combining these organizations in a "hybrid" memory structure that allows processes to access local and remote information in a transparent manner, while at a lower level of the memory consistency sys...
Consistency Conditions for Multi-Object Distributed Operations
 In International Conference on Distributed Computing Systems
, 1998
Abstract

Cited by 6 (0 self)
The traditional Distributed Shared Memory (DSM) model provides atomicity at the level of reads and writes on single objects. Therefore, multi-object operations such as double compare-and-swap and atomic m-register assignment cannot be efficiently expressed in this model. We extend the traditional DSM model to allow operations to span multiple objects. We show that memory consistency conditions such as sequential consistency and linearizability can be extended to this general model. We also provide algorithms to implement these consistency conditions in a distributed system.
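As an illustration of the multi-object operations the abstract mentions, the sketch below implements a double compare-and-swap over two shared objects by acquiring per-object locks in a fixed global order (to avoid deadlock). This is an assumed shared-memory example, not the paper's distributed algorithm:

```python
# Illustrative sketch (not the paper's algorithm): a double
# compare-and-swap spanning two objects, made atomic by taking
# per-object locks in a fixed acquisition order.
import threading

class Obj:
    def __init__(self, value):
        self.value = value
        self.lock = threading.Lock()

def dcas(x, y, expect_x, expect_y, new_x, new_y):
    """Atomically: if x == expect_x and y == expect_y, update both."""
    first, second = sorted((x, y), key=id)   # fixed global lock order
    with first.lock, second.lock:
        if x.value == expect_x and y.value == expect_y:
            x.value, y.value = new_x, new_y
            return True
        return False

a, b = Obj(0), Obj(0)
assert dcas(a, b, 0, 0, 1, 2)       # both expectations match: succeeds
assert not dcas(a, b, 0, 0, 9, 9)   # stale expectations: no change
assert (a.value, b.value) == (1, 2)
```

The point of the abstract is that such operations cannot be expressed efficiently when atomicity is only available per single object; extending the consistency conditions to multi-object spans is what makes them first-class.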
Linearizable Read/Write Objects
 In Proceedings of the Twenty-Ninth Annual Allerton Conference on Communication, Control and Computing
, 1999
Abstract

Cited by 4 (0 self)
We study the cost of implementing linearizable read/write objects for shared-memory multiprocessors under various assumptions on the available timing information. We take as cost measure the worst-case response time of performing an operation in distributed implementations of virtual shared memory consisting of such objects. It is assumed that processes have clocks that run at the same rate as real time and all messages incur a delay in the range [d − u, d] for some known constants u and d, 0 ≤ u ≤ d. In the perfect clocks model, where processes have perfectly synchronized clocks and every message incurs a delay of exactly d, we present a family of optimal linearizable implementations, parameterized by a constant β, 0 ≤ β ≤ 1, for which the worst-case response times for read and write operations are βd and (1 − β)d, respectively. The parameter β may be appropriately chosen to account for the relative frequencies of read and write operations. Our main result is the first kno...
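The βd/(1 − β)d tradeoff stated in the abstract can be made concrete with a small worked example. The numbers below are illustrative, not taken from the paper:

```python
# Worked example of the read/write response-time tradeoff: with maximum
# message delay d and a parameter beta in [0, 1], worst-case read cost
# is beta*d and worst-case write cost is (1 - beta)*d, so their sum is
# always d regardless of how beta is chosen.

def response_times(d, beta):
    assert 0.0 <= beta <= 1.0
    return beta * d, (1.0 - beta) * d   # (read time, write time)

d = 10.0  # maximum message delay, e.g. in milliseconds
# Read-heavy workload: choose a small beta to make reads cheap.
r, w = response_times(d, beta=0.2)
print(r, w)   # 2.0 8.0 -- fast reads, slower writes
# Balanced workload:
r, w = response_times(d, beta=0.5)
print(r, w)   # 5.0 5.0
```

Choosing β small favors reads at the expense of writes and vice versa, which is exactly the "relative frequencies" tuning the abstract describes; the constant sum r + w = d reflects that the tradeoff shifts cost rather than removing it.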
Consistency Conditions for Multi-Object Distributed Operations
Abstract
Abstract. The traditional Distributed Shared Memory (DSM) model provides atomicity at the level of reads and writes on single objects. Therefore, multi-object operations such as double compare-and-swap and atomic m-register assignment cannot be efficiently expressed in this model. We extend the traditional DSM model to allow operations to span multiple objects. We show that memory consistency conditions such as sequential consistency and linearizability can be extended to this general model. We also provide algorithms to implement these consistency conditions in a distributed system.
Timing-Based Distributed Computation: Algorithms and Impossibility Results. A thesis presented by
Linearizable Read/Write Objects
, 1998
Abstract
We study the cost of using message passing to implement linearizable read/write objects for shared-memory multiprocessors under various assumptions on the available timing information. We take as cost measures the worst-case response times for performing read and write operations in distributed implementations of virtual shared memory consisting of such objects, and the sum of these response times. It is assumed that processes have clocks that run at the same rate as real time and are within ε of each other, for some known precision constant ε ≥ 0. All messages incur a delay in the range [d − u, d] for some known constants u and d, 0 ≤ u ≤ d. For the perfect clocks model, where clocks are perfectly synchronized, i.e., ε = 0, and every message incurs a delay of exactly d, we present a linearizable implementation which achieves worst-case response times for read and write operations of βd and (1 − β)d, respectively; β is a tradeoff parameter, 0 ≤ β ≤ 1, which may be tuned to account for the relative frequencies of read and write operations. This implementation is optimal with respect to the sum of the worst-case response times for read and write operations. We next turn to the approximately synchronized clocks model, where clocks are only approximately
Fault Tolerance Bounds for Asynchronous Memory Consistency
, 1999
Abstract
A wait-free distributed algorithm is one in which every process can complete an operation in a finite number of steps. A generalization of wait-freedom is t-resilience, in which every process can complete an operation in a finite number of steps if no more than t processes fail (by stopping). We study the conditions under which algorithms that implement distributed shared memory (DSM) can be implemented resiliently and in a non-blocking manner on asynchronous systems, in which there is no known upper bound on message latencies. We derive upper bounds on the number of faults that can be tolerated by an object, based on the consistency of histories of certain forms. From these general bounds, we derive bounds for linearizability, sequential consistency, processor consistency, and some weaker memories. We show that these latter bounds are tight by displaying implementations that achieve them. The proof technique for the upper bounds is of independent interest, due to its applicabi...
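To make the notion of t-resilience concrete, here is a hedged sketch of a majority-quorum replicated register that tolerates t = (n − 1) // 2 crash failures. This is an assumed illustration of the general idea (any two majorities intersect), not the paper's construction or its bounds:

```python
# Illustrative sketch: a replicated register over n replicas that keeps
# answering reads and writes while at most t replicas crash, provided
# n >= 2t + 1. A crashed replica is modeled as None.

class Replica:
    def __init__(self):
        self.ts, self.value = 0, None

def write(replicas, ts, value, t):
    acks = 0
    for r in replicas:
        if r is not None:
            r.ts, r.value = ts, value
            acks += 1
    assert acks >= len(replicas) - t, "write quorum unavailable"

def read(replicas, t):
    live = [r for r in replicas if r is not None]
    assert len(live) >= len(replicas) - t, "read quorum unavailable"
    return max(live, key=lambda r: r.ts).value   # highest timestamp wins

n, t = 5, 2                        # n >= 2t + 1
replicas = [Replica() for _ in range(n)]
write(replicas, ts=1, value="x", t=t)
replicas[0] = replicas[1] = None   # crash t replicas
print(read(replicas, t))           # still returns "x"
```

Because every write reaches at least n − t replicas and every read consults at least n − t, the two sets overlap in at least one replica holding the latest completed write; this overlap argument is the standard intuition behind fault-tolerance bounds of the kind the abstract derives.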