Results 11–20 of 26
Operation-valency and the cost of coordination
 In Proceedings of the 22nd Annual ACM Symposium on Principles of Distributed Computing (PODC), 2003
Abstract

Cited by 6 (3 self)
This paper introduces operation-valency, a generalization of the valency proof technique originated by Fischer, Lynch, and Paterson. By focusing on critical events that influence the return values of individual operations rather than on critical events that influence a protocol's single return value, the new technique allows us to derive a collection of realistic lower bounds for lock-free implementations of concurrent objects such as linearizable queues, stacks, sets, hash tables, shared counters, approximate agreement, and more. By realistic we mean that they follow the real-world model introduced by Dwork, Herlihy, and Waarts, counting both memory references and memory stalls due to contention, and that they allow the combined use of read, write, and read-modify-write operations available on current machines. By using the operation-valency technique, we derive an Ω(√n) lower bound on the number of non-cached shared memory accesses in the worst-case time complexity of lock-free implementations of objects in Influence(n), a wide class of concurrent objects including all of those mentioned above, in which an individual operation can be influenced by all others. We also prove the existence of a fundamental relationship between the space complexity, latency, contention, and "influence level" of any lock-free object implementation. Our results are broad in that they hold for implementations combining read/write memory and any collection of read-modify-write operations, and in that they apply even if shared memory words have unbounded size.
An O(1) RMRs leader election algorithm
 In Proc. ACM PODC, 2006
Abstract

Cited by 6 (2 self)
The leader election problem is a fundamental coordination problem. We present leader election algorithms for multiprocessor systems where processes communicate by reading and writing shared memory asynchronously, and do not fail. In particular, we consider the cache-coherent (CC) and distributed shared memory (DSM) models of such systems. We present leader election algorithms that perform a constant number of remote memory references (RMRs) in the worst case. Our algorithms use splitter-like objects [6, 9] in a novel way, by organizing active processes into teams that share work. As there is an Ω(log n) lower bound on the RMR complexity of mutual exclusion for n processes using reads and writes only [10], our result separates the mutual exclusion and leader election problems in terms of RMR complexity in both the CC and DSM models. Our result also implies that any algorithm using reads, writes and one-time test-and-set objects can be simulated by an algorithm using reads and writes with only a constant blowup of the RMR complexity; proving this is easy in the CC model, but presents subtle challenges in ...
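The splitter-like objects the abstract mentions can be illustrated with a minimal sketch in the style of the Moir–Anderson splitter (class and method names here are illustrative, not taken from the paper): each arriving process either stops, or is diverted left or right, and at most one process can ever stop at a given splitter.

```python
# Minimal single-splitter sketch (Moir-Anderson style); names are illustrative.
# In a solo run a process stops; once the door is closed, later arrivals are
# diverted. At most one process ever returns "stop" at a given splitter.

class Splitter:
    def __init__(self):
        self.race = None   # register X: last process to announce itself
        self.door = True   # register Y: open until some process passes through

    def split(self, pid):
        self.race = pid
        if not self.door:
            return "right"   # someone already passed through the door
        self.door = False
        if self.race == pid:
            return "stop"    # no one overtook us: we win this splitter
        return "left"        # we were overtaken between the two checks
```

A solo caller returns "stop"; any later caller finds the door closed and is sent "right". The algorithm in the paper wires many such objects together and, unlike this single-object sketch, must keep the RMR count constant under true concurrency.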
On the Inherent Weakness of Conditional Synchronization Primitives
 In Proceedings of the 23rd Annual ACM Symposium on Principles of Distributed Computing, 2004
Abstract

Cited by 5 (2 self)
The “wait-free hierarchy” classifies multiprocessor synchronization primitives according to their power to solve consensus. The classification is based on assigning a number n to each synchronization primitive, where n is the maximal number of processes for which deterministic wait-free consensus can be solved using instances of the primitive and read/write registers. Conditional synchronization primitives, such as compare-and-swap and load-linked/store-conditional, can implement deterministic wait-free consensus for any number of processes (they have consensus number ∞), and are thus considered to be among the strongest synchronization primitives. To some extent because of that, compare-and-swap and load-linked/store-conditional have become the synchronization primitives of choice, and have been implemented in hardware in many multiprocessor architectures. This paper shows that, though they are strong in the context of consensus, conditional synchronization primitives are not efficient in terms of memory space for implementing many key objects. Our results hold for starvation-free implementations of mutual exclusion, and for wait-free implementations of a large class of concurrent objects, that we call Visible(n). Roughly, Visible(n) is a class that includes all objects that support some operation that must perform a “visible” ...
Timing-based mutual exclusion with local spinning
 In 17th International Symposium on Distributed Computing, LNCS 2848, October 2003
Abstract

Cited by 2 (0 self)
We consider the time complexity of shared-memory mutual exclusion algorithms based on reads, writes, and comparison primitives under the remote-memory-reference (RMR) time measure. For asynchronous systems, a lower bound of Ω(log N / log log N) RMRs per critical-section entry has been established in previous work, where N is the number of processes. Also, algorithms with O(log N) time complexity are known. Thus, for algorithms in this class, logarithmic or near-logarithmic RMR time complexity is fundamentally required.
Solo-Valency and the Cost of Coordination
, 2007
Abstract

Cited by 1 (1 self)
This paper introduces solo-valency, a variation on the valency proof technique originated by Fischer, Lynch, and Paterson. The new technique focuses on critical events that influence the responses of solo runs by individual operations, rather than on critical events that influence a protocol’s single decision value. It allows us to derive √n lower bounds on the time to perform an operation for lock-free implementations of concurrent objects such as linearizable queues, stacks, sets, hash tables, counters, approximate agreement, and more. Time is measured as the number of distinct base objects accessed and the number of stalls caused by contention in accessing memory, incurred by a process as it performs a single operation. We introduce the influence level metric that quantifies the extent to which the response of a solo execution of one process can be changed by other processes. We then prove the existence of a relationship between the space complexity, latency, contention and influence level of all lock-free object implementations. Our results are broad in that they hold for implementations that may use any collection of read-modify-write operations in addition to read and write, and in that they apply even if base objects have unbounded size.
A Time Complexity Lower Bound for Adaptive Mutual Exclusion
, 2007
Abstract

Cited by 1 (0 self)
We consider the time complexity of adaptive mutual exclusion algorithms, where “time” is measured by counting the number of remote memory references required per critical-section access. For systems that support (only) read, write, and comparison primitives (such as compare-and-swap), we establish a lower bound that precludes a deterministic algorithm with o(k) time complexity, where k is point contention. In particular, it is impossible to construct a deterministic O(log k) algorithm based on such primitives.
Adaptive and Efficient Mutual Exclusion (Extended Abstract)
, 2000
Abstract

Cited by 1 (0 self)
Hagit Attiya and Vita Bortnikov, Department of Computer Science, The Technion, Haifa 32000, Israel (hagit@cs.technion.ac.il, vitab@cs.technion.ac.il). ABSTRACT: A distributed algorithm is adaptive if its performance depends on k, the number of processes that are concurrently active during the algorithm's execution (rather than on n, the total number of processes). This paper presents an adaptive algorithm for mutual exclusion using only read and write operations. The worst-case step complexity cannot be a measure for the performance of mutual exclusion algorithms, because it is always unbounded in the presence of contention. Therefore, a number of different parameters are used to measure the algorithm's performance: the remote step complexity is the maximal number of steps performed by a process, where a wait is counted as one step. The system response time is the time interval between subsequent entries to the critical section, where one time unit is the minimal interval in which every a...
Constant-RMR Implementations of CAS and Other Synchronization Primitives Using Read and Write Operations
Abstract
We consider asynchronous multiprocessors where processes communicate only by reading or writing shared memory. We show how to implement consensus, all comparison primitives (such as CAS and TAS), and load-linked/store-conditional using only a constant number of remote memory references (RMRs), in both the cache-coherent and the distributed-shared-memory models of such multiprocessors. Our implementations are blocking, rather than wait-free: they ensure progress provided all processes that invoke the implemented primitive are live. Our results imply that any algorithm using read and write operations, comparison primitives, and load-linked/store-conditional, can be simulated by an algorithm that uses read and write operations only, with at most a constant blowup in RMR complexity.
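As a point of reference for the semantics being implemented, here is a hypothetical blocking compare-and-swap built from reads and writes guarded by a lock. This is not the paper's constant-RMR construction; it only illustrates the CAS interface and the blocking progress property (progress requires the lock holder to stay live).

```python
import threading

class BlockingCAS:
    # Hypothetical sketch: CAS semantics over a read/write register via a
    # lock. Blocking rather than wait-free, as in the abstract above: a
    # crashed lock holder stalls everyone. Not the paper's RMR-efficient
    # construction.
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._lock:
            old = self._value
            if old == expected:     # install new value only on a match
                self._value = new
            return old              # CAS conventionally returns the old value
```

A successful CAS returns the expected old value and installs the new one; a failed CAS returns the current value and leaves the register unchanged.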
Appendix A
Abstract
[Table 47, "Extended Reach Test Cases", appears here as flattened extraction residue: rows pairing test loops (ETSI loops, AWGN-140, T1.601 #9, shortened T1.601 #7) with attenuations, noise models, and downstream/upstream rates (e.g., 1536 kbps / 256 kbps); the column structure could not be reliably reconstructed.]
NOTE 1: A goal of future enhancements of this Recommendation is to make the "Extended Reach Cases" mandatory.
NOTE 2: Performance levels do not reflect the effect of customer premise wiring, which is expected to reduce data rate.
Draft Recommendation G.992.2, ANNEX D, D.1 System Performance for North America: All test loops specified in this section shall be used for G.992.2 and testing shall conform to the following: no power cutback on upstream transmitter; margin = 4 dB; BER = 10^-7; background noise = -140 dBm/Hz; rates, except where noted, ...