Results 1–10 of 16
Probabilistic Algorithms for the Wakeup Problem in Single-Hop Radio Networks
In Proceedings of the 13th Annual International Symposium on Algorithms and Computation (ISAAC), 2002
Abstract

Cited by 54 (0 self)
We consider the problem of waking up n processors in a completely broadcast system. We analyze this problem in both globally and locally synchronous models, with or without n being known to the processors and with or without labeling of the processors. The main question we answer is: how fast can we wake all the processors up with probability 1 − ε in each of these eight models. In [11] a logarithmic waking algorithm for the strongest set of assumptions is described, while for weaker models only linear and quadratic algorithms were obtained. We prove that in the weakest model (local synchronization, no knowledge of n or labeling) the best waking time is O(n / log n). We also show logarithmic or polylogarithmic waking algorithms for all stronger models, which in some cases gives an exponential improvement over previous results.
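As a concrete illustration of the model described in this abstract (not code from the paper): in a single-hop broadcast system a round wakes everyone only if exactly one processor transmits, so when n is known a natural randomized strategy is to transmit with probability 1/n each round. A minimal simulation sketch; the function name and parameter choices are ours:

```python
import random

def wakeup_rounds(n, p=None, rng=random):
    """Simulate awake processors on a single-hop channel: each round every
    processor transmits independently with probability p, and a round
    succeeds (the single transmitter is heard by all) iff exactly one
    processor transmits.  Returns the number of rounds until success."""
    if p is None:
        p = 1.0 / n  # with p = 1/n, Pr[exactly one transmitter] tends to 1/e
    rounds = 0
    while True:
        rounds += 1
        transmitters = sum(1 for _ in range(n) if rng.random() < p)
        if transmitters == 1:
            return rounds
```

Since each round succeeds with constant probability (about 1/e for p = 1/n), O(log(1/ε)) rounds suffice for success probability 1 − ε, which is the flavor of the logarithmic bounds mentioned above; the sublinear bound for the weakest model requires a more careful schedule than this sketch.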
The wakeup problem in synchronous broadcast systems (Extended Abstract)
2000
Abstract

Cited by 50 (9 self)
This paper studies the differences between two levels of synchronization in a distributed broadcast system (or a multiple-access channel). In the globally synchronous model, all processors have access to a global clock. In the locally synchronous model, processors have local clocks ticking at the same rate, but each clock starts individually, when the processor wakes up. We consider the fundamental problem of waking up all n processors of a completely connected broadcast system. Some processors wake up spontaneously, while others have to be woken up. Only awake processors can...
Hundreds of Impossibility Results for Distributed Computing
Distributed Computing, 2003
Abstract

Cited by 44 (4 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
A Better Wakeup in Radio Networks
2004
Abstract

Cited by 25 (3 self)
We present an improved algorithm to wake up a multi-hop ad hoc radio network. The goal is to have all the nodes activated, when some of them may wake up spontaneously at arbitrary times and the remaining nodes need to be awoken by the already active ones. The best previously known wakeup algorithm was given by Chrobak, Gąsieniec and Kowalski [11], and operated in time O(n^(5/3) log n), where n is the number of nodes. We give an algorithm with the running time O(n^(3/2) log n). This also yields better algorithms for other synchronization-type primitives, like leader election and local-clocks synchronization, each with a time performance that differs from that of wakeup by an extra factor of O(log n) only, and improves the best previously known method for the problem by a factor of n^(1/6). A wakeup algorithm is a schedule of transmissions for each node. It can be represented as a collection of binary sequences. Useful properties of such collections have been abstracted to define a (radio) synchronizer. It has been known that good radio synchronizers exist, and previous algorithms [17, 11] relied on this. We show how to construct such synchronizers in polynomial time, from suitable constructible expanders. As an application, we obtain a wakeup protocol for a multiple-access channel that activates the network in time O(k^2 polylog n), where k is the number of stations that wake up spontaneously, and which can be found in time polynomial in n. We extend the notion of synchronizers to universal synchronizers. We show that there exist universal synchronizers with parameters that guarantee time O(n^(3/2) log n) of wakeup.
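The schedule-as-binary-sequences view in this abstract can be made concrete with a small checker (a hypothetical helper of ours, not the paper's construction): given each node's 0/1 transmission sequence and the round at which its local clock starts, find the first round in which exactly one active node transmits.

```python
def first_isolated_transmission(schedules, start_times, horizon):
    """schedules: dict node -> 0/1 list (the node's transmission schedule,
    repeated cyclically); start_times: dict node -> global round at which
    the node wakes up and its local clock starts.  Returns the first
    global round with exactly one transmitter (a successful broadcast),
    or None if none occurs within `horizon` rounds."""
    for t in range(horizon):
        transmitters = [
            v for v, seq in schedules.items()
            if t >= start_times[v] and seq[(t - start_times[v]) % len(seq)]
        ]
        if len(transmitters) == 1:
            return t
    return None
```

Informally, a (radio) synchronizer is a family of such sequences for which an isolated transmission is guaranteed quickly no matter how the adversary chooses the start times.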
Computing in Totally Anonymous Asynchronous Shared Memory Systems (Extended Abstract)
Information and Computation, 2002
Abstract

Cited by 24 (1 self)
In the totally anonymous shared memory model of asynchronous distributed computing, processes have no ids and run identical programs. Moreover, processes have an identical interface to the shared memory, and in particular, there are no single-writer registers. This paper assumes that processes do not fail, and the shared memory consists only of read/write registers, which are initialized to some default value. A complete characterization of the functions and relations that can be computed within this model is presented. The consensus problem is an important relation which can be computed. Unlike functions, which can be computed with two registers, the consensus protocol uses a linear number of shared registers and rounds. The paper proves logarithmic lower bounds on the number of registers and rounds needed for solving consensus in this model, indicating the d...
A Time Complexity Lower Bound for Randomized Implementations of Some Shared Objects
In Symposium on Principles of Distributed Computing, 1998
Abstract

Cited by 22 (1 self)
Many recent wait-free implementations are based on a shared memory that supports a pair of synchronization operations, known as LL and SC. In this paper, we establish an intrinsic performance limitation of these operations: even the simple wakeup problem [16], which requires some process to detect that all n processes are up, cannot be solved unless some process performs Ω(log n) shared-memory operations. Using this basic result, we derive an Ω(log n) lower bound on the worst-case shared-access time complexity of n-process implementations of several types of objects, including fetch&increment, fetch&multiply, fetch&and, queue, and stack. (The worst-case shared-access time complexity of an implementation is the number of shared-memory operations that a process performs, in the worst case, in order to complete a single operation on the implementation.) Our lower bound is strong in several ways: it holds even if (1) the shared memory has an infinite number of words, each of unbounded size, (2) sh...
Computing with Faulty Shared Objects
1995
Abstract

Cited by 19 (0 self)
This paper investigates the effects of the failure of shared objects on distributed systems. First the notion of a faulty shared object is introduced. Then upper and lower bounds on the space complexity of implementing reliable shared objects are provided.
Contention-free Complexity of Shared Memory Algorithms
Information and Computation, 1994
Abstract

Cited by 10 (2 self)
Worst-case time complexity is a measure of the maximum time needed to solve a problem over all runs. Contention-free time complexity indicates the maximum time needed when a process executes by itself, without competition from other processes. Since contention is rare in well-designed systems, it is important to design algorithms which perform well in the absence of contention. We study the contention-free time complexity of shared memory algorithms using two measures: step complexity, which counts the number of accesses to shared registers; and register complexity, which measures the number of different registers accessed. Depending on the system architecture, one of the two measures more accurately reflects the elapsed time. We provide lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step. We also present bo...
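The two measures in this abstract are easy to instrument. Below is a toy shared memory (our own illustration, not from the paper) that records, for a process running alone, the step complexity (total register accesses) and the register complexity (distinct registers accessed):

```python
class CountingMemory:
    """Read/write register memory that counts accesses for one process
    running alone (a contention-free run)."""

    def __init__(self):
        self.regs = {}          # register name -> value
        self.steps = 0          # step complexity: total accesses
        self.touched = set()    # register complexity: distinct registers

    def read(self, r):
        self.steps += 1
        self.touched.add(r)
        return self.regs.get(r, 0)

    def write(self, r, v):
        self.steps += 1
        self.touched.add(r)
        self.regs[r] = v


# A trivial solo run: write a flag, then check it.
mem = CountingMemory()
mem.write("flag", 1)
assert mem.read("flag") == 1
# This run has step complexity 2 and register complexity 1.
```

Depending on whether the dominant cost is per access or per distinct location, one or the other count better tracks elapsed time, as the abstract notes.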
Optimal Scheduling for Disconnected Cooperation
2001
Abstract

Cited by 8 (3 self)
We consider a distributed environment consisting of n processors that need to perform t tasks. We assume that communication is initially unavailable and that processors begin work in isolation. At some unknown point of time an unknown collection of processors may establish communication. Before processors begin communication they execute tasks in the order given by their schedules. Our goal is to schedule the work of isolated processors so that when communication is established for the first time, the number of redundantly executed tasks is controlled. We quantify worst-case redundancy as a function of processor advancements through their schedules. In this work we refine and simplify an extant deterministic construction for schedules with n ≤ t, and we develop a new analysis of its waste. The new analysis shows that for any pair of schedules, the number of redundant tasks can be controlled for the entire range of t tasks. Our new result is asymptotically optimal: the tails of these schedules are within a 1 + O(n^(-1/4)) factor of the lower bound. We also present two new deterministic constructions, one for t ≥ n, and the other for t ≥ n^(3/2), which substantially improve pairwise waste for all prefixes of length t/√n, and offer near-optimal waste for the tails of the schedules. Finally, we present bounds for the waste of any collection of k ≥ 2 processors for both deterministic and randomized constructions.
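The notion of waste can be stated very directly: if two schedules are permutations of the same t tasks and the processors have completed prefixes of lengths a and b when they first communicate, the redundant work is the overlap of those prefixes. A one-function sketch (ours, not the paper's construction):

```python
def pairwise_waste(sched_a, sched_b, done_a, done_b):
    """Number of tasks executed by both processors before first contact,
    where each schedule is a permutation of the same task set and
    done_a / done_b are how far each processor has advanced."""
    return len(set(sched_a[:done_a]) & set(sched_b[:done_b]))
```

For example, with reversed schedules of 4 tasks, prefixes of length 2 are disjoint (waste 0), while prefixes of length 3 necessarily overlap in 2 tasks; the constructions discussed above aim to keep such overlaps near the lower bound for all prefix lengths simultaneously.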
Wait-free Consensus in "In-phase" Multiprocessor Systems
1995
Abstract

Cited by 6 (3 self)
In the consensus problem in a system with n processes, each process starts with a private input value and has to choose irrevocably a decision value, which was the input value of some process of the system; moreover, all processes have to decide on the same value. This work deals with the problem of wait-free consensus of n processes, fully resilient to processor crash and napping failures, in an "in-phase" multiprocessor system. It proves the existence of a solution to the problem in this system by presenting a protocol which ensures that each process will reach a decision within at most n(n − 3)/2 + 3 steps of its own in the worst case, or within n steps if no process fails.