Results 1 – 10 of 12
Dynamic Networks are as fast as static networks
In 29th Annual Symposium on Foundations of Computer Science, 1988
Cited by 25 (8 self)

Abstract
This paper gives an efficient simulation to show that dynamic networks are as fast as static ones up to a constant multiplicative factor. That is, any task can be performed in a dynamic asynchronous network essentially as fast as in a static synchronous network. The simulation protocol is based on a new approach, perceiving "locality" as the key to fast adaptation to changes in network topology. The heart of our simulation is a new technique, called a dynamic synchronizer, which achieves "local" simulation of a global "clock" in a dynamic asynchronous network. Using this result we obtain improved solutions to a number of well-known problems on dynamic networks. It can also be used to improve the solution to certain static network problems. 1 Introduction The dynamic asynchronous network, where links may repeatedly fail and recover, is a realistic model of existing commercial communication networks, such as the ARPANET [23]. Design and analysis of protocols for such networks is much more...
The Local Detection Paradigm and its Applications to Self-Stabilization
Cited by 24 (8 self)

Abstract
A new paradigm for the design of self-stabilizing distributed algorithms, called local detection, is introduced. The essence of the paradigm is in defining a local condition based on the state of a processor and its immediate neighborhood, such that the system is in a globally legal state if and only if the local condition is satisfied at all the nodes. In this work we also extend the model of self-stabilizing networks, traditionally assuming memory failure, to include the model of dynamic networks (assuming edge failures and recoveries). We apply the paradigm to the extended model, which we call "dynamic self-stabilizing networks." Without loss of generality, we present the results in the least restrictive shared memory model of read/write atomicity, to which end we construct basic information transfer primitives. Using local detection, we develop deterministic and randomized self-stabilizing algorithms that maintain a rooted spanning tree in a general network whose topology changes dynamically. The deterministic algorithm assumes unique identities while the randomized one assumes an anonymous network. The algorithms use a constant number of memory words per edge in each node; the size of both memory words and messages is the number of bits necessary to represent a node identity (typically O(log n) bits, where n is the size of the network). These algorithms provide for the easy construction of self-stabilizing protocols for numerous tasks: reset, routing, topology update, and self-stabilization transformers that automatically self-stabilize existing protocols for which local detection conditions can be defined.
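The local detection idea — a per-node predicate over a node's own state and its immediate neighborhood whose conjunction characterizes global legality — can be illustrated on a toy instance. The predicate below (parent pointers plus distance labels encoding a rooted spanning tree) is an illustrative assumption for the sketch, not the paper's construction:

```python
# Illustrative sketch of local detection (an assumption for illustration,
# not the paper's exact construction). Each node stores a parent pointer
# and a distance label; the configuration encodes a tree rooted at `root`
# if and only if every node's LOCAL predicate holds.

def locally_legal(v, root, parent, dist, neighbors):
    """Check node v using only its own state and its neighborhood."""
    if v == root:
        return parent[v] is None and dist[v] == 0
    p = parent[v]
    return p in neighbors[v] and dist[v] == dist[p] + 1

def globally_legal(root, parent, dist, neighbors):
    # Global legality is exactly the conjunction of the local conditions:
    # strictly increasing distances rule out cycles among parent pointers.
    return all(locally_legal(v, root, parent, dist, neighbors)
               for v in neighbors)

neighbors = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
parent = {0: None, 1: 0, 2: 0}
dist = {0: 0, 1: 1, 2: 1}
assert globally_legal(0, parent, dist, neighbors)

# Corrupt one node's state: detection is local to that node.
dist[2] = 5
assert not locally_legal(2, 0, parent, dist, neighbors)
```

The point of the paradigm is visible in the last two lines: an illegal global state is witnessed by a violated condition at some single node, so detection never needs global communication.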
Optimal Maintenance of Replicated Information
In Proc. 31st IEEE Symp. on Foundations of Computer Science, 1993
Cited by 14 (8 self)

Abstract
"Those who cannot remember the past are condemned to repeat it." (Philosopher George Santayana) In this paper we show that keeping track of history enables significant improvements in the communication complexity of dynamic network protocols. We improve the communication complexity for solving any graph problem from Θ(E) to Θ(V), thus achieving the lower bound. Moreover, O(V) is also our amortized complexity of solving any function (not only graph functions) defined on the local inputs of the nodes. This proves, for the first time, that the amortized communication complexity, i.e. the incremental cost of adapting to a single topology change, can be smaller than the communication complexity of solving the problem from scratch. This also has practical importance: in real networks the topology and the local inputs of the nodes change.
A Time Optimal Self-Stabilizing Synchronizer Using A Phase Clock
, 2006
Cited by 5 (0 self)

Abstract
A synchronizer with a phase counter (sometimes called an asynchronous phase clock) is an asynchronous distributed algorithm, where each node maintains a local ‘pulse counter’ that simulates the global clock in a synchronous network. In this paper we present a time-optimal self-stabilizing scheme for such a synchronizer, assuming unbounded counters. We give a simple rule by which each node can compute its pulse number as a function of its neighbors’ pulse numbers. We also show that some of the popular correction functions for phase clock synchronization are not self-stabilizing in asynchronous networks. Using our rule, the counters stabilize in time bounded by the diameter of the network, without invoking global operations. We argue that the use of unbounded counters can be justified by the availability of memory for counters that are large enough to be practically unbounded, and by the existence of reset protocols that can be used to restart the counters in the rare cases where faults make this necessary.
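The abstract describes a rule by which each node recomputes its pulse number from its neighbors' pulse numbers but does not state the rule itself; the min-plus-one correction function below is a common example of such a rule and is an assumption for illustration only:

```python
# Illustrative sketch of a phase-clock correction rule of the kind the
# abstract describes. The paper's exact correction function is not given
# in the snippet above; this min-plus-one rule is an assumed example.

def next_pulse(own_pulse, neighbor_pulses):
    """A node never runs more than one pulse ahead of its slowest neighbor."""
    if not neighbor_pulses:
        return own_pulse + 1
    return min(min(neighbor_pulses) + 1, own_pulse + 1)

# A tiny synchronous simulation on a path of three nodes with arbitrary
# (faulty) initial pulses; repeated application of the rule brings
# neighboring counters within one pulse of each other.
pulses = [7, 0, 3]
edges = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(10):
    pulses = [next_pulse(pulses[v], [pulses[u] for u in edges[v]])
              for v in range(3)]
assert all(abs(pulses[u] - pulses[v]) <= 1
           for v in edges for u in edges[v])
```

The simulation above is synchronous for simplicity; the paper's point is precisely that the choice of correction function matters in the asynchronous setting, where some popular rules fail to self-stabilize.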
Maintenance of a Spanning Tree in Dynamic Networks
In Proceedings of the 13th International Symposium on Distributed Computing (DISC’99), 1999
Cited by 3 (1 self)

Abstract
Abstract. Many crucial network tasks such as database maintenance can be efficiently carried out given a tree that spans the network. By maintaining such a spanning tree, rather than constructing it "from scratch" after every topology change, one can improve the efficiency of the tree construction, as well as the efficiency of the protocols that use the tree. We present a protocol for this task whose communication complexity is linear in the "actual" size of the biggest connected component. The time complexity of our protocol has only a polylogarithmic overhead in the "actual" size of the biggest connected component. The communication complexity of the previous solution, which was considered communication optimal, was linear in the network size, that is, unbounded as a function of the "actual" size of the biggest connected component. The overhead in the time measure of the previous solution was polynomial in the network size. In an asynchronous network it may not be clear what the "actual" size of the connected component at a given time means. To capture this notion we define the virtual component and show that in asynchronous networks, in a sense, the notion of the virtual component is the closest one can get to the notion of the "actual" component.
Optimal Maintenance of a Spanning Tree
, 2008
Cited by 2 (0 self)

Abstract
“Those who cannot remember the past are condemned to repeat it.” (George Santayana) In this paper, we show that keeping track of history enables significant improvements in the communication complexity of dynamic network protocols. We present a communication-optimal maintenance of a spanning tree in a dynamic network. The amortized (over the number of topological changes) message complexity is O(V), where V is the number of nodes in the network. The message size used by the algorithm is O(log |ID|), where |ID| is the size of the name space of the nodes. Typically, log |ID| = O(log V). Previous algorithms that adapt to dynamic networks involved Ω(E) messages per topological change, inherently paying for recomputation of the tree from scratch. Spanning trees are essential components in many distributed algorithms. Some examples include broadcast (dissemination of messages to all network nodes), multicast, reset (general adaptation of static algorithms to dynamic networks), routing, termination detection, and more. Thus, our efficient maintenance of a spanning tree implies the improvement of algorithms for these tasks. Our results are obtained using a novel technique to save communication. A node uses information received in the past in order to deduce present information from the fact that certain messages were NOT sent by the node's neighbor. This technique is one of our main contributions.
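The technique of deducing information from messages that were not sent can be illustrated with a minimal sketch, under an assumption the abstract does not spell out: the neighbor transmits its value only when the value changes, so silence in a round tells the receiver the value is unchanged. This is an illustration of the general idea, not the paper's protocol:

```python
# Illustrative sketch (an assumption, not the paper's protocol) of saving
# communication by exploiting messages NOT sent: a neighbor transmits its
# value only when it changes, so silence carries information.

def sender_round(value, last_sent):
    """Return the message to send this round, or None to stay silent."""
    return value if value != last_sent else None

def receiver_round(message, believed):
    """Silence (None) lets the receiver keep its current belief."""
    return believed if message is None else message

values = [3, 3, 3, 7, 7, 2]          # neighbor's value over six rounds
last_sent, believed, sent_count = None, None, 0
for v in values:
    msg = sender_round(v, last_sent)
    if msg is not None:
        last_sent, sent_count = msg, sent_count + 1
    believed = receiver_round(msg, believed)
    assert believed == v             # receiver tracks the value exactly

assert sent_count == 3               # only the changes (3, 7, 2) were sent
```

Six rounds of tracking cost only three messages here; in the amortized setting of the paper, the analogous savings are what reduce the per-change cost from Ω(E) to O(V).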
On Distributed Verification
, 2006
Cited by 1 (0 self)

Abstract
This paper describes the invited talk given at the 8th International
Time Optimal Self-Stabilizing Synchronization (Extended Abstract)
In Proc. 25th Annual ACM Symposium on the Theory of Computing, 1993
Cited by 1 (0 self)

Abstract
In this paper we present a time optimal self-stabilizing scheme for network synchronization. Our construction has two parts. First, we give a simple rule by which each node can compute its pulse number as a function of its neighbors' pulse numbers. The rule we give stabilizes in time bounded by the diameter of the network, does not invoke global operations, and does not require any additional memory space. However, it assumes that pulse numbers may grow unboundedly. The second part of the construction (which is of independent interest in its own right) takes care of this problem. Specifically, we present...