Results 11–20 of 75
Local Management of a Global Resource in a Communication Network
Proc. of IEEE FOCS, 1987
Abstract

Cited by 21 (5 self)
This paper introduces a new distributed data object called Resource Controller which provides an abstraction for managing the consumption of a global resource in a distributed system. Examples of resources that may be managed by such an object include: number of messages sent, number of nodes participating in the protocol, and total CPU time consumed. The Resource Controller object is accessed through a procedure that can be invoked at any node in the network. Before consuming a unit of resource at some node, the controlled algorithm should invoke the procedure at this node, requesting a permit to consume a unit of the resource. The procedure returns either a permit or a rejection. The key characteristics of the Resource Controller object are the constraints that it imposes on the global resource consumption. An (M, W)-Controller guarantees that the total number of permits granted is at most M; it also ensures that if a request is rejected then at least M − W permits a...
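As a toy illustration of the permit/reject interface the abstract describes, here is a minimal centralized sketch. The paper's object is distributed across network nodes; this version only mirrors the contract (class and method names are my own, and the rejection guarantee is trivial here with W = 0):

```python
class ResourceController:
    """Minimal centralized sketch of an (M, W)-Controller interface.

    At most M permits are ever granted. In the paper's distributed
    object, a rejection further implies that at least M - W permits
    were already granted; in this centralized toy that holds with
    W = 0, since rejections occur only once all M permits are out.
    """

    def __init__(self, m):
        self.m = m          # global budget M
        self.granted = 0    # permits handed out so far

    def request_permit(self):
        # The controlled algorithm calls this before consuming a unit.
        if self.granted < self.m:
            self.granted += 1
            return True     # permit
        return False        # rejection

ctrl = ResourceController(m=3)
grants = [ctrl.request_permit() for _ in range(5)]
print(grants)  # [True, True, True, False, False]
```

The distributed versions in the paper trade communication for slack W in this guarantee; the interface seen by the controlled algorithm is the same.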
Distributed computing with advice: Information sensitivity of graph coloring
In 34th International Colloquium on Automata, Languages and Programming (ICALP), 2007
Abstract

Cited by 18 (9 self)
We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice makes it possible to measure the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring.
Sublogarithmic Distributed MIS Algorithm for Sparse Graphs using Nash-Williams Decomposition
In Journal of Distributed Computing, special issue of selected papers from PODC, 2008
Abstract

Cited by 16 (2 self)
We study the distributed maximal independent set (henceforth, MIS) problem on sparse graphs. Currently, there are known algorithms with a sublogarithmic running time for this problem on oriented trees and graphs of bounded degrees. We devise the first sublogarithmic algorithm for computing MIS on graphs of bounded arboricity. This is a large family of graphs that includes graphs of bounded degree, planar graphs, graphs of bounded genus, graphs of bounded treewidth, graphs that exclude a fixed minor, and many other graphs. We also devise efficient algorithms for coloring graphs from these families. These results are achieved by the following technique that may be of independent interest. Our algorithm starts with computing a certain graph-theoretic structure, called Nash-Williams forests-decomposition. Then this structure is used to compute the MIS or coloring. Our results demonstrate that this methodology is very powerful. Finally, we show nearly-tight lower bounds on the running time of any distributed algorithm for computing a forests-decomposition.
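The forests-decomposition the abstract relies on can be sketched sequentially by the classical peeling argument (the paper's contribution is computing it distributively; this sketch and its function name are mine). A graph of arboricity at most a always contains a vertex of degree at most 2a − 1, so we can repeatedly peel such a vertex and orient its remaining edges away from it; grouping edges by out-slot index yields at most 2a forests:

```python
from collections import defaultdict

def forest_decomposition(vertices, edges, a):
    """Sequential sketch of a Nash-Williams-style forests-decomposition
    for a graph of arboricity <= a.

    Peeling a minimum-degree vertex orients <= 2a edges out of it; the
    peeling order makes the orientation acyclic, and each out-slot index
    contributes at most one out-edge per vertex, so every slot class is
    a forest.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    forests = defaultdict(list)   # slot index -> list of oriented edges
    alive = set(vertices)
    while alive:
        # A vertex of degree <= 2a exists in every subgraph.
        u = next(x for x in alive if len(adj[x]) <= 2 * a)
        for slot, v in enumerate(list(adj[u])):
            forests[slot].append((u, v))   # edge oriented u -> v
            adj[v].discard(u)
        adj[u].clear()
        alive.remove(u)
    return dict(forests)

# A path has arboricity 1, so at most 2 forests suffice.
forests = forest_decomposition([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], a=1)
```

Once each node knows its forest index and parent, MIS or coloring can proceed forest by forest, which is the structure the distributed algorithm exploits.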
Maintaining Dynamic Sequences under Equality Tests in Polylogarithmic Time
1997
Abstract

Cited by 15 (0 self)
We present a randomized and a deterministic data structure for maintaining a dynamic family of sequences under equality tests of pairs of sequences and creations of new sequences by joining or splitting existing sequences. Both data structures support equality tests in O(1) time. The randomized version supports new sequence creations in O(log^2 n) expected time, where n is the length of the sequence created. The deterministic solution supports sequence creations in O(log n (log m log* m + log n)) time for the m-th operation.
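The key idea behind an O(1) equality test is to keep a single canonical representative per distinct sequence content, so equality reduces to an identity comparison. The toy sketch below (my own, not the paper's structure) illustrates only that interface; it rebuilds naively on join/split, whereas the cited data structures achieve polylogarithmic-time join and split:

```python
class SequencePool:
    """Toy illustration of O(1) equality tests via interning.

    Every distinct sequence content maps to one canonical tuple, so
    two handles are equal sequences iff they are the same object.
    """

    def __init__(self):
        self._interned = {}

    def make(self, items):
        key = tuple(items)
        # setdefault returns the first tuple stored for this content.
        return self._interned.setdefault(key, key)

    def join(self, a, b):
        return self.make(a + b)          # naive rebuild, unlike the paper

    def split(self, s, i):
        return self.make(s[:i]), self.make(s[i:])

    @staticmethod
    def equal(a, b):
        return a is b                    # O(1) identity check

pool = SequencePool()
s1 = pool.join(pool.make("ab"), pool.make("cd"))
s2 = pool.make("abcd")
print(pool.equal(s1, s2))  # True
```

The paper's contribution is maintaining such canonical names persistently while joins and splits only touch O(polylog n) of the representation.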
Connected Components in O(log^{3/2} n) Parallel Time for the CREW PRAM
Abstract

Cited by 14 (2 self)
Finding the connected components of an undirected graph G = (V, E) on n = |V| vertices and m = |E| edges is a fundamental computational problem. The best known parallel algorithm for the CREW PRAM model runs in O(log^2 n) time using n^2 / log^2 n processors [6, 15]. For the CRCW PRAM model, in which concurrent writing is permitted, the best known algorithm runs in O(log n) time using slightly more than (n + m)/log n processors [26, 9, 5]. Simulating this algorithm on the weaker CREW model increases its running time to O(log^2 n) [10, 19, 29]. We present here a simple algorithm that runs in O(log^{3/2} n) time using n + m CREW processors. Finding an o(log^2 n) parallel connectivity algorithm for this model was an open problem for many years.

1 Introduction

Let G = (V, E) be an undirected graph on n = |V| vertices and m = |E| edges. A path p of length k is a sequence of edges (e_1, ..., e_i, ..., e_k) such that e_i ∈ E for i = 1, ...
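For reference, here is a sequential baseline for the problem these PRAM algorithms parallelize: label every vertex with a component representative using union-find with path halving (near-linear time). This is only a sketch of the problem statement, not the CREW algorithm itself:

```python
def connected_components(n, edges):
    """Label vertices 0..n-1 with a representative of their component.

    Union-find with path halving; each edge merges its endpoints'
    components, and a final find pass canonicalizes the labels.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # merge components

    return [find(v) for v in range(n)]

labels = connected_components(5, [(0, 1), (1, 2), (3, 4)])
print(len(set(labels)))  # 2
```

The parallel algorithms replace this inherently sequential merging with rounds of hooking and pointer jumping, which is where the O(log^{3/2} n) CREW bound comes from.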
A Distributed Algorithm to Find k-dominating Sets
1999
Abstract

Cited by 12 (0 self)
We consider a connected undirected graph G(n, m) with n nodes and m edges. A k-dominating set D in G is a set of nodes having the property that every node in G is at most k edges away from at least one node in D.
Deploying Wireless Networks with Beeps
Abstract

Cited by 11 (1 self)
We present the discrete beeping communication model, which assumes nodes have minimal knowledge about their environment and severely limited communication capabilities. Specifically, nodes have no information regarding the local or global structure of the network, do not have access to synchronized clocks, and are woken up by an adversary. Moreover, instead of communicating through messages they rely solely on carrier sensing to exchange information. This model is interesting from a practical point of view, because it is possible to implement it (or emulate it) even in extremely restricted radio network environments. From a theory point of view, it shows that complex problems (such as vertex coloring) can be solved efficiently even without strong assumptions on properties of the communication model. We study the problem of interval coloring, a variant of vertex coloring specially suited for the studied beeping model. Given a set of resources, the goal of interval coloring is to assign every node a large contiguous fraction of the resources, such that neighboring nodes have disjoint resources. A k-interval coloring is one where every node gets at least a 1/k fraction of the resources. To highlight the importance of the discreteness of the model, we contrast it against a continuous variant described in [17]. We present an O(1) time algorithm that with probability 1 produces an O(∆)-interval coloring. This improves on an O(log n) time algorithm with the same guarantees presented in [17], and accentuates the unrealistic assumptions of the continuous model. Under the more realistic discrete model, we present a Las Vegas algorithm that solves O(∆)-interval coloring in O(log n) time with high probability, and describe how to adapt the algorithm for dynamic networks where nodes may join or leave. For constant degree graphs we prove a lower bound of Ω(log n) on the time required to solve interval coloring for this model against randomized algorithms. This lower bound implies that our algorithm is asymptotically optimal for constant degree graphs.
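The k-interval coloring objective defined above can be captured by a small validity checker: every node must hold a contiguous interval of at least a 1/k fraction of the resources, and neighbors' intervals must be disjoint. This is a toy checker of the definition (names and the [0, 1) resource convention are my own assumptions), not any of the beeping algorithms:

```python
def is_k_interval_coloring(adj, intervals, k, resources=1.0):
    """Check a k-interval coloring: each node v holds a half-open
    interval intervals[v] = (start, end) within [0, resources) of
    length >= resources/k, and neighbors' intervals are disjoint."""
    for v, (s, e) in intervals.items():
        if e - s < resources / k:
            return False                    # share smaller than 1/k
    for v in adj:
        for w in adj[v]:
            s1, e1 = intervals[v]
            s2, e2 = intervals[w]
            if max(s1, s2) < min(e1, e2):   # intervals overlap
                return False
    return True

# Triangle: three mutually adjacent nodes, each taking ~1/3 >= 1/4.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
ok = {0: (0.0, 0.3), 1: (0.35, 0.65), 2: (0.7, 1.0)}
print(is_k_interval_coloring(triangle, ok, k=4))  # True
```

In the beeping model, nodes must converge to such an assignment using only carrier sensing, which is what makes the O(1)- and O(log n)-time results nontrivial.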
Efficient computation of implicit representations of sparse graphs
Discrete Applied Mathematics, 1997
Abstract

Cited by 11 (0 self)
The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on n vertices using O(n) space such that vertex adjacency is tested in O(1) time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in O(n) time. The parallel algorithm runs in O(log n) time using O(n/log n) CRCW PRAM processors, or in O(log n log n) time using O(n/(log n log ...
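A standard way to get such a representation for sparse graphs is to orient the edges so that every vertex stores at most a constant number c of out-neighbors (obtained by repeatedly peeling a vertex of degree at most c, which exists in sufficiently sparse graphs); adjacency is then tested by scanning two lists of size at most c. A sequential sketch under that assumption (function names are mine):

```python
from collections import defaultdict

def orient_sparse(vertices, edges, c):
    """Store each edge at exactly one endpoint so that every vertex
    keeps <= c out-neighbors: repeatedly peel a vertex of current
    degree <= c and record its remaining neighbors as its out-list.
    Total space is O(n) for constant c."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = {v: [] for v in vertices}
    alive = set(vertices)
    while alive:
        u = next(x for x in alive if len(adj[x]) <= c)
        out[u] = list(adj[u])          # <= c out-neighbors
        for v in adj[u]:
            adj[v].discard(u)
        adj[u].clear()
        alive.remove(u)
    return out

def adjacent(out, u, v):
    # Two scans of lists of length <= c: O(1) time for constant c.
    return v in out[u] or u in out[v]

out = orient_sparse([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], c=1)
print(adjacent(out, 1, 2), adjacent(out, 0, 3))  # True False
```

The cited paper's contribution is constructing this kind of representation in optimal sequential and parallel time; the query side is exactly the two-list scan above.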
A Faster Distributed Algorithm for Computing Maximal Matchings Deterministically (Extended Abstract)
1999
Abstract

Cited by 10 (2 self)
Michał Hańćkowiak, Dept of Math and CS, Adam Mickiewicz University, Poznań, Poland; Michał Karoński, Dept of Math and CS, Adam Mickiewicz University, Poznań, Poland & Dept of Math and CS, Emory University, Atlanta, Georgia, USA; Alessandro Panconesi, Dept of CS, University of Bologna, Bologna, Italy.

Abstract: We show that maximal matchings can be computed deterministically in O(log^4 n) rounds in the synchronous, message-passing model of computation. This improves on an earlier result by three log-factors.

1 Introduction

In this paper we show that maximal matchings (MMs) can be computed deterministically in O(log^4 n) rounds in the synchronous, message-passing model of computation. This improves substantially on an earlier result by the present authors, which shows that MMs can be computed in O(log^7 n) many rounds [9]. This rather substantial improvement in asymptotics is based on several new algorithmic ideas that, we hope, might prove useful in other conte...
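For contrast with the distributed O(log^4 n)-round result, the sequential version of the problem is solved by a one-pass greedy scan: take any edge whose endpoints are both still unmatched. The result is maximal (no remaining edge can be added) though not necessarily maximum:

```python
def greedy_maximal_matching(edges):
    """Sequential greedy maximal matching: scan the edges once and keep
    every edge whose endpoints are both unmatched. Any skipped edge has
    a matched endpoint, so the output is maximal."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# 4-cycle 0-1-2-3-0: greedy picks two disjoint edges.
m = greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (3, 0)])
print(m)  # [(0, 1), (2, 3)]
```

The distributed difficulty is that this scan is inherently sequential; the paper's contribution is a deterministic way for all nodes to make consistent choices in polylogarithmically many synchronous rounds.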