Results 11–20 of 49
GRAPH SEARCHING WITH ADVICE
, 2007
"... Fraigniaud et al. (2006) introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper ..."
Abstract

Cited by 12 (5 self)
 Add to MetaCart
Fraigniaud et al. (2006) introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper deals with the number of bits of advice required to perform efficiently the graph searching problem in a distributed setting. In this variant of the problem, all searchers are initially placed at a particular node of the network. The aim of the team of searchers is to capture an invisible and arbitrarily fast fugitive in a monotone connected way, i.e., the cleared part of the graph is permanently connected, and never decreases while the search strategy is executed. We show that the minimum number of bits of advice permitting the monotone connected clearing of a network in a distributed setting is O(n log n), where n is the number of nodes of the network, and this bound is tight. More precisely, we first provide a labelling of the vertices of any graph G, using a total of O(n log n) bits, and a protocol using this labelling that enables clearing G in a monotone connected distributed way. Then, we show that this number of bits of advice is almost optimal: no protocol using an oracle providing o(n log n) bits of advice permits the monotone connected clearing of a network using the smallest number of searchers.
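The O(n log n) bound above is just the cost of giving every node one Θ(log n)-bit label; a few lines make the accounting concrete (the function name is ours, not the paper's):

```python
import math

def advice_bits(n: int) -> int:
    """Total advice if every node stores one ceil(log2 n)-bit label.

    This mirrors the accounting in the abstract: a labelling that
    assigns each of the n nodes a label of ceil(log2 n) bits uses
    n * ceil(log2 n) = O(n log n) bits in total.
    """
    return n * math.ceil(math.log2(n))
```

For instance, a 1024-node network needs only 10 bits per label, so 10240 bits of advice in total.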
Distributed computing with adaptive heuristics
 In Proceedings of Innovations in Computer Science (ICS)
, 2011
"... Abstract: We use ideas from distributed computing to study dynamic environments in which computational nodes, or decision makers, follow adaptive heuristics [16], i.e., simple and unsophisticated rules of behavior, e.g., repeatedly “best replying ” to others ’ actions, and minimizing “regret”, that ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
(Show Context)
Abstract: We use ideas from distributed computing to study dynamic environments in which computational nodes, or decision makers, follow adaptive heuristics [16], i.e., simple and unsophisticated rules of behavior, e.g., repeatedly “best replying” to others’ actions, and minimizing “regret”, that have been extensively studied in game theory and economics. We explore when convergence of such simple dynamics to an equilibrium is guaranteed in asynchronous computational environments, where nodes can act at any time. Our research agenda, distributed computing with adaptive heuristics, lies on the borderline of computer science (including distributed computing and learning) and game theory (including game dynamics and adaptive heuristics). We exhibit a general nontermination result for a broad class of heuristics with bounded recall—that is, simple rules of behavior that depend only on recent history of interaction between nodes. We consider implications of our result across a wide variety of interesting and timely applications: game theory, circuit design, social networks, routing and congestion control. We also study the computational and communication complexity of asynchronous dynamics and present some basic observations regarding the effects of asynchrony on no-regret dynamics. We believe that our work opens a new avenue for research in both distributed computing and game theory.
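The “best replying” dynamics the abstract refers to can be sketched for a two-player game. This toy version is synchronous, whereas the paper's point is precisely that asynchrony can break such convergence; all names here are illustrative:

```python
def best_response_dynamics(payoffs, start, max_rounds=100):
    """Repeated best-reply play in a two-player game (a sketch).

    payoffs[i][a][b] is player i's payoff when player 0 plays a and
    player 1 plays b.  Players alternate, each switching to a best
    reply against the other's current action; we stop at a fixed
    point (a pure Nash equilibrium) or give up after max_rounds.
    """
    a, b = start
    for _ in range(max_rounds):
        new_a = max(range(len(payoffs[0])), key=lambda x: payoffs[0][x][b])
        new_b = max(range(len(payoffs[1][new_a])), key=lambda y: payoffs[1][new_a][y])
        if (new_a, new_b) == (a, b):
            return (a, b)          # no player wants to deviate
        a, b = new_a, new_b
    return None                    # dynamics did not converge

# Coordination game: both players prefer matching actions.
coord = [[[1, 0], [0, 1]],         # player 0's payoffs
         [[1, 0], [0, 1]]]         # player 1's payoffs
```

Starting from a mismatched profile such as (0, 1), these dynamics settle on a matching equilibrium within one round of best replies.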
Communication Algorithms with Advice
, 2009
"... We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, reg ..."
Abstract

Cited by 11 (8 self)
 Add to MetaCart
We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting wakeup with a linear number of messages in an n-node network is Θ(n log n), while broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) permits broadcasting with a linear number of messages. Thus an
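As a toy illustration of advice enabling message-optimal broadcast: if the advice encodes a spanning tree as parent pointers, flooding down that tree uses exactly n − 1 messages. A sketch under that assumption (this is not the paper's actual advice scheme):

```python
def broadcast_message_count(parent):
    """Count messages when each non-source node is informed exactly once.

    The 'advice' is assumed to be a spanning tree given as parent
    pointers (parent[v] for each non-source node v); the source floods
    down the tree, so one message crosses each tree edge and the total
    is n - 1, linear in the number of nodes.
    """
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    # The source is the unique node that appears only as a parent.
    source = next(v for v in set(parent.values()) if v not in parent)
    msgs, frontier = 0, [source]
    while frontier:                     # BFS down the tree
        nxt = []
        for u in frontier:
            for c in children.get(u, []):
                msgs += 1               # one message per tree edge
                nxt.append(c)
        frontier = nxt
    return msgs
```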
Tradeoffs between the size of advice and broadcasting time in trees
 In SPAA
, 2008
"... We study the problem of the amount of information required to perform fast broadcasting in tree networks. The source located at the root of a tree has to disseminate a message to all nodes. In each round each informed node can transmit to one child. Nodes do not know the topology of the tree but an ..."
Abstract

Cited by 9 (7 self)
 Add to MetaCart
We study the problem of the amount of information required to perform fast broadcasting in tree networks. The source, located at the root of a tree, has to disseminate a message to all nodes. In each round each informed node can transmit to one child. Nodes do not know the topology of the tree, but an oracle knowing it can give a string of bits of advice to the source, which can then pass it down the tree with the source message. The quality of a broadcasting algorithm with advice is measured by its competitive ratio: the worst-case ratio, taken over n-node trees, between the time of this algorithm and the optimal broadcasting time in the given tree. Our goal is to find a tradeoff between the size of advice and the best competitive ratio of a broadcasting algorithm for n-node trees. We establish such a tradeoff with an approximation factor of O(n^ε), for an arbitrarily small
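The one-child-per-round model above admits a simple recursion for the optimal broadcasting time of a known tree, which is the benchmark the competitive ratio is measured against. A sketch (the function name is ours):

```python
def opt_broadcast_time(tree, root):
    """Minimum broadcasting time in the one-child-per-round model.

    tree maps each node to the list of its children.  A standard
    exchange argument shows an informed node should serve children in
    non-increasing order of their own broadcast times, which gives
    time(v) = max_i (i + t_i) over children ranked i = 1, 2, ...
    with t_1 >= t_2 >= ... their subtree broadcast times.
    """
    ts = sorted((opt_broadcast_time(tree, c) for c in tree.get(root, [])),
                reverse=True)
    return max((i + 1 + t for i, t in enumerate(ts)), default=0)
```

On a path of three nodes the time is 2, while a star with three leaves needs 3 rounds, since the root informs one child per round.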
Local Algorithms for Dominating and Connected Dominating Sets of Unit Disk Graphs
, 2007
"... Many protocols in distributed computing make use of dominating and connected dominating sets, for example for broadcasting and the computation of routing. Ad hoc networks impose an additional requirement that algorithms for the construction of such sets should be local in the sense that each node of ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
Many protocols in distributed computing make use of dominating and connected dominating sets, for example for broadcasting and the computation of routing. Ad hoc networks impose an additional requirement: algorithms for the construction of such sets should be local, in the sense that each node of the network should make decisions based only on information obtained from nodes located a constant (independent of the size of the network) number of steps away from it. The focus of the present paper is on providing local, constant-approximation, deterministic algorithms for the construction of dominating and connected dominating sets of a Unit Disk Graph (UDG) with location-aware nodes (i.e., nodes that know their coordinates in the plane). The size of the constructed set, in the case of the dominating set, is shown to be 5 times the optimal, while for the connected dominating set it is 7.453 + ε times the optimal, for any arbitrarily small ε > 0. These are the first local algorithms in the scientific literature whose time
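A much cruder construction than the paper's conveys the location-aware, local idea: nodes that know their coordinates can elect one dominator per grid cell of diameter 1, with no communication beyond the cell. This is only an illustrative sketch, not the paper's 5-approximation:

```python
import math

def grid_dominating_set(points):
    """Location-aware dominating set via a fixed grid (a sketch).

    Cells of side 1/sqrt(2) have diameter 1, so in a unit disk graph
    (unit transmission radius) any node is adjacent to every other
    node in its own cell.  Picking one node per non-empty cell
    therefore yields a dominating set, decided purely from each
    node's own coordinates.
    """
    side = 1 / math.sqrt(2)
    chosen = {}
    for (x, y) in points:
        cell = (math.floor(x / side), math.floor(y / side))
        chosen.setdefault(cell, (x, y))   # keep the first node per cell
    return set(chosen.values())
```

The approximation guarantee of this naive version is a (larger) constant, since an optimal dominator can cover nodes from only a bounded number of cells.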
Lower and upper bounds for distributed packing and covering
, 2004
"... We make a step towards understanding the distributed complexity of global optimization problems. We give bounds on the tradeoff between locality and achievable approximation ratio of distributed algorithms for packing and covering problems. Extending a result of [9], we show that in k communication ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
(Show Context)
We make a step towards understanding the distributed complexity of global optimization problems. We give bounds on the tradeoff between locality and achievable approximation ratio of distributed algorithms for packing and covering problems. Extending a result of [9], we show that in k communication rounds, maximum matching and therefore packing problems cannot be approximated better than Ω(n^(c/k²)/k) and Ω(Δ^(1/k)/k), where c is a small constant and n and Δ denote the number of nodes and the maximum degree of the network graph, respectively. This means that in order to obtain a constant or polylogarithmic approximation, there are graphs with n nodes and graphs with maximum degree Δ on which Ω(√(log n / log log n)) and Ω(log Δ / log log Δ) rounds are needed, respectively. On the positive side, we prove that maximum matching and minimum vertex cover (the dual problem) can be approximated by O(Δ^(1/k)) in O(k) rounds, showing that the given lower bound is almost tight. We also give a distributed algorithm which approximates any packing or covering LP by O(n^(1/k)) in O(k) rounds.
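For intuition, the sequential baseline these distributed algorithms compete with is greedy maximal matching, whose matched endpoints 2-approximate a minimum vertex cover; a sketch (names are ours):

```python
def greedy_maximal_matching(edges):
    """Greedy maximal matching: the sequential baseline (a sketch).

    Scanning the edges once and keeping any edge whose endpoints are
    both still free yields a maximal matching.  Its endpoints form a
    vertex cover of size at most twice the optimum, since any cover
    must pick at least one endpoint of each matched edge.  Distributed
    algorithms trade this global scan for k communication rounds.
    """
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching, matched      # matched is a 2-approximate vertex cover
```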
Temporal logics and model checking for fairly correct systems
 In Proc. 21st Ann. Symp. Logic in Computer Science (LICS’06)
, 2006
"... We motivate and study a generic relaxation of correctness of reactive and concurrent systems with respect to a temporal specification. We define a system to be fairly correct if there exists a fairness assumption under which it satisfies its specification. Equivalently, a system is fairly correct if ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
(Show Context)
We motivate and study a generic relaxation of correctness of reactive and concurrent systems with respect to a temporal specification. We define a system to be fairly correct if there exists a fairness assumption under which it satisfies its specification. Equivalently, a system is fairly correct if the set of runs satisfying the specification is large from a topological point of view, i.e., it is a comeager set. We compare topological largeness with its more popular sibling, probabilistic largeness, where a specification is probabilistically large if the set of runs satisfying the specification has probability 1. We show that topological and probabilistic largeness of ω-regular specifications coincide for bounded Borel measures on finite-state systems. As a corollary, we show that, for specifications expressed in LTL or by Büchi automata, checking that a finite-state system is fairly correct has the same complexity as checking that it is correct. Finally we study variants of the logics CTL and CTL*, where the ‘for all runs’ quantifier is replaced by a ‘for a large set of runs’ quantifier. We show that the model checking complexity for these variants is the same as for the original logics.
Tree Exploration with Advice
, 2008
"... We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before but assumptions conc ..."
Abstract

Cited by 6 (5 self)
 Add to MetaCart
(Show Context)
We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before, but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First Search has competitive ratio 2 and, in the absence of any information about
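The competitive ratio 2 of DFS mentioned above comes from each tree edge being traversed exactly twice; a small sketch of that accounting (the helper name is illustrative):

```python
def dfs_traversals(tree, root):
    """Edge traversals of a full DFS exploration of a tree (a sketch).

    DFS walks down each edge once and back up once, so a robot that
    returns to the root makes exactly 2(n - 1) traversals.  Since any
    covering walk must use each of the n - 1 edges at least once, DFS
    is within a factor 2 of optimal, as stated in the abstract.
    """
    cost = 0
    for child in tree.get(root, []):
        cost += 2 + dfs_traversals(tree, child)   # down + subtree + up
    return cost
```

A path on three nodes costs 4 traversals and a star with three leaves costs 6, both equal to 2(n − 1).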
On the Inherent Weakness of Conditional Synchronization Primitives
 In Proceedings of the 23rd Annual ACM Symposium on Principles of Distributed Computing
, 2004
"... The “waitfree hierarchy ” classifies multiprocessor synchronization primitives according to their power to solve consensus. The classification is based on assigning a number n to each synchronization primitive, where n is the maximal number of processes for which deterministic waitfree consensus c ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
(Show Context)
The “wait-free hierarchy” classifies multiprocessor synchronization primitives according to their power to solve consensus. The classification is based on assigning a number n to each synchronization primitive, where n is the maximal number of processes for which deterministic wait-free consensus can be solved using instances of the primitive and read-write registers. Conditional synchronization primitives, such as compare-and-swap and load-linked/store-conditional, can implement deterministic wait-free consensus for any number of processes (they have consensus number ∞), and are thus considered to be among the strongest synchronization primitives. To some extent because of that, compare-and-swap and load-linked/store-conditional have become the synchronization primitives of choice, and have been implemented in hardware in many multiprocessor architectures. This paper shows that, though they are strong in the context of consensus, conditional synchronization primitives are not efficient in terms of memory space for implementing many key objects. Our results hold for starvation-free implementations of mutual exclusion, and for wait-free implementations of a large class of concurrent objects, that we call Visible(n). Roughly, Visible(n) is a class that includes all objects that support some operation that must perform a “visible”
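The claim that compare-and-swap has consensus number ∞ can be illustrated directly: the first successful CAS on an initially empty cell decides the value for every process. A sketch in which a lock merely simulates the atomicity the hardware primitive provides:

```python
import threading

class CAS:
    """A compare-and-swap cell; the lock only simulates the
    atomicity that real hardware provides in a single instruction."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def read(self):
        with self._lock:
            return self._value

def propose(cell, value):
    """Wait-free consensus for any number of processes: the first
    successful CAS decides, and everyone returns the decided value."""
    cell.compare_and_swap(None, value)   # only the first caller succeeds
    return cell.read()
```

Every process performs a constant number of steps regardless of the others, which is exactly the wait-freedom the hierarchy measures; the paper's point is that this strength does not translate into space efficiency for other objects.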
Lower bounds for adaptive collect and related objects
 In Proc. 23rd Annual ACM Symp. on Principles of Distributed Computing
, 2004
"... An adaptive algorithm, whose step complexity adjusts to the number of active processes, is attractive for situations in which the number of participating processes is highly variable. This paper studies the number and type of multiwriter registers that are needed for adaptive algorithms. We prove th ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
An adaptive algorithm, whose step complexity adjusts to the number of active processes, is attractive for situations in which the number of participating processes is highly variable. This paper studies the number and type of multi-writer registers that are needed for adaptive algorithms. We prove that if a collect algorithm is f-adaptive to total contention, namely, its step complexity is f(k), where k is the number of processes that ever took a step, then it uses Ω(f⁻¹(n)) multi-writer registers, where n is the total number of processes in the system. Furthermore, we show that competition for the underlying registers is inherent for adaptive collect algorithms. We consider c-write registers, to which at most c processes can be concurrently about to write. Special attention is given to exclusive-write registers, the case c = 1 where no competition is allowed, and concurrent-write registers, the case c = n where any amount of competition is allowed. A collect algorithm is f-adaptive to point contention if its step complexity is f(k), where k is the maximum number of simultaneously active processes. Such an algorithm is shown to require Ω(f⁻¹(n/c)) concurrent-write registers, even if an unlimited number of c-write registers are available. A smaller lower bound is also obtained in this situation for collect algorithms that are f-adaptive to total contention. The lower bounds also hold for nondeterministic implementations of sensitive objects from historyless objects. Finally, we present lower bounds on the step complexity in solo executions (i.e., without any contention), when only c-write registers are used: for weak-test&set objects, we present an Ω(log n / (log c + log log n)) lower bound. Our lower bound for collect and sensitive objects is Ω((n − 1)/c).