Results 1 - 10 of 58
Gossip-Based Computation of Aggregate Information
, 2003
Abstract

Cited by 297 (1 self)
between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination.
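The gossip approach this abstract describes can be illustrated with a toy push-sum round. The protocol shape below (each node halves a (sum, weight) pair and pushes one half to a uniformly random peer, with the sum/weight ratio converging to the mean) is an assumed simplification, not the paper's exact algorithm:

```python
import random

def push_sum_average(values, rounds=200, seed=0):
    """Sketch of a push-sum style gossip protocol (assumed shape):
    each node holds a (sum, weight) pair; every round it keeps half of
    both and pushes the other half to a uniformly random peer. Every
    node's sum/weight ratio converges to the global average."""
    rng = random.Random(seed)
    n = len(values)
    s = list(map(float, values))  # running sums
    w = [1.0] * n                 # running weights
    for _ in range(rounds):
        inbox = [(0.0, 0.0)] * n  # mass received this round, applied at the end
        for i in range(n):
            target = rng.randrange(n)
            # keep one half, push the other half to the chosen peer
            s[i] /= 2
            w[i] /= 2
            ts, tw = inbox[target]
            inbox[target] = (ts + s[i], tw + w[i])
        for i in range(n):
            s[i] += inbox[i][0]
            w[i] += inbox[i][1]
    return [si / wi for si, wi in zip(s, w)]
```

Total sum and total weight are conserved exactly, which is why every local ratio can converge to the true mean without any central coordinator.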
Finding the Hidden Path: Time Bounds for All-Pairs Shortest Paths
, 1993
Abstract

Cited by 64 (0 self)
We investigate the all-pairs shortest paths problem in weighted graphs. We present an algorithm, the Hidden Paths Algorithm, that finds these paths in time O(m*n + n² log n), where m* is the number of edges participating in shortest paths. Our algorithm is a practical substitute for Dijkstra's algorithm. We argue that m* is likely to be small in practice, since m* = O(n log n) with high probability for many probability distributions on edge weights. We also prove an Ω(mn) lower bound on the running time of any path-comparison based algorithm for the all-pairs shortest paths problem. Path-comparison based algorithms form a natural class containing the Hidden Paths Algorithm, as well as the algorithms of Dijkstra and Floyd. Lastly, we consider generalized forms of the shortest paths problem, and show that many of the standard shortest paths algorithms are effective in this more general setting.
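The baseline that the Hidden Paths Algorithm is positioned against, namely running Dijkstra's algorithm from every source, is easy to sketch; the graph encoding and function names below are illustrative:

```python
import heapq

def dijkstra(adj, src):
    """Standard Dijkstra with a binary heap.
    adj maps node -> list of (neighbor, weight) pairs; weights >= 0."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def all_pairs(adj):
    """All-pairs shortest paths by repeated Dijkstra: O(n(m + n log n))."""
    return {u: dijkstra(adj, u) for u in adj}
```

The Hidden Paths Algorithm improves on this by charging work only to the m* edges that actually lie on shortest paths, rather than all m edges.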
Near-linear time construction of sparse neighborhood covers
 SIAM Journal on Computing
, 1998
Abstract

Cited by 43 (4 self)
Abstract. This paper introduces a near-linear time sequential algorithm for constructing a sparse neighborhood cover. This implies analogous improvements (from quadratic to near-linear time) for any problem whose solution relies on network decompositions, including small edge cuts in planar graphs, approximate shortest paths, and weight- and distance-preserving graph spanners. In particular, an O(log n) approximation to the k-shortest paths problem on an n-vertex, E-edge graph is obtained that runs in Õ(n + E + k) time.
Fast Approximation of Centrality
 Journal of Graph Algorithms and Applications
, 2001
Abstract

Cited by 34 (1 self)
Social studies researchers use graphs to model group activities in social networks. An important property in this context is the centrality of a vertex: the inverse of the average distance to each other vertex. We describe a randomized approximation algorithm for centrality in weighted graphs. For graphs exhibiting the small world phenomenon, our method estimates the centrality of all vertices with high probability within a (1 + ε) factor in near-linear time.
1 Introduction
In social network analysis, the vertices of a graph represent agents in a group and the edges represent relationships, such as communication or friendship. The idea of applying graph theory to analyze the connection between structural centrality and group process was introduced by Bavelas [4]. Various measures of centrality [7, 14, 15] have been proposed for analyzing communication activity, control, or independence within a social network. We are particularly interested in closeness centrality [5, 6, 24]...
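The sampling idea behind such estimators can be sketched as follows. This toy version uses unweighted BFS for brevity (the paper handles weighted graphs via single-source shortest paths) and assumes a connected graph; all names are illustrative:

```python
import random
from collections import deque

def bfs_dist(adj, src):
    """Breadth-first distances from src; adj maps node -> list of neighbors."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def approx_closeness(adj, k, seed=0):
    """Estimate closeness (inverse average distance) for every vertex by
    averaging distances to k randomly sampled pivot vertices.
    Assumes a connected graph so every distance is defined."""
    rng = random.Random(seed)
    nodes = list(adj)
    total = {v: 0.0 for v in nodes}
    for _ in range(k):
        d = bfs_dist(adj, rng.choice(nodes))  # one SSSP per sampled pivot
        for v in nodes:
            total[v] += d[v]
    # total[v] / k estimates the average distance; invert for closeness
    return {v: k / total[v] if total[v] > 0 else float("inf") for v in nodes}
```

The point of the paper's analysis is that a small number of pivots already gives a (1 + ε) estimate with high probability, so k can be far below n.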
Single-Source Shortest-Paths on Arbitrary Directed Graphs in Linear Average-Case Time
 In Proc. 12th ACM-SIAM Symposium on Discrete Algorithms
, 2001
Abstract

Cited by 28 (5 self)
The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an O(n + m) time RAM algorithm for undirected graphs with n nodes, m edges and integer edge weights in {0, ..., 2^w - 1}, where w denotes the word length, the currently best time bound for directed sparse graphs on a RAM is O(n + m log log n). In the present paper we study the average-case complexity of SSSP. We give a simple algorithm for arbitrary directed graphs with random edge weights uniformly distributed in [0, 1] and show that it needs linear time O(n + m) with high probability.
1 Introduction
The single-source shortest-path problem (SSSP) is a fundamental and well-studied combinatorial optimization problem with many practical and theoretical applications [1]. Let G = (V, E) be a directed graph, |V| = n, |E| = m, let s be a distinguished vertex of the graph, and c be a function assigning a n...
Fast distributed algorithms for computing separable functions
 IEEE Trans. Inform. Theory
Abstract

Cited by 27 (5 self)
Abstract—The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad-hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete fully distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme. Index Terms—Data aggregation, distributed algorithms, gossip algorithms, randomized algorithms. I.
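One way a sum (the prototypical separable function) can reduce to a minimum computation is the exponential-random-variable trick: the minimum of independent Exp(x_i) draws is distributed Exp(sum of the x_i), and a minimum is exactly what gossip spreads cheaply. The sketch below (names and constants illustrative, positive node values assumed) collapses the gossip phase into a plain min():

```python
import random

def estimate_sum(values, trials=5000, seed=0):
    """Estimate sum(values) via exponential minima (sketch):
    each 'node' i draws an Exp variable with rate values[i]; the minimum
    across nodes is Exp(sum(values)), so averaging many independent
    minima and inverting recovers the sum. Requires all values > 0."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        # random.expovariate takes the rate parameter (mean = 1/rate)
        acc += min(rng.expovariate(x) for x in values)
    # E[min] = 1 / sum(values), so trials / acc estimates the sum
    return trials / acc
```

In the actual protocol each node would draw its own variables and the network would gossip the coordinate-wise minimum; here min() stands in for that distributed step.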
A Parallelization of Dijkstra's Shortest Path Algorithm
 In Proc. 23rd MFCS'98, Lecture Notes in Computer Science
, 1998
Abstract

Cited by 26 (6 self)
The single source shortest path (SSSP) problem lacks parallel solutions which are fast and simultaneously work-efficient. We propose simple criteria which divide Dijkstra's sequential SSSP algorithm into a number of phases, such that the operations within a phase can be done in parallel. We give a PRAM algorithm based on these criteria and analyze its performance on random digraphs with random edge weights uniformly distributed in [0, 1]. We use ...
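One plausible criterion of this kind (an "OUT"-style rule, illustrative rather than the paper's exact formulation): any queued node whose tentative distance is at most min over queued u of tent(u) + u's lightest outgoing edge already has its final distance, since any remaining path through a queued node must be at least that long. The whole batch can then be settled and relaxed within one phase:

```python
import math

def dijkstra_phases(adj, src):
    """Phased Dijkstra sketch: per phase, settle every queued node v with
    tent(v) <= min over queued u of (tent(u) + lightest out-edge of u).
    adj maps node -> list of (neighbor, weight); all nodes must be keys.
    Returns (distances, number of phases)."""
    min_out = {u: min((w for _, w in adj[u]), default=math.inf) for u in adj}
    tent = {u: math.inf for u in adj}
    tent[src] = 0.0
    queued = {src}
    settled = {}
    phases = 0
    while queued:
        threshold = min(tent[u] + min_out[u] for u in queued)
        batch = {u for u in queued if tent[u] <= threshold}
        for u in batch:          # on a PRAM these settle concurrently
            settled[u] = tent[u]
        queued -= batch
        for u in batch:          # ...and these relaxations run in parallel
            for v, w in adj[u]:
                if v not in settled and settled[u] + w < tent[v]:
                    tent[v] = settled[u] + w
                    queued.add(v)
        phases += 1
    return settled, phases
```

Sequential Dijkstra settles one node per step; here several nodes can share a phase whenever the threshold admits them together, which is the source of the parallel speedup.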
Algorithmic Theory of Random Graphs
, 1997
Abstract

Cited by 24 (1 self)
The theory of random graphs has been mainly concerned with structural properties, in particular the most likely values of various graph invariants; see Bollobás [21]. There has been increasing interest in using random graphs as models for the average case analysis of graph algorithms. In this paper we survey some of the results in this area.
1 Introduction
The theory of random graphs, as initiated by Erdős and Rényi [52] and developed along with others, has been mainly concerned with structural properties, in particular the most likely values of various graph invariants; see Bollobás [21]. There has been increasing interest in using random graphs as models for the average case analysis of graph algorithms. We would like in this paper to survey some of the results in this area. We hope to be fairly comprehensive in terms of the areas we tackle and so depth will be sacrificed in favour of breadth. One attractive feature of average case analysis is that it banishes the pessimism o...
Gossiping with multiple messages
 In INFOCOM
, 2007
Abstract

Cited by 24 (3 self)
Abstract — This paper investigates the dissemination of multiple pieces of information in large networks where users contact each other in a random uncoordinated manner, and users upload one piece per unit time. The underlying motivation is the design and analysis of piece selection protocols for peer-to-peer networks which disseminate files by dividing them into pieces. We first investigate one-sided protocols, where piece selection is based on the states of either the transmitter or the receiver. We show that any such protocol relying only on pushes, or alternatively only on pulls, will be inefficient in disseminating all pieces to all users. We propose a hybrid one-sided piece selection protocol – INTERLEAVE – and show that by using both pushes and pulls it disseminates k pieces from a single source to n users in 10(k + log n) time, while obeying the constraint that each user can upload at most one piece in one unit of time. An optimal, unrealistic centralized protocol would take k + log₂ n time in this setting. Moreover, efficient dissemination is also possible if the source implements forward erasure coding, and users push the latest-released coded pieces (but do not pull). We also investigate two-sided protocols where piece selection is based on the states of both the transmitter and the receiver. We show that it is possible to disseminate n pieces to n users in n + O(log n) time, starting from an initial state where each user has a unique piece. I.
Quasirandom Rumor Spreading
 In Proc. of SODA’08
, 2008
Abstract

Cited by 24 (10 self)
We propose and analyse a quasirandom analogue to the classical push model for disseminating information in networks (“randomized rumor spreading”). In the classical model, in each round each informed node chooses a neighbor at random and informs it. Results of Frieze and Grimmett (Discrete Appl. Math. 1985) show that this simple protocol succeeds in spreading a rumor from one node of a complete graph to all others within O(log n) rounds. For the network being a hypercube or a random graph G(n, p) with p ≥ (1 + ε)(log n)/n, also O(log n) rounds suffice (Feige, Peleg, Raghavan, and Upfal, Random Struct. Algorithms 1990). In the quasirandom model, we assume that each node has a (cyclic) list of its neighbors. Once informed, it starts at a random position of the list, but from then on informs its neighbors in the order of the list. Surprisingly, irrespective of the orders of the lists, the above-mentioned bounds still hold. In addition, we also show an O(log n) bound for sparsely connected random graphs G(n, p) with p = (log n + f(n))/n, where f(n) → ∞ and f(n) = O(log log n). Here, the classical model needs Θ(log² n) rounds. Hence the quasirandom model achieves similar or better broadcasting times with a greatly reduced use of random bits.
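The quasirandom push model is simple to simulate on the complete graph; the sketch below (function and variable names assumed) returns the number of rounds until all n nodes are informed:

```python
import random

def quasirandom_push(n, seed=0):
    """Quasirandom push rumor spreading on the complete graph K_n (sketch):
    each node keeps a cyclic list of the other n-1 nodes; once informed it
    starts at a random list position and then informs one successive list
    entry per round. Returns the round count until everyone is informed."""
    rng = random.Random(seed)
    lists = {v: [u for u in range(n) if u != v] for v in range(n)}
    informed = {0}                       # node 0 starts with the rumor
    offset = {0: rng.randrange(n - 1)}   # the only randomness: start position
    rounds = 0
    while len(informed) < n:
        newly = []
        for v in list(informed):
            target = lists[v][offset[v] % (n - 1)]
            offset[v] += 1               # advance deterministically along the list
            if target not in informed:
                newly.append(target)
        for t in newly:                  # newcomers pick a random start position
            offset.setdefault(t, rng.randrange(n - 1))
        informed.update(newly)
        rounds += 1
    return rounds
```

Two bounds hold deterministically: the informed set at most doubles each round, so at least log₂ n rounds are needed, and node 0 alone cycles through all n-1 neighbors, so at most n-1 rounds are ever used; the paper's point is that the typical behavior matches the O(log n) of the fully random model despite the far smaller random-bit budget.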