Results 1-10 of 51
Randomized Gossip Algorithms
IEEE TRANSACTIONS ON INFORMATION THEORY, 2006
"... Motivated by applications to sensor, peertopeer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..."
Abstract

Cited by 208 (5 self)
 Add to MetaCart
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of “gossip” algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
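As a concrete illustration of the gossip constraint described above, here is a minimal simulation of pairwise randomized averaging, assuming an undirected graph given as an adjacency list. The node values, neighbor structure, and stopping rule are invented for the example; the paper's analysis concerns the averaging time of this kind of dynamics, not this particular code.

import random

def gossip_average(neighbors, values, tol=1e-6, max_steps=100000):
    """Pairwise gossip: a random node averages its value with a random neighbor."""
    x = dict(values)
    target = sum(x.values()) / len(x)   # used only to measure convergence here
    nodes = list(x)
    for step in range(max_steps):
        u = random.choice(nodes)
        v = random.choice(neighbors[u])
        x[u] = x[v] = (x[u] + x[v]) / 2.0   # pairwise averaging preserves the sum
        if max(abs(xi - target) for xi in x.values()) < tol:
            return x, step + 1
    return x, max_steps

# Toy run on a 4-cycle: every value converges to the global average 2.5.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(gossip_average(nbrs, {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}))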
Random Walks in Peer-to-Peer Networks
2004
"... We quantify the effectiveness of random walks for searching and construction of unstructured peertopeer (P2P) networks. For searching, we argue that random walks achieve improvement over flooding in the case of clustered overlay topologies and in the case of reissuing the same request several tim ..."
Abstract

Cited by 177 (2 self)
 Add to MetaCart
We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. For searching, we argue that random walks achieve improvement over flooding in the case of clustered overlay topologies and in the case of reissuing the same request several times. For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk can achieve statistical properties similar to independent sampling (if the second eigenvalue of the transition matrix is bounded away from 1, which translates to good expansion of the network; such connectivity is desired, and believed to hold, in every reasonable network and network model). This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.
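The key ingredient mentioned above, that widely spaced states of a rapidly mixing random walk behave like independent samples, can be sketched in a few lines. The graph, the spacing k, and the sample count are illustrative assumptions, not values from the paper.

import random

def walk_samples(neighbors, start, k, num_samples):
    """Take every k-th step of a simple random walk as one sample."""
    u, samples = start, []
    while len(samples) < num_samples:
        for _ in range(k):              # k walk steps between consecutive samples
            u = random.choice(neighbors[u])
        samples.append(u)
    return samples

# On a well-connected (expander-like) graph, small k already decorrelates samples.
nbrs = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(walk_samples(nbrs, start=0, k=5, num_samples=10))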
Statistical properties of community structure in large social and information networks
"... A large body of work has been devoted to identifying community structure in networks. A community is often though of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structur ..."
Abstract

Cited by 120 (10 self)
 Add to MetaCart
A large body of work has been devoted to identifying community structure in networks. A community is often thought of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. We define the network community profile plot, which characterizes the “best” possible community, according to the conductance measure, over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually “blend in” with the rest of the network and thus become less “community-like.” This behavior is not explained, even at a qualitative level, by any of the commonly used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. We have found, however, that a generative model, in which new edges are added via an iterative “forest fire” burning process, is able to produce graphs exhibiting a network community structure similar to our observations.
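For readers unfamiliar with the conductance measure behind the network community profile plot: φ(S) is the number of edges leaving a set S divided by the smaller of the two side volumes (sums of degrees). A minimal sketch, with an invented toy graph:

def conductance(neighbors, S):
    """phi(S) = cut(S, complement) / min(vol(S), vol(complement))."""
    S = set(S)
    cut = sum(1 for u in S for v in neighbors[u] if v not in S)
    vol_S = sum(len(neighbors[u]) for u in S)
    vol_rest = sum(len(neighbors[u]) for u in neighbors if u not in S)
    return cut / min(vol_S, vol_rest)

# Two triangles joined by one edge: each triangle is a low-conductance community.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(conductance(nbrs, {0, 1, 2}))   # 1/7, about 0.14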
Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters
2008
"... A large body of work has been devoted to defining and identifying clusters or communities in social and information networks, i.e., in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes. Most such research begins wit ..."
Abstract

Cited by 79 (6 self)
 Add to MetaCart
A large body of work has been devoted to defining and identifying clusters or communities in social and information networks, i.e., in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes. Most such research begins with the premise that a community or a cluster should be thought of as a set of nodes that has more and/or better connections between its members than to the remainder of the network. In this paper, we explore from a novel perspective several questions related to identifying meaningful communities in large social and information networks, and we come to several striking conclusions. Rather than defining a procedure to extract sets of nodes from a graph and then attempting to interpret these sets as “real” communities, we employ approximation algorithms for the graph partitioning problem to characterize as a function of size the statistical and structural properties of partitions of graphs that could plausibly be interpreted as communities. In particular, we define the network community profile plot, which characterizes the “best” possible community, according to the conductance measure, over a wide range of size scales. We study over 100 large real-world networks, ranging from traditional and online social networks, to technological and information networks and …
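One inexpensive way to approximate such a profile plot, sketched here with a spectral sweep rather than the stronger approximation algorithms the paper employs: order the nodes by the Fiedler vector and record the conductance of each prefix. The networkx helpers and the barbell test graph are convenience choices for illustration.

import networkx as nx
import numpy as np

def ncp_by_spectral_sweep(G):
    """Approximate community profile: conductance of each Fiedler-vector prefix."""
    nodes = list(G.nodes())
    order = np.argsort(nx.fiedler_vector(G))   # nodes sorted by Fiedler value
    return {size: nx.conductance(G, {nodes[i] for i in order[:size]})
            for size in range(1, G.number_of_nodes())}

G = nx.barbell_graph(5, 0)           # two 5-cliques joined by a single edge
print(ncp_by_spectral_sweep(G))      # conductance dips at size 5 (one clique)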
How Much Can Taxes Help Selfish Routing?
EC'03, 2003
"... ... in networks. We consider a model of selfish routing in which the latency experienced by network tra#c on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route tra#c on minimumlatency paths. The quality of a routing of tra#c is historically ..."
Abstract

Cited by 62 (6 self)
 Add to MetaCart
... in networks. We consider a model of selfish routing in which the latency experienced by network traffic on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route traffic on minimum-latency paths. The quality of a routing of traffic is historically measured by the sum of all travel times, also called the total latency. It is well known …
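A worked instance of this setting (Pigou's classic two-link example, a standard illustration rather than anything specific to this paper): selfish users all take the variable-latency link, while a marginal-cost tax recovers the optimal split.

# Pigou's example: one unit of traffic over two parallel links.
# Link A has constant latency 1; link B has latency x equal to its own load.
def total_latency(x_b):
    x_a = 1.0 - x_b
    return x_a * 1.0 + x_b * x_b      # sum of all travel times

print(total_latency(1.0))   # selfish routing: everyone on B, total latency 1.0
print(total_latency(0.5))   # optimal split, total latency 0.75

# The classic remedy: a marginal-cost tax t(x) = x * l'(x) = x on link B makes
# its perceived latency 2x, and selfish users then equalize 1 = 2x, i.e. x = 0.5.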
Conductance and Congestion in Power Law Graphs
2003
"... It has been observed that the degrees of the topologies of several communication networks follow heavy tailed statistics. What is the impact of such heavy tailed statistics on the performance of basic communication tasks that a network is presumed to support? How does performance scale with the size ..."
Abstract

Cited by 57 (3 self)
 Add to MetaCart
It has been observed that the degrees of the topologies of several communication networks follow heavy-tailed statistics. What is the impact of such heavy-tailed statistics on the performance of basic communication tasks that a network is presumed to support? How does performance scale with the size of the network? We study routing in families of sparse random graphs whose degrees follow heavy-tailed distributions. Instantiations of such random graphs have been proposed as models for the topology of the Internet at the level of Autonomous Systems as well as at the level of routers. Let n be the number of nodes. Suppose that for each pair of nodes with degrees d_u and d_v we have O(d_u d_v) units of demand. Thus the total demand is O(n²). We argue analytically and experimentally that in the considered random graph model such demand patterns can be routed so that the flow through each link is at most O(n log² n). This is to be compared with a bound of Θ(n²) that holds for arbitrary graphs. Similar results were previously known for sparse random regular graphs, a.k.a. "expander graphs." The significance is that Internet-like topologies, which grow in a dynamic, decentralized fashion and appear highly inhomogeneous, can support routing with performance characteristics comparable to those of their regular counterparts, at least under the assumption of uniform demand and capacities. Our proof uses approximation algorithms for multicommodity flow and establishes strong bounds on a generalization of "expansion," namely "conductance." Besides routing, our bounds on conductance have further implications, most notably on the gap between the first and second eigenvalues of the stochastic normalization of the adjacency matrix of the graph.
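The final claim, relating conductance to the eigenvalue gap of the stochastic normalization D^-1 A, is easy to probe numerically. A sketch under invented parameters (Pareto degree exponent, graph size) using a configuration-style random graph:

import networkx as nx
import numpy as np

def spectral_gap(G):
    """Gap 1 - lambda_2 of the random-walk matrix D^-1 A (larger gap, higher conductance)."""
    A = nx.to_numpy_array(G)
    P = A / A.sum(axis=1, keepdims=True)          # stochastic normalization
    eig = np.sort(np.real(np.linalg.eigvals(P)))  # eigenvalues are real for D^-1 A
    return 1.0 - eig[-2]

# Heavy-tailed degree sequence; parity fixed so the sequence is graphical-ish.
deg = [max(1, int(x)) for x in np.random.pareto(1.5, 200) + 1]
if sum(deg) % 2:
    deg[0] += 1
G = nx.Graph(nx.configuration_model(deg))     # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
print(spectral_gap(G))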
On the bias of traceroute sampling: or, power-law degree distributions in regular graphs
In ACM STOC, 2005
"... Understanding the graph structure of the Internet is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining this graph structure can be a surprisingly difficult task, as edges cannot be explicitly queried. For instance, empiri ..."
Abstract

Cited by 55 (1 self)
 Add to MetaCart
Understanding the graph structure of the Internet is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining this graph structure can be a surprisingly difficult task, as edges cannot be explicitly queried. For instance, empirical studies of the network of Internet Protocol (IP) addresses typically rely on indirect methods like traceroute to build what are approximately single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network’s edges, and a recent paper by Lakhina et al. found empirically that the resulting sample is intrinsically biased. Further, in simulations, they observed that the degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson. In this paper, we study the bias of traceroute sampling mathematically and, for a very general class of underlying degree distributions, explicitly calculate the distribution that will be observed. As example applications of our machinery, we prove that traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of Lakhina et al. on a rigorous footing, and extends them to nearly arbitrary degree distributions.
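The qualitative bias is easy to reproduce in simulation. A sketch that stands in for traceroute with a single-source BFS tree on a regular random graph; the degree, size, and seed are arbitrary choices:

import collections
import networkx as nx

# Underlying truth: a 6-regular random graph (every true degree is 6).
G = nx.random_regular_graph(6, 2000, seed=1)

# "Traceroute" stand-in: a single-source shortest-path (BFS) tree.
tree = nx.bfs_tree(G, source=0)

# Observed degrees are the degrees within the sampled tree, not the true ones.
observed = collections.Counter(d for _, d in tree.degree())
print(sorted(observed.items()))   # a wide spread of degrees despite a regular graph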
Beyond VCG: Frugality of truthful mechanisms
In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, 2005
"... We study truthful mechanisms for auctions in which the auctioneer is trying to hire a team of agents to perform a complex task, and paying them for their work. As common in the field of mechanism design, we assume that the agents are selfish and will act in such a way as to maximize their profit, wh ..."
Abstract

Cited by 47 (3 self)
 Add to MetaCart
We study truthful mechanisms for auctions in which the auctioneer is trying to hire a team of agents to perform a complex task, paying them for their work. As is common in the field of mechanism design, we assume that the agents are selfish and will act in such a way as to maximize their profit, which in particular may include misrepresenting their true incurred cost. Our first contribution is a new and natural definition of the frugality ratio of a mechanism, measuring the amount by which a mechanism “overpays”, and extending previous definitions to all monopoly-free set systems. After reexamining several known results in light of this new definition, we proceed to study in detail shortest path auctions and “r-out-of-k sets” auctions. We show that when individual set systems (e.g., graphs) are considered instead of worst cases over all instances, these problems exhibit a rich structure, and the performance of mechanisms may be vastly different. In particular, we show that the well-known VCG mechanism may be far from optimal in these settings, and we propose and analyze a mechanism that is always within a constant factor of optimal.
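To make the overpayment being measured concrete, here is a sketch of VCG payments in a shortest-path auction, the baseline the frugality ratio is compared against. The three-edge graph is an invented example; each winning edge is paid its declared cost plus the harm its removal would cause.

import networkx as nx

def vcg_path_payments(G, s, t):
    """Pay each edge on the shortest s-t path: its cost + (detour cost without it)."""
    path = nx.shortest_path(G, s, t, weight="cost")
    base = nx.path_weight(G, path, weight="cost")
    payments = {}
    for u, v in zip(path, path[1:]):
        H = G.copy()
        H.remove_edge(u, v)
        alt = nx.shortest_path_length(H, s, t, weight="cost")
        payments[(u, v)] = G[u][v]["cost"] + (alt - base)   # cost plus marginal harm
    return payments

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1), ("s", "t", 4)], weight="cost")
print(vcg_path_payments(G, "s", "t"))   # each cost-1 edge is paid 3; total 6 > 4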
First-price path auctions
In Proc. 7th ACM Conf. on Electronic Commerce, 2005
"... We study firstprice auction mechanisms for auctioning flow between given nodes in a graph. A firstprice auction is any auction in which links on winning paths are paid their bid amount; the designer has flexibility in specifying remaining details. We assume edges are independent agents with fixed ..."
Abstract

Cited by 27 (2 self)
 Add to MetaCart
We study first-price auction mechanisms for auctioning flow between given nodes in a graph. A first-price auction is any auction in which links on winning paths are paid their bid amount; the designer has flexibility in specifying the remaining details. We assume edges are independent agents with fixed capacities and costs, and their objective is to maximize their profit. We characterize all strong ε-Nash equilibria of a first-price auction, and show that the total payment is never significantly more than, and often less than, that of the well-known dominant-strategy Vickrey-Clarke-Groves mechanism. We then present a randomized version of the first-price auction for which the equilibrium condition can be relaxed to ε-Nash equilibrium. We next consider a model in which the amount of demand is uncertain, but its probability distribution is known. For this model, we show that a simple ex ante first-price auction may not have any ε-Nash equilibria. We then present a modified mechanism with two-parameter bids which does have an ε-Nash equilibrium. For a randomized version of this two-parameter mechanism we characterize the set of all ε-Nash equilibria and prove a bound on the total payment in any ε-Nash equilibrium.
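A toy comparison in the spirit of the abstract, under the simplest possible instance (two disjoint s-t paths with invented costs): in an ε-Nash equilibrium of a first-price auction the cheap path's edges jointly bid just under the rival path's cost, so the total payment stays below VCG's.

# Two disjoint s-t paths: P1 = two edges of cost 1 each, P2 = one edge of cost 4.
p1_costs, p2_cost = [1, 1], 4

# First-price (epsilon-Nash flavor): P1's edges jointly bid just under P2's cost.
eps = 0.01
first_price_total = p2_cost - eps   # 3.99

# VCG: each P1 edge is paid its cost plus the detour penalty (4 - 2 = 2).
vcg_total = sum(c + (p2_cost - sum(p1_costs)) for c in p1_costs)   # 2 * (1 + 2) = 6

print(first_price_total, vcg_total)   # first-price total payment < VCG total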
Rumour spreading and graph conductance
IN PROCEEDINGS OF THE 21ST ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS (SODA), 2010
"... We show that if a connected graph with n nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(log 4 n/φ 6) many steps, with high probability, using the PUSHPULL strategy. An interesting feature of our approach is that it draws ..."
Abstract

Cited by 21 (2 self)
 Add to MetaCart
We show that if a connected graph with n nodes has conductance φ then rumour spreading, also known as randomized broadcast, successfully broadcasts a message within O(log^4 n / φ^6) many steps, with high probability, using the PUSH-PULL strategy. An interesting feature of our approach is that it draws a connection between rumour spreading and the spectral sparsification procedure of Spielman and Teng [23].
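For reference, the PUSH-PULL strategy analyzed above: in each synchronous round every node calls a uniformly random neighbor, and the rumour crosses each call in either direction (push from an informed caller, pull by a caller from an informed callee). A minimal simulation on an invented toy graph:

import random

def push_pull_rounds(neighbors, start):
    """Number of synchronous PUSH-PULL rounds until every node is informed."""
    informed = {start}
    rounds = 0
    while len(informed) < len(neighbors):
        rounds += 1
        snapshot = set(informed)   # state at the start of the round
        for u in neighbors:
            v = random.choice(neighbors[u])
            # PUSH: informed caller tells callee; PULL: caller learns from callee.
            if u in snapshot or v in snapshot:
                informed.update((u, v))
    return rounds

nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # an 8-cycle
print(push_pull_rounds(nbrs, start=0))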