Results 1–10 of 58
Graph-Theoretic Analysis of Structured Peer-to-Peer Systems: Routing Distances and Fault Resilience
, 2003
Cited by 105 (7 self)
This paper examines graph-theoretic properties of existing peer-to-peer architectures and proposes a new infrastructure based on optimal-diameter de Bruijn graphs. Since generalized de Bruijn graphs possess very short average routing distances and high resilience to node failure, they are well suited for structured peer-to-peer networks. Using the example of Chord, CAN, and de Bruijn, we first study routing performance, graph expansion, and clustering properties of each graph. We then examine bisection width, path overlap, and several other properties that affect routing and resilience of peer-to-peer networks. Having confirmed that de Bruijn graphs offer the best diameter and highest connectivity among the existing peer-to-peer structures, we offer a very simple incremental building process that preserves optimal properties of de Bruijn graphs under uniform user joins/departures. We call the combined peer-to-peer architecture …
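The shift-based routing behind these short distances can be sketched in a few lines (a minimal illustration assuming n-bit node IDs, not code from the paper; a real implementation would also exploit suffix/prefix overlap to shorten routes):

```python
# Shift-based routing in a binary de Bruijn graph: node IDs are n-bit
# strings, and (b1..bn) has an edge to (b2..bn c) for c in {0, 1}.
# A naive route shifts in the destination's bits one per hop, so any
# pair of nodes is connected in at most n = log2(N) hops.

def de_bruijn_route(src: str, dst: str) -> list[str]:
    """Return the hop-by-hop path from src to dst (always n hops here;
    exploiting overlap between src's suffix and dst's prefix can do better)."""
    assert len(src) == len(dst)
    path, cur = [src], src
    for bit in dst:
        cur = cur[1:] + bit        # one hop: shift left, append next bit
        path.append(cur)
    return path

path = de_bruijn_route("0110", "1011")
# every consecutive pair (u, v) satisfies v[:-1] == u[1:], i.e. is an edge
```

The path length here is always exactly n, which matches the logarithmic diameter the paper exploits.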
Complex Networks and Decentralized Search Algorithms
In Proceedings of the International Congress of Mathematicians (ICM), 2006
Cited by 73 (1 self)
The study of complex networks has emerged over the past several years as a theme spanning many disciplines, ranging from mathematics and computer science to the social and biological sciences. A significant amount of recent work in this area has focused on the development of random graph models that capture some of the qualitative properties observed in large-scale network data; such models have the potential to help us reason, at a general level, about the ways in which real-world networks are organized. We survey one particular line of network research, concerned with small-world phenomena and decentralized search algorithms, that illustrates this style of analysis. We begin by describing a well-known experiment that provided the first empirical basis for the "six degrees of separation" phenomenon in social networks; we then discuss some probabilistic network models motivated by this work, illustrating how these models lead to novel algorithmic and graph-theoretic questions, and how they are supported by recent empirical studies of large social networks.
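The decentralized-search results surveyed here can be made concrete with a toy simulation (a sketch with assumed parameters, not code from the survey): nodes sit on an n x n torus, each keeps its four grid neighbors plus one long-range contact drawn with probability proportional to d^-2, and greedy forwarding picks whichever known contact is closest to the target.

```python
# Toy simulation of greedy routing in Kleinberg's small-world model.
import random

def lattice_dist(a, b, n):
    """L1 distance on the n x n torus."""
    dx = min(abs(a[0] - b[0]), n - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), n - abs(a[1] - b[1]))
    return dx + dy

def make_contacts(n, rng):
    """One inverse-square long-range contact per node."""
    nodes = [(x, y) for x in range(n) for y in range(n)]
    contacts = {}
    for u in nodes:
        others = [v for v in nodes if v != u]
        weights = [lattice_dist(u, v, n) ** -2 for v in others]
        contacts[u] = rng.choices(others, weights)[0]
    return contacts

def greedy_route(src, dst, contacts, n):
    """Forward to whichever known contact is closest to dst; since one
    grid neighbor is always strictly closer, this terminates."""
    cur, hops = src, 0
    while cur != dst:
        x, y = cur
        nbrs = [((x + 1) % n, y), ((x - 1) % n, y),
                (x, (y + 1) % n), (x, (y - 1) % n), contacts[cur]]
        cur = min(nbrs, key=lambda v: lattice_dist(v, dst, n))
        hops += 1
    return hops

rng = random.Random(0)
contacts = make_contacts(20, rng)
hops = greedy_route((0, 0), (10, 10), contacts, 20)
```

With the inverse-square exponent this model is navigable: expected hop counts grow polylogarithmically in n, which is the phenomenon the survey analyzes.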
Hybrid search schemes for unstructured peer-to-peer networks
In Proceedings of IEEE INFOCOM, 2005
Cited by 72 (1 self)
We study hybrid search schemes for unstructured peer-to-peer networks. We quantify performance in terms of number of hits, network overhead, and response time. Our schemes combine flooding and random walks, look-ahead and replication. We consider both regular topologies and topologies with supernodes. We introduce a general search scheme, of which flooding and random walks are special instances, and show how to use locally maintained network information to improve the performance of searching. Our main findings are: (a) A small number of supernodes in an otherwise regular topology can offer sharp savings in the performance of search, both in the case of search by flooding and search by random walk, particularly when combined with 1-step replication. We quantify, analytically and experimentally, that the reason for these savings is that the search is biased towards nodes that yield more information. (b) There is a generalization of search, of which flooding and random walk are special instances, that may take further advantage of locally maintained network information and yield better performance than both flooding and random walk in clustered topologies. The method determines edge criticality and is reminiscent of fundamental heuristics from the area of approximation algorithms.
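One point on the spectrum between flooding and a single random walk is k parallel walkers with a time-to-live; a minimal sketch (hypothetical names and parameters, not the authors' code):

```python
# k parallel random walks on an unstructured overlay: k = 1 with a
# large TTL is a pure random walk, while large k with small TTL
# approaches the breadth and cost of flooding.
import random

def k_random_walks(graph, start, has_item, k=4, ttl=32, rng=None):
    """graph: adjacency dict; has_item(v): does host v hold the object?
    Launch k walkers from start, each taking up to ttl random steps.
    Returns (hit_found, messages_used)."""
    rng = rng or random.Random()
    messages = 0
    for _ in range(k):
        cur = start
        for _ in range(ttl):
            if has_item(cur):
                return True, messages
            cur = rng.choice(graph[cur])   # one forwarded message
            messages += 1
    return False, messages
```

The paper's hybrid schemes additionally bias walkers using local information such as supernode degree and one-hop replication; this sketch shows only the unbiased baseline.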
Distance Estimation and Object Location via Rings of Neighbors
In 24th Annual ACM Symposium on Principles of Distributed Computing (PODC), 2005
Cited by 64 (4 self)
We consider four problems on distance estimation and object location which share the common flavor of capturing global information via informative node labels: low-stretch routing schemes [47], distance labeling [24], searchable small worlds [30], and triangulation-based distance estimation [33]. Focusing on metrics of low doubling dimension, we approach these problems with a common technique called rings of neighbors, which refers to a sparse distributed data structure that underlies all our constructions. Apart from improving the previously known bounds for these problems, our contributions include extending Kleinberg’s small-world model to doubling metrics, and a short proof of the main result in Chan et al. [14]. Doubling dimension is a notion of dimensionality for general metrics that has recently become a useful algorithmic concept in the theoretical computer science literature.
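The underlying structure can be illustrated with a minimal sketch (hypothetical names and parameters, not the paper's construction): each node keeps a small sample of peers from every exponentially growing distance ring around it.

```python
# "Rings of neighbors" sketch: for each scale i, a node samples a few
# peers whose distance from it lies in (2^(i-1), 2^i]. Per-node storage
# is O(per_ring * log diameter), which is what makes the structure sparse.
import math
import random

def build_rings(node, points, dist, per_ring=2, rng=None):
    """Return {i: sample of up to per_ring points in ring i of node}."""
    rng = rng or random.Random()
    rings = {}
    for p in points:
        if p == node:
            continue
        i = max(0, math.ceil(math.log2(dist(node, p))))  # ring index
        rings.setdefault(i, []).append(p)
    return {i: rng.sample(members, min(per_ring, len(members)))
            for i, members in rings.items()}
```

Queries then descend through the rings, halving the remaining distance at each scale, which is the mechanism behind the low-stretch and search bounds cited above.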
Minimizing churn in distributed systems
, 2006
Cited by 58 (5 self)
A pervasive requirement of distributed systems is to deal with churn — change in the set of participating nodes due to joins, graceful leaves, and failures. A high churn rate can increase costs or decrease service quality. This paper studies how to reduce churn by selecting which subset of a set of available nodes to use. First, we provide a comparison of the performance of a range of different node selection strategies in five real-world traces. Among our findings is that the simple strategy of picking a uniform-random replacement whenever a node fails performs surprisingly well. We explain its performance through analysis in a stochastic model. Second, we show that a class of strategies, which we call “Preference List” strategies, arise commonly as a result of optimizing for a metric other than churn, and produce high churn relative to more randomized strategies under realistic node failure patterns. Using this insight, we demonstrate and explain differences in performance for designs that incorporate varying degrees of randomization. We give examples from a variety of protocols, including anycast, overlay multicast, and distributed hash tables. In many cases, simply adding some randomization can go a long way towards reducing churn.
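The node-selection setting can be sketched as a toy model (hypothetical names, not the authors' code): maintain a working set of size s over a failure trace and count membership changes, plugging in either a preference-list or a uniform-random replacement rule.

```python
# Churn = number of membership changes in the working set over a trace.
import random

def run(trace, s, pick, rng):
    """trace: list of sets of alive node ids over time; pick(candidates,
    rng) chooses a replacement from the sorted candidate list."""
    working = set(sorted(trace[0])[:s])
    churn = 0
    for alive in trace[1:]:
        for node in list(working):
            if node not in alive:               # member failed
                working.discard(node)
                candidates = sorted(alive - working)
                if candidates:
                    working.add(pick(candidates, rng))
                churn += 1
    return churn

prefer_first = lambda cands, rng: cands[0]      # preference-list style
uniform = lambda cands, rng: rng.choice(cands)  # randomized replacement
```

On real failure traces with correlated or flapping nodes, the paper's point is that the fixed-ranking rule keeps returning to failure-prone nodes while the randomized rule spreads that risk.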
Virtual Coordinates for Ad hoc and Sensor Networks
, 2004
Cited by 48 (9 self)
In many applications of wireless ad hoc and sensor networks, position-awareness is of great importance. Often, as in the case of geometric routing, it is sufficient to have virtual coordinates rather than real coordinates. In this paper, we address the problem of obtaining virtual coordinates based on connectivity information. In particular, we propose the first approximation algorithm for this problem and discuss implementation aspects.
Know thy Neighbor's Neighbor: Better Routing for Skip-Graphs and Small Worlds
In Proc. of IPTPS, 2004
Cited by 20 (1 self)
We investigate an approach for routing in P2P networks called neighbor-of-neighbor greedy. We show that this approach may significantly reduce the number of hops used when routing in skip graphs and small worlds. Furthermore, we show that a simple variation of Chord is degree-optimal. Our algorithm is implemented on top of the conventional greedy algorithms and thus maintains the good properties of greedy routing; implementing it can only improve the performance of the system.
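The idea can be sketched directly (illustrative names; `graph` and `dist` are assumed inputs, not the paper's code): instead of forwarding to its own closest neighbor, a node inspects each neighbor's neighbor list and steps toward the best two-hop option.

```python
# Neighbor-of-neighbor (NoN) greedy routing over an adjacency dict.
def non_greedy_route(graph, dist, src, dst):
    """graph: adjacency dict of lists; dist(u, v): routing metric.
    Returns the hop-by-hop path; stops early if greedy gets stuck."""
    path, cur = [src], src
    while cur != dst:
        best = min(graph[cur], key=lambda w: dist(w, dst))
        if best != dst:
            # NoN step: rank each neighbor w by the closest node in
            # {w} U neighbors(w), then hop to the best-ranked w
            best = min(graph[cur],
                       key=lambda w: min(dist(z, dst) for z in graph[w] + [w]))
        if dist(best, dst) >= dist(cur, dst):
            break                              # no progress possible
        path.append(best)
        cur = best
    return path
```

As the abstract notes, the lookahead sits on top of plain greedy routing: it only changes which neighbor is chosen, so every hop still makes greedy progress toward the destination.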
A doubling dimension threshold Θ(log log n) for augmented graph navigability
In 14th European Symposium on Algorithms (ESA), LNCS 4168, 2006
Cited by 17 (7 self)
In his seminal work, Kleinberg showed how to augment meshes using random edges so that they become navigable; that is, greedy routing computes paths of polylogarithmic expected length between any pair of nodes. This raises the crucial question of whether such an augmentation is possible for all graphs. In this paper, we answer this question in the negative by exhibiting a threshold on the doubling dimension above which an infinite family of graphs cannot be augmented to become navigable, whatever the distribution of random edges is. Precisely, it was known that graphs of doubling dimension at most O(log log n) are navigable. We show that for doubling dimension ≫ log log n, an infinite family of graphs cannot be augmented to become navigable. Finally, we complete our result by studying the special case of square meshes, which we prove can always be augmented to become navigable.
On small-world graphs in non-uniformly distributed key spaces
In Proceedings of the 21st International Conference on Data Engineering Workshops (ICDEW), 2005
Cited by 15 (11 self)
In this paper we show that the topologies of most logarithmic-style P2P systems like Pastry, Tapestry, or P-Grid resemble small-world graphs. Inspired by Kleinberg’s small-world model [6], we extend the model of building “routing-efficient” small-world graphs and propose two new models. We show that a graph constructed according to our model for uniform key distribution and logarithmic out-degree has properties similar to the topologies of structured P2P systems with logarithmic out-degree. Moreover, we propose a novel model of building graphs that supports uneven node distributions and preserves all desired properties of Kleinberg’s small-world model. Such a model provides a reference base for emerging P2P systems that need to support uneven key distributions.
Skip-webs: Efficient distributed data structures for multidimensional data sets
In 24th ACM Symp. on Principles of Distributed Computing (PODC), 2005
Cited by 12 (0 self)
We present a framework for designing efficient distributed data structures for multidimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skip-nets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multidimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O(log n/log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log n) space per host. We also show how to make such structures dynamic so as to allow for insertions and deletions in O(log n) messages for quadtrees, octrees, and digital tries, and O(log n/log log n) messages for one-dimensional data. Finally, we show how to apply a blocking strategy to skip-webs to further improve message complexity for one-dimensional data when hosts can store more data.
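For intuition on the one-dimensional bounds, here is a baseline sketch (hypothetical, much simpler than skip-webs themselves): a deterministic skip-list-style search in which each hop along a level is one inter-host message, giving O(log n) messages; the skip-web result improves this to O(log n/log log n).

```python
# Deterministic skip-list-style search over n sorted keys: level l
# links every 2^l-th position; advancing along a level is one
# inter-host message, while dropping a level is a local decision.
def skip_search(keys, target):
    """Return (index of the largest key <= target, messages used).
    Assumes keys is sorted and keys[0] <= target."""
    n = len(keys)
    pos, messages = 0, 0
    for level in range(n.bit_length(), -1, -1):
        stride = 1 << level
        # advance along this level while it does not overshoot target
        while pos + stride < n and keys[pos + stride] <= target:
            pos += stride
            messages += 1
    return pos, messages
```

After descending from a level, the remaining gap is under twice the current stride, so each level contributes at most one message and the total stays logarithmic in n.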