Results 1 - 10 of 137
Novel Architectures for P2P Applications: the Continuous-Discrete Approach
- ACM Transactions on Algorithms
, 2007
"... We propose a new approach for constructing P2P networks based on a dynamic decomposition of a continuous space into cells corresponding to processors. We demonstrate the power of these design rules by suggesting two new architectures, one for DHT (Distributed Hash Table) and the other for dynamic ex ..."
Abstract
-
Cited by 166 (8 self)
We propose a new approach for constructing P2P networks based on a dynamic decomposition of a continuous space into cells corresponding to processors. We demonstrate the power of these design rules by suggesting two new architectures, one for DHT (Distributed Hash Table) and the other for dynamic expander networks. The DHT network, which we call Distance Halving, allows logarithmic routing and load, while preserving constant degrees. Our second construction builds a network that is guaranteed to be an expander. The resulting topologies are simple to maintain and implement. Their simplicity makes it easy to modify and add protocols. We show it is possible to reduce the dilation and the load of the DHT with a small increase of the degree. We present a provably good protocol for relieving hot spots and a construction with high fault tolerance. Finally we show that, using our approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters. Therefore we expect that more distributed data structures could be designed and implemented in a dynamic environment.
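As an illustration of the continuous-discrete idea (not the Distance Halving routing scheme itself), the following minimal Python sketch decomposes [0, 1) into cells owned by processors and serves keys from the owner of the containing cell; the names CellSpace, point, join, leave and lookup are hypothetical, and joins and leaves simply split or merge segments.

    import bisect
    import hashlib

    def point(name: str) -> float:
        """Hash an identifier to a point in the continuous space [0, 1)."""
        h = hashlib.sha256(name.encode()).hexdigest()
        return int(h[:15], 16) / 16**15

    class CellSpace:
        """Toy decomposition of [0, 1) into cells, one per processor.

        Each processor owns the half-open cell from its point to the next
        processor's point; keys are hashed to [0, 1) and stored by the owner
        of the cell that contains them.
        """

        def __init__(self):
            self.points = []   # sorted processor points
            self.owner = {}    # point -> processor name

        def join(self, proc: str) -> None:
            p = point(proc)
            bisect.insort(self.points, p)   # splits the cell containing p
            self.owner[p] = proc

        def leave(self, proc: str) -> None:
            p = point(proc)
            self.points.remove(p)           # the neighbouring cell absorbs the segment
            del self.owner[p]

        def lookup(self, key: str) -> str:
            """Return the processor whose cell contains the key's point."""
            x = point(key)
            i = bisect.bisect_right(self.points, x) - 1
            return self.owner[self.points[i]]   # i == -1 wraps around to the last cell

    if __name__ == "__main__":
        space = CellSpace()
        for proc in ["p1", "p2", "p3", "p4"]:
            space.join(proc)
        print(space.lookup("some-key"))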
Balanced Allocations: The Heavily Loaded Case
, 2006
"... We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selec ..."
Abstract
-
Cited by 72 (9 self)
We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when m ≈ n. In this paper we present the first tight analysis in the heavily loaded case, that is, when m ≫ n rather than m ≈ n. The best previously known results for the multiple-choice processes in the heavily loaded case were obtained using majorization by the single-choice process. This yields an upper bound on the maximum load of m/n + O(√(m ln n / n)) with high probability. We show, however, that the multiple-choice processes are fundamentally different from the single-choice variant in that they have “short memory.” The great consequence of this property is that the deviation of the multiple-choice processes from the optimal allocation (that is, the allocation in which each bin has either ⌊m/n⌋ or ⌈m/n⌉ balls) does not increase with the number of balls as in the case of the single-choice process. In particular, we investigate the allocation obtained by two different multiple-choice allocation schemes …
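A minimal simulation of the process analyzed above, assuming the usual Greedy[d] rule in which each ball is placed into the least loaded of d uniformly random bins; the function name allocate and the chosen parameters are illustrative only. With m ≫ n, the gap above the average load keeps growing with m for d = 1 but stays small for d ≥ 2, reflecting the “short memory” property described in the abstract.

    import random

    def allocate(m: int, n: int, d: int, rng=random.Random(0)) -> list[int]:
        """Throw m balls into n bins; each ball goes to the least loaded of d random bins."""
        load = [0] * n
        for _ in range(m):
            choices = [rng.randrange(n) for _ in range(d)]
            best = min(choices, key=lambda b: load[b])
            load[best] += 1
        return load

    if __name__ == "__main__":
        m, n = 1_000_000, 1_000            # heavily loaded case: m >> n
        for d in (1, 2):
            gap = max(allocate(m, n, d)) - m // n
            print(f"d = {d}: maximum load exceeds the average by {gap}")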
A stochastic process on the hypercube with applications to peer-to-peer networks
- Proc. STOC 2003
"... Consider the following stochastic process executed on a graph G = (V, E) whose nodes are initially uncovered. In each step, pick a node at random and if it is uncovered, cover it. Otherwise, if it has an uncovered neighbor, cover a random uncovered neighbor. Else, do nothing. This can be viewed as a ..."
Abstract
-
Cited by 67 (2 self)
Consider the following stochastic process executed on a graph G = (V, E) whose nodes are initially uncovered. In each step, pick a node at random and if it is uncovered, cover it. Otherwise, if it has an uncovered neighbor, cover a random uncovered neighbor. Else, do nothing. This can be viewed as a structured coupon collector process. We show that for a large family of graphs, O(n) steps suffice to cover all nodes of the graph with high probability, where n is the number of vertices. Among these graphs are d-regular graphs with d = Ω(log n log log n), random d-regular graphs with d = Ω(log n), and the k-dimensional hypercube where n = 2^k. This process arises naturally in answering a question on load balancing in peer-to-peer networks. We consider a distributed hash table in which keys are partitioned across a set of processors, and we assume that the number of processors …
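A direct simulation of the covering process described above, run here on the k-dimensional hypercube; the names cover_time and hypercube are illustrative, not taken from the paper.

    import random

    def cover_time(adj, rng=random.Random(0)) -> int:
        """Run the covering process on a graph given as an adjacency list.

        Each step: pick a uniformly random node; if it is uncovered, cover it;
        otherwise cover a random uncovered neighbour, if one exists.
        Returns the number of steps until every node is covered.
        """
        n = len(adj)
        covered = [False] * n
        remaining, steps = n, 0
        while remaining:
            steps += 1
            v = rng.randrange(n)
            if not covered[v]:
                covered[v] = True
                remaining -= 1
            else:
                uncovered = [u for u in adj[v] if not covered[u]]
                if uncovered:
                    covered[rng.choice(uncovered)] = True
                    remaining -= 1
        return steps

    def hypercube(k: int):
        """Adjacency list of the k-dimensional hypercube on n = 2^k nodes."""
        return [[v ^ (1 << i) for i in range(k)] for v in range(1 << k)]

    if __name__ == "__main__":
        n = 1 << 10
        print(cover_time(hypercube(10)) / n)   # typically a small constant, i.e. O(n) steps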
Push-to-peer video-on-demand system: Design and evaluation
- In UMass Computer Science Technical Report 2006-59
, 2006
"... Number: CR-PRL-2006-11-0001 ..."
Approximate Equilibria and Ball Fusion
- Theory of Computing Systems
, 2002
"... We consider sel sh routing over a network consisting of m parallel links through which n sel sh users route their tra c trying to minimize their own expected latency. Westudy the class of mixed strategies in which the expected latency through each link is at most a constant multiple of the optimum m ..."
Abstract
-
Cited by 59 (23 self)
We consider selfish routing over a network consisting of m parallel links through which n selfish users route their traffic, trying to minimize their own expected latency. We study the class of mixed strategies in which the expected latency through each link is at most a constant multiple of the optimum maximum latency had global regulation been available. For the case of uniform links it is known that all Nash equilibria belong to this class of strategies. We are interested in bounding the coordination ratio (or price of anarchy) of these strategies, defined as the worst-case ratio of the maximum (over all links) expected latency over the optimum maximum latency. The load balancing aspect of the problem immediately implies a lower bound of ln m / ln ln m on the coordination ratio. We give a tight (up to a multiplicative constant) upper bound. To show the upper bound, we analyze a variant of the classical balls and bins problem, in which balls with arbitrary weights are placed into bins according to arbitrary probability distributions. At the heart of our approach is a new probabilistic tool that we call ball fusion.
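A toy Monte Carlo sketch of the weighted balls-and-bins view used in the upper bound: balls with arbitrary weights are placed into bins according to arbitrary (here, fully mixed) probability distributions, and the expected maximum load is compared against the optimum maximum latency. The name sample_max_load and the chosen instance are assumptions for illustration, not the paper's construction.

    import random

    def sample_max_load(weights, dists, trials=5_000, rng=random.Random(0)) -> float:
        """Estimate the expected maximum bin load when ball i of weight weights[i]
        is placed according to its own distribution dists[i] over the bins."""
        m = len(dists[0])
        total = 0.0
        for _ in range(trials):
            load = [0.0] * m
            for w, p in zip(weights, dists):
                load[rng.choices(range(m), weights=p, k=1)[0]] += w
            total += max(load)
        return total / trials

    if __name__ == "__main__":
        m = 8                                        # parallel links (bins)
        weights = [1.0] * 32                         # unit-weight users (balls)
        mixed = [[1.0 / m] * m] * 32                 # fully mixed strategies on uniform links
        opt = max(sum(weights) / m, max(weights))    # optimum max latency with global regulation
        print(sample_max_load(weights, mixed) / opt) # empirical coordination ratio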
A generic scheme for building overlay networks in adversarial scenarios
- In Proc. Intl. Parallel and Distributed Processing Symp
, 2003
"... This paper presents a generic scheme for a central, yet untackled issue in overlay dynamic networks: maintaining stability over long life and against malicious adversaries. The generic scheme maintains desirable properties of the underlying structure including low diameter, and efficient routing mec ..."
Abstract
-
Cited by 52 (6 self)
This paper presents a generic scheme for a central, yet untackled, issue in dynamic overlay networks: maintaining stability over a long lifetime and against malicious adversaries. The generic scheme maintains desirable properties of the underlying structure, including low diameter, an efficient routing mechanism, and balanced node dispersal. These desired properties are maintained in a decentralized manner, without resorting to global updates or periodic stabilization protocols, even against an adaptive adversary that controls the arrival and departure of nodes.
Path selection and multipath congestion control.
- In INFOCOM '07
, 2007
"... ABSTRACT In this paper we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control, coordinated control where the rates over the paths are determined as ..."
Abstract
-
Cited by 40 (2 self)
In this paper we investigate the benefits that accrue from the use of multiple paths by a session, coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control: coordinated control, where the rates over the paths are determined as a function of all paths, and uncoordinated control, where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput of uncoordinated control behaves as if each user had but a single path (scaling like log(log(N))/log(N), where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare-maximizing states, provided, in the latter case, that the rate controllers over each path do not exhibit any RTT bias (as in TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
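A heavily simplified sketch contrasting the two controller classes for a single session with logarithmic utility and linear path prices: the coordinated variant takes the marginal utility at the session's total rate, while the uncoordinated variant runs an independent controller on each path's own rate. The function evolve, the price functions, and the step size are illustrative assumptions, not the controllers analyzed in the paper.

    def evolve(prices, coordinated, steps=20_000, k=0.001):
        """Toy discrete-time rate control for one session over two paths.

        prices: list of functions mapping a path's rate to its congestion price.
        coordinated: if True, marginal utility U'(x) = 1/x is evaluated at the
        session's total rate; if False, each path uses only its own rate.
        """
        x = [0.1, 0.1]
        for _ in range(steps):
            total = sum(x)
            for p in range(len(x)):
                marginal = 1.0 / (total if coordinated else x[p])
                x[p] = max(1e-6, x[p] + k * x[p] * (marginal - prices[p](x[p])))
        return x

    if __name__ == "__main__":
        prices = [lambda r: r / 2.0, lambda r: r / 1.0]   # the second path congests faster
        print("coordinated:  ", evolve(prices, True))     # shifts load toward the cheaper path
        print("uncoordinated:", evolve(prices, False))    # each path equalizes on its own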
Distributed selfish load balancing
, 2006
"... Suppose that a set of m tasks are to be shared as equally as possible amongst a set of n resources. A game-theoretic mechanism to find a suitable allocation is to associate each task with a “selfish agent”, and require each agent to select a resource, with the cost of a resource being the number of ..."
Abstract
-
Cited by 40 (2 self)
Suppose that a set of m tasks are to be shared as equally as possible amongst a set of n resources. A game-theoretic mechanism to find a suitable allocation is to associate each task with a “selfish agent”, and require each agent to select a resource, with the cost of a resource being the number of agents to select it. Agents would then be expected to migrate from overloaded to underloaded resources, until the allocation becomes balanced. Recent work has studied the question of how this can take place within a distributed setting in which agents migrate selfishly without any centralized control. In this paper we discuss a natural protocol for the agents which combines the following desirable features: it can be implemented in a strongly distributed setting, uses no central control, and has good convergence properties. For m ≫ n, the system becomes approximately balanced (an ε-Nash equilibrium) in expected time O(log log m). We show using a martingale technique that the process converges to a perfectly balanced allocation in expected time O(log log m + n^4). We also give a lower bound of Ω(max{log log m, n}) for the convergence time.
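A sketch of one natural migration rule in the spirit of the protocol (the abstract does not spell out the exact rule): in each round every task samples a random resource and moves there with probability 1 - destination load / current load when the destination is less loaded. The function balance and the chosen parameters are illustrative.

    import random

    def balance(loads, rounds, rng=random.Random(0)):
        """In each round, every task samples a random resource j and migrates with
        probability 1 - loads[j]/loads[i] if j is less loaded than its current
        resource i; migrations within a round are applied in parallel."""
        n = len(loads)
        for _ in range(rounds):
            moves = [0] * n
            for i in range(n):
                for _ in range(loads[i]):
                    j = rng.randrange(n)
                    if loads[j] < loads[i] and rng.random() < 1 - loads[j] / loads[i]:
                        moves[i] -= 1
                        moves[j] += 1
            loads = [loads[i] + moves[i] for i in range(n)]
        return loads

    if __name__ == "__main__":
        rng = random.Random(1)
        n, m = 16, 16_000
        loads = [0] * n
        for _ in range(m):
            loads[rng.randrange(n)] += 1        # unbalanced start: single-choice allocation
        before = max(loads) - min(loads)
        loads = balance(loads, rounds=5)
        print(f"load gap: {before} -> {max(loads) - min(loads)}")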
3D Scattered Data Approximation with Adaptive Compactly Supported Radial Basis Functions
"... In this paper, we develop an adaptive RBF fitting procedure for a high quality approximation of a set of points scattered over a piecewise smooth surface. We use compactly supported RBFs whose centers are randomly chosen from the points. The randomness is controlled by the point density and surface ..."
Abstract
-
Cited by 38 (2 self)
In this paper, we develop an adaptive RBF fitting procedure for a high-quality approximation of a set of points scattered over a piecewise smooth surface. We use compactly supported RBFs whose centers are randomly chosen from the points. The randomness is controlled by the point density and surface geometry. For each RBF, its support size is chosen adaptively according to the surface geometry in a vicinity of the RBF center. All this leads to a noise-robust, high-quality approximation of the set. We also adapt our basic technique for shape reconstruction from registered range scans by taking into account measurement confidences. Finally, an interesting link between our RBF fitting procedure and partition of unity approximations is established and discussed.
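A minimal sketch of scattered-data fitting with compactly supported RBFs, using Wendland's C2 function and a single fixed support radius rather than the paper's adaptive selection of centers and supports; wendland, fit_rbf and evaluate are illustrative names, and NumPy is assumed.

    import numpy as np

    def wendland(r):
        """Wendland's compactly supported C2 function: (1 - r)^4 (4r + 1) for r < 1, else 0."""
        return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

    def fit_rbf(points, values, centers, support):
        """Least-squares weights for f(x) = sum_j w_j * wendland(|x - c_j| / support)."""
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        w, *_ = np.linalg.lstsq(wendland(d / support), values, rcond=None)
        return w

    def evaluate(x, centers, support, w):
        """Evaluate the fitted RBF sum at query points x."""
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        return wendland(d / support) @ w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = rng.uniform(-1.0, 1.0, size=(500, 3))                   # scattered 3D samples
        vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1]) + 0.3 * pts[:, 2]
        centers = pts[rng.choice(len(pts), size=100, replace=False)]  # random subset as centers
        w = fit_rbf(pts, vals, centers, support=0.8)
        print(np.max(np.abs(evaluate(pts, centers, 0.8, w) - vals)))  # fitting residual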