Results 1–10 of 14
Competitive Distributed File Allocation
, 1993
Abstract

Cited by 106 (12 self)
This paper deals with the file allocation problem [BFR92], concerning the dynamic optimization of communication costs for accessing data in a distributed environment. We develop a dynamic file reallocation strategy that adapts online to a sequence of read and write requests whose locations and relative frequencies are completely unpredictable. This is achieved by replicating the file in response to read requests and migrating the file in response to write requests, paying the associated communication costs, so that the file stays close to the processors that access it frequently. We develop the first explicit deterministic online strategy assuming the existence of global information about the state of the network; previous (deterministic) solutions were complicated and more expensive. Our solution has an (optimal) logarithmic competitive ratio. The paper also contains the first explicit deterministic data migration [BS89] algorithm, achieving the best known competitive ratio for that problem. Using somewhat ...
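The replicate-on-read / migrate-on-write idea in this abstract can be sketched in a few lines. The following is a toy model, not the paper's algorithm: it assumes a uniform network where every inter-node distance is 1, and the class and method names are illustrative.

```python
# Toy sketch of replicate-on-read / migrate-on-write on a uniform network
# (every inter-node hop costs 1). Not the paper's competitive algorithm.

class FileAllocator:
    def __init__(self, home):
        self.replicas = {home}  # nodes currently holding a copy of the file
        self.cost = 0           # accumulated communication cost

    def read(self, node):
        if node not in self.replicas:
            self.cost += 1           # fetch the file over one hop...
            self.replicas.add(node)  # ...and replicate toward the reader

    def write(self, node):
        # a write must reach (here: invalidate) every other copy
        self.cost += len(self.replicas - {node})
        self.replicas = {node}       # migrate: the writer keeps the only copy

alloc = FileAllocator(home=0)
for n in (1, 2, 2):
    alloc.read(n)    # reads at nodes 1 and 2 each replicate once (cost 2)
alloc.write(3)       # invalidates copies at 0, 1, 2 and migrates (cost 3)
print(alloc.replicas, alloc.cost)  # -> {3} 5
```

Repeated reads at a node become free once it holds a replica, while a write pays in proportion to the number of outstanding copies; balancing these two costs is exactly the tension the competitive analysis addresses.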
Simpler and better approximation algorithms for network design
 In Proceedings of the 35th Annual ACM Symposium on Theory of Computing
, 2003
Abstract

Cited by 79 (13 self)
We give simple and easy-to-analyze randomized approximation algorithms for several well-studied NP-hard network design problems. Our algorithms improve over the previously best known approximation ratios. Our main results are the following. We give a randomized 3.55-approximation algorithm for the connected facility location problem. The algorithm requires three lines to state, one page to analyze, and improves the best-known performance guarantee for the problem. We give a 5.55-approximation algorithm for virtual private network design. Previously, constant-factor approximation algorithms were known only for special cases of this problem. We give a simple constant-factor approximation algorithm for the single-sink buy-at-bulk network design problem. Our performance guarantee improves over what was previously known, and is an order of magnitude improvement over previous combinatorial approximation algorithms for the problem.
Distributed Paging for General Networks
, 1996
Abstract

Cited by 58 (5 self)
Distributed paging [BFR92, ABF93b, AK95] deals with the dynamic allocation of copies of files in a distributed network so as to minimize the total communication cost over a sequence of read and write requests. Most previous work deals with the file allocation problem [BS89, West91, CLRW93, ABF93a, WY93, Koga93, AK94, LRWY94], where infinite nodal memory capacity is assumed. In contrast, the distributed paging problem makes the more realistic assumption that nodal memory capacity is limited. Former work on distributed paging deals with the problem only in the case of a uniform network topology. This paper gives the first distributed paging algorithm for general networks. The algorithm is competitive in both storage and communication; the competitive ratios are polylogarithmic in the total number of network nodes and in the diameter of the network.
Approximation via cost-sharing: a simple approximation algorithm for the multicommodity rent-or-buy problem
 In IEEE Symposium on Foundations of Computer Science (FOCS
, 2003
Abstract

Cited by 46 (7 self)
We study the multicommodity rent-or-buy problem, a type of network design problem with economies of scale. In this problem, capacity on an edge can be rented, with cost incurred on a per-unit-of-capacity basis, or bought, which allows unlimited use after payment of a large fixed cost. Given a graph and a set of source-sink pairs, we seek a minimum-cost way of installing sufficient capacity on edges so that a prescribed amount of flow can be sent simultaneously from each source to the corresponding sink. The first constant-factor approximation algorithm for this problem was recently given by Kumar et al. (FOCS ’02); however, that algorithm and its analysis are both quite complicated, and its performance guarantee is extremely large. In this paper, we give a conceptually simple 12-approximation algorithm for this problem. Our analysis makes crucial use of cost sharing, the task of allocating the cost of an object among its many users in a “fair” manner. While techniques from approximation algorithms have recently yielded new progress on cost-sharing problems, our work is the first to show the converse: ideas from cost sharing can be fruitfully applied in the design and analysis of approximation algorithms.
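The rent-versus-buy trade-off on a single edge reduces to comparing a linear cost against a fixed one. A minimal sketch (the rate and fixed cost below are hypothetical values chosen only to illustrate the threshold, not parameters from the paper):

```python
def edge_cost(flow, rent_rate=1.0, buy_cost=12.0):
    """Cheapest way to support `flow` units on one edge.
    Rent: pay rent_rate per unit of flow. Buy: one fixed cost, unlimited use.
    Illustrative parameter values only."""
    return min(rent_rate * flow, buy_cost)

# Below the break-even point buy_cost / rent_rate = 12, renting wins:
print(edge_cost(5))    # -> 5.0  (rent)
print(edge_cost(30))   # -> 12.0 (buy)
```

The difficulty in the actual problem is that many source-sink pairs share edges, so buying an edge can amortize its fixed cost across pairs; that sharing is what the cost-sharing analysis quantifies.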
Online Generalized Steiner Problem
, 1996
Abstract

Cited by 40 (5 self)
The Generalized Steiner Problem (GSP) is defined as follows. We are given a graph with nonnegative edge weights and a set of pairs of vertices. The algorithm has to construct a minimum-weight subgraph in which the two nodes of each pair are connected by a path. Offline approximation algorithms for the generalized Steiner problem were given in [AKR91, GW92]. We consider the online generalized Steiner problem, in which pairs of vertices arrive online and must be connected immediately. We give a simple O(log² n)-competitive deterministic online algorithm; the previous best algorithm was O(√n log n)-competitive [WY93]. We also consider the network connectivity leasing problem, a generalization of the GSP in which edges of the graph can be either bought or leased at different costs. We provide a simple randomized O(log² n)-competitive algorithm based on the online generalized Steiner problem result.
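The natural greedy strategy for the online setting can be sketched as follows (hedged: this is the obvious greedy baseline, and the paper's actual algorithm and analysis may differ): when a pair (s, t) arrives, connect it by a shortest path in which already-bought edges count as free, then buy the new edges on that path.

```python
# Greedy online GSP sketch: each arriving pair is connected by a shortest
# path where previously bought edges cost 0. Illustrative, not the paper's
# algorithm. Graph: n vertices, undirected weighted edges (u, v, cost).
import heapq

def greedy_online_gsp(n, edges, pairs):
    adj = {v: [] for v in range(n)}
    weight = {}
    for u, v, c in edges:
        adj[u].append(v)
        adj[v].append(u)
        weight[frozenset((u, v))] = c

    bought, total = set(), 0
    for s, t in pairs:
        # Dijkstra with bought edges free
        dist, prev, heap = {s: 0}, {}, [(0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v in adj[u]:
                e = frozenset((u, v))
                c = 0 if e in bought else weight[e]
                if d + c < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + c, u
                    heapq.heappush(heap, (d + c, v))
        total += dist[t]          # pay only for newly bought edges
        u = t
        while u != s:             # buy every edge on the chosen path
            bought.add(frozenset((u, prev[u])))
            u = prev[u]
    return total

cost = greedy_online_gsp(
    4,
    [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 5)],
    [(0, 2), (0, 3)],
)
print(cost)  # -> 3: pair (0,2) buys 0-1-2 for 2; (0,3) reuses it, adds 2-3
```

Note how the second pair reuses the edges bought for the first pair instead of taking the expensive direct edge; bounding how badly such reuse decisions can go is the content of the competitive analysis.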
Fast Distributed Network Decompositions and Covers
 Journal of Parallel and Distributed Computing
, 1996
Abstract

Cited by 24 (4 self)
This paper presents deterministic sublinear-time distributed algorithms for network decomposition and for constructing a sparse neighborhood cover of a network. The latter construction leads to improved distributed preprocessing time for a number of distributed algorithms, including all-pairs shortest-paths computation, load balancing, broadcast, and bandwidth management. A preliminary version of this paper appeared in the Proceedings of the Eleventh Annual ACM Symposium on the Principles of Distributed Computing.
Heat & Dump: Competitive Distributed Paging
, 1993
Abstract

Cited by 18 (4 self)
This paper gives a randomized competitive distributed paging algorithm called Heat & Dump. Its competitive ratio is logarithmic in the total storage capacity of the network, which is optimal to within a constant factor. This is in contrast to the linear optimal deterministic competitive ratio [BFR92].

1 Introduction

The basic paradigm: Distributed Virtual Memory. Virtual addressing has the advantage that the physical address is separate from the logical address [KELS62]. Briefly, the name of a memory item is decoupled from its physical location in memory; moreover, the physical location may change dynamically at runtime. With the appearance of massively parallel machines in the 1980s, it was natural to extend the virtual memory concept from the traditional uniprocessor to a distributed shared-memory environment. In other words, the programmer can use the convenient Parallel Random Access Machine (PRAM) abstraction to write the program, which will then be compiled automatically ...
Modular Competitiveness for Distributed Algorithms
 In Proc. 28th ACM Symp. on Theory of Computing (STOC
, 1996
Abstract

Cited by 13 (2 self)
We define a novel measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al. [3], which measures how quickly an algorithm can finish tasks that start at specified times. An important property of the throughput measure is that it is modular: we define a notion of relative competitiveness with the property that a k-relatively competitive implementation of an object T using a subroutine U, combined with an ℓ-competitive implementation of U, gives a kℓ-competitive algorithm for ...
Compositional Competitiveness for Distributed Algorithms ∗
, 2004
Abstract
We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al. [3], which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an ℓ-competitive member of that class, gives a combined algorithm that is kℓ-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. An earlier version of this work appeared as “Modular Competitiveness for Distributed Algorithms.”
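The kℓ-composition claim can be written schematically as a two-step chaining of guarantees (the notation below is illustrative, not the paper's):

```latex
% Let W_X(\sigma) be the work algorithm X needs to complete schedule \sigma.
% Suppose A is k-competitive relative to a class of subroutines, i.e. for
% every member U of the class,
%     W_{A[U]}(\sigma) \le k \cdot W_{\mathrm{OPT}[U]}(\sigma),
% and B is an \ell-competitive member of that class. Chaining the two bounds:
W_{A[B]}(\sigma)
  \;\le\; k \cdot W_{\mathrm{OPT}[B]}(\sigma)
  \;\le\; k\ell \cdot W_{\mathrm{OPT}}(\sigma),
% so the combined algorithm A[B] is k\ell-competitive.
```

The point of compositionality is exactly that the two inequalities can be proved independently: the outer algorithm against an arbitrary subroutine, and the subroutine against the unrestricted optimum.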
Blindly-Competitive Algorithms: Pricing & Bidding as a Case Study (Extended Abstract)
, 1995
Abstract
The standard setting for competitive analysis of online algorithms assumes that the online algorithm knows the past (but not the future) inputs, and can optimize its performance by "learning" from the mistakes of the past. This framework cannot capture some real-life online decision-making, which takes place without full knowledge of past and present inputs. Instead, the online algorithm knows only a function (or part) of its past decisions and real inputs (which we call the hidden input). A typical example is that of economic "warfare" involving, say, two companies and a pool of (unknown) customers. In this work, we focus on the problem of pricing an interdependent collection of resources, in the absence of knowledge of the following crucial information about the past and future inputs:
- the customers' financial benefit or the prices offered by the competition,
- the duration of the contracts, and
- the future demand for the pro...