Results 1–10 of 12
Algorithmic mechanism design
Games and Economic Behavior, 1999
Cited by 561 (17 self)
We consider algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. Following notions from the field of mechanism design, we suggest a framework for studying such algorithms. Our main technical contribution concerns the study of a representative task scheduling problem for which the standard mechanism design tools do not suffice.
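As an illustration of the mechanism-design idea invoked in this abstract (a minimal sketch, not the paper's scheduling mechanism), a second-price rule for allocating a single task to the agent with the lowest reported cost makes truthful reporting a dominant strategy:

```python
# Illustrative sketch only: a second-price (Vickrey) allocation rule.
# The task goes to the agent with the lowest reported cost, and the
# winner is paid the second-lowest report.  Because the payment does
# not depend on the winner's own report, truthful cost reporting is a
# dominant strategy for every agent.

def allocate_task(reported_costs):
    """Return (winner_index, payment) under the second-price rule."""
    order = sorted(range(len(reported_costs)), key=lambda i: reported_costs[i])
    winner = order[0]
    payment = reported_costs[order[1]]  # second-lowest reported cost
    return winner, payment

winner, payment = allocate_task([7.0, 3.0, 5.0])
# agent 1 wins and is paid 5.0, independent of its own report
```

The abstract's point is precisely that for its representative scheduling problem, standard tools of this kind do not suffice.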
Work-competitive scheduling for cooperative computing with dynamic groups
SIAM Journal on Computing, 2005
Cited by 15 (4 self)
The problem of cooperatively performing a set of t tasks in a decentralized computing environment subject to failures is one of the fundamental problems in distributed computing. The setting with partitionable networks is especially challenging, as algorithmic solutions must accommodate the possibility that groups of processors become disconnected (and, perhaps, reconnected) during the computation. The efficiency of task-performing algorithms is often assessed in terms of work: the total number of tasks, counting multiplicities, performed by all of the processors during the computation. In general, the scenario where the processors are partitioned into g disconnected components causes any task-performing algorithm to have work Ω(t · g) even if each group of processors performs no more than the optimal number of Θ(t) tasks. Given that such pessimistic lower bounds apply to any scheduling algorithm, we pursue a competitive analysis. Specifically, this paper studies a simple randomized scheduling algorithm for p asynchronous processors, connected by a dynamically changing communication medium, to complete t known tasks. The performance of this algorithm is compared against that of an omniscient offline algorithm with full knowledge of the future changes in the communication medium. The paper describes a notion of computation width, which associates a natural number with a history of changes in the communication medium, and shows both upper and lower bounds on work-competitiveness in terms of this quantity. Specifically, it is shown that the simple randomized algorithm obtains the competitive ratio (1 + cw/e), where cw is the computation width and e is the base of the natural logarithm (e = 2.7182...); this competitive ratio is then shown to be tight.
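The work measure and the Ω(t · g) lower bound quoted above can be made concrete with a toy simulation (a sketch under the simplifying assumption that the g groups never communicate, which is the worst case; the paper's algorithm and its competitive analysis are far more subtle):

```python
import random

# Sketch: each isolated group must perform all t tasks on its own, in an
# independently random order.  Work counts tasks with multiplicities, so
# with g permanently disconnected groups the total work is t * g, which
# is the worst case matching the Omega(t * g) lower bound.

def group_work(t, rng):
    """The tasks a single isolated group performs: a random order over all t."""
    order = list(range(t))
    rng.shuffle(order)
    return order

def total_work(t, g, seed=0):
    rng = random.Random(seed)
    return sum(len(group_work(t, rng)) for _ in range(g))

# With no communication at all, work is t * g, e.g. total_work(100, 4) == 400.
```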
Distributed Cooperation during the Absence of Communication, 2001
Cited by 14 (7 self)
This paper presents a study of a distributed cooperation problem under the assumption that processors may not be able to communicate for a prolonged time. The problem for n processors is defined in terms of t tasks that need to be performed efficiently and that are known to all processors. The results of this study characterize the ability of the processors to schedule their work so that when some processors establish communication, the wasted (redundant) work these processors have collectively performed prior to that time is controlled. The lower bound for wasted work presented here shows that for any set of schedules there are two processors such that when they complete t1 and t2 tasks respectively the number of redundant tasks is Ω(t1·t2/t). For n = t and for schedules longer than √n, the number of redundant tasks for two or more processors must be at least 2. The upper bound on pairwise waste for schedules of length √n is shown to be 1. Our efficient deterministic schedule construction is motivated by design theory. To obtain linear-length schedules, a novel deterministic and efficient construction is given. This construction has the property that pairwise wasted work increases gracefully as processors progress through their schedules. Finally, our analysis of a random scheduling solution shows that with high probability pairwise waste is well behaved at all times: specifically, two processors having completed t1 and t2 tasks, respectively, are guaranteed to have no more than t1·t2/t + ε redundant tasks, where ε = O(log n + √((t1·t2/t) log n)).
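The t1·t2/t figure for random schedules has a simple interpretation that can be checked empirically (a sketch, not the paper's analysis): if two processors execute tasks in independent random orders, the expected overlap of their executed prefixes is t1·t2/t.

```python
import random

# Simulation of the random-scheduling bound: two processors each execute
# tasks in an independent random order.  After they complete t1 and t2
# tasks respectively, the expected number of tasks performed by both
# (redundant work) is t1 * t2 / t.

def expected_overlap(t, t1, t2, trials=5000, seed=1):
    rng = random.Random(seed)
    tasks = list(range(t))
    total = 0
    for _ in range(trials):
        a = rng.sample(tasks, t1)   # prefix of processor 1's schedule
        b = rng.sample(tasks, t2)   # prefix of processor 2's schedule
        total += len(set(a) & set(b))
    return total / trials

# For t = 100 and t1 = t2 = 50, the prediction t1*t2/t is 25.
est = expected_overlap(100, 50, 50)
```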
Approximating max-min linear programs with local algorithms
In Proc. 22nd IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2008
Cited by 8 (8 self)
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise min_k ∑_v c_kv x_v subject to ∑_v a_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ≥ 0, a_iv ≥ 0, and the support sets V_i = {v : a_iv > 0}, V_k = {v : c_kv > 0}, I_v = {i : a_iv > 0} and K_v = {k : c_kv > 0} have bounded size. In the distributed setting, each agent v is responsible for choosing the value of x_v, and the communication network is a hypergraph H where the sets V_k and V_i constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if |V_i| and |V_k| are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in H.
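A tiny max-min LP instance makes the objective concrete (a brute-force grid sketch for a hypothetical two-variable instance; the paper's local algorithms are the actual contribution):

```python
# Toy instance of the max-min LP above, solved by grid search:
# maximise min_k sum_v c[k][v]*x[v]
# subject to sum_v a[i][v]*x[v] <= 1 for each i, and x[v] >= 0.
# This is an illustration only, not a local algorithm.

def maxmin_grid(c, a, steps=200):
    assert len(c[0]) == 2, "grid-search sketch handles two variables"
    best, best_x = float("-inf"), None
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = (i / steps, j / steps)
            if all(sum(ai[v] * x[v] for v in range(2)) <= 1 for ai in a):
                val = min(sum(ck[v] * x[v] for v in range(2)) for ck in c)
                if val > best:
                    best, best_x = val, x
    return best, best_x

# Two objectives, one packing constraint x0 + x1 <= 1: the optimum must
# balance both objectives, giving x = (0.5, 0.5) and value 0.5.
val, x = maxmin_grid(c=[[1.0, 0.0], [0.0, 1.0]], a=[[1.0, 1.0]])
```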
Optimal Scheduling for Disconnected Cooperation, 2001
Cited by 8 (3 self)
We consider a distributed environment consisting of n processors that need to perform t tasks. We assume that communication is initially unavailable and that processors begin work in isolation. At some unknown point of time an unknown collection of processors may establish communication. Before processors begin communication they execute tasks in the order given by their schedules. Our goal is to schedule the work of isolated processors so that when communication is established for the first time, the number of redundantly executed tasks is controlled. We quantify worst-case redundancy as a function of processor advancements through their schedules. In this work we refine and simplify an extant deterministic construction for schedules with n ≤ t, and we develop a new analysis of its waste. The new analysis shows that for any pair of schedules, the number of redundant tasks can be controlled for the entire range of t tasks. Our new result is asymptotically optimal: the tails of these schedules are within a 1 + O(n^(-1/4)) factor of the lower bound. We also present two new deterministic constructions, one for t ≥ n and the other for t ≥ n^(3/2), which substantially improve pairwise waste for all prefixes of length t/√n, and offer near-optimal waste for the tails of the schedules. Finally, we present bounds for the waste of any collection of k ≥ 2 processors for both deterministic and randomized constructions.
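The pairwise waste measure used throughout this abstract reduces to counting the overlap of executed schedule prefixes; a minimal sketch with a hypothetical pair of schedules:

```python
# Sketch of the "pairwise waste" measure: given two schedules
# (permutations of the same t tasks), the waste after the processors
# have completed t1 and t2 tasks respectively is the number of tasks
# appearing in both executed prefixes.

def pairwise_waste(schedule_a, schedule_b, t1, t2):
    return len(set(schedule_a[:t1]) & set(schedule_b[:t2]))

# Two example schedules over t = 6 tasks:
a = [0, 1, 2, 3, 4, 5]
b = [3, 4, 5, 0, 1, 2]
# After 3 tasks each, the prefixes {0,1,2} and {3,4,5} are disjoint, so
# no work is wasted; after 4 tasks each, two tasks are redundant.
```

Good schedule constructions are exactly those whose prefixes stay nearly disjoint for as long as possible.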
Distributed Computation Meets Design Theory: Local Scheduling for Disconnected Cooperation
Bulletin of the EATCS, 2004
Target Shooting with Programmed Random Variables
Annals of Applied Probability, 1995
Cited by 2 (0 self)
Let X1, ..., Xn be pairwise independent random variables of known (but not necessarily identical) distribution; we wish to select a subset of these whose sum will be as close as possible to some known target value T. Conditions described below force the selections to be made by a primitive distributed system (similar to one considered by Papadimitriou and Yannakakis [2] in PODC '91); here we are able to obtain a surprising amount of information about optimal solutions. The conditions are that each variable must be "programmed" in advance, joining the selected set according to its own value. Thus, for example, one variable might be programmed to join just if its value lies between α and β, while another is told to join regardless of its value. Our object is to find a strategy, that is, a collection of programs, which minimizes the mean square error in approximating T. Typical applications involve producing a steady flow of some commodity when supply is controlled at a mult...
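The "program" abstraction can be sketched directly (an illustrative simulation assuming Uniform(0,1) variables and interval programs; the paper characterises the optimal programs analytically):

```python
import random

# Sketch: each variable X_i is assigned an interval program [lo_i, hi_i]
# in advance and joins the selected set exactly when its realised value
# falls in that interval.  We estimate the mean square error of the
# resulting sum against a target T by Monte Carlo.

def mse(programs, target, trials=20000, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = 0.0
        for lo, hi in programs:
            x = rng.uniform(0.0, 1.0)   # assumed Uniform(0,1) variable
            if lo <= x <= hi:           # this variable's program
                s += x
        total += (s - target) ** 2
    return total / trials

# "Always join" programs for two uniforms, aiming at their mean sum 1.0:
# the MSE is then just Var(X1 + X2) = 2/12, about 0.167.
always = [(0.0, 1.0), (0.0, 1.0)]
err = mse(always, target=1.0)
```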
Selfish load balancing under partial knowledge
In Proceedings of the 32nd International Symposium on Mathematical Foundations of Computer Science (MFCS), 2007
Cited by 1 (0 self)
We consider n selfish agents or players, each having a load, who want to place their loads in one of two bins. The agents have an incomplete picture of the world: they know some loads exactly and only a probability distribution for the rest. We study Nash equilibria for this model, compute the Price of Anarchy for some cases, and show that sometimes extra information adversely affects the Divergence Ratio (a kind of subjective Price of Anarchy).
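The full-information version of this two-bin game has pure Nash equilibria reachable by best-response dynamics (a minimal sketch of that baseline; the paper's contribution is the partial-knowledge setting):

```python
# Sketch of the complete-information two-bin load balancing game: each
# agent places its load in one of two bins and pays the total load of
# its own bin.  Best-response dynamics (agents move while a move
# strictly helps them) converges to a pure Nash equilibrium here.

def best_response_equilibrium(loads):
    assignment = [0] * len(loads)        # start with everyone in bin 0
    changed = True
    while changed:
        changed = False
        for i, w in enumerate(loads):
            totals = [0.0, 0.0]
            for j, wj in enumerate(loads):
                totals[assignment[j]] += wj
            other = 1 - assignment[i]
            # move if the other bin would be strictly lighter for agent i
            if totals[other] + w < totals[assignment[i]]:
                assignment[i] = other
                changed = True
    return assignment

# Loads 3, 2, 2: at equilibrium one bin holds {3} and the other {2, 2},
# so the bin totals are 3 and 4.
```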