Results 1 - 10 of 662
Worst-case equilibria
- In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, 1999
"... In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a ver ..."
Abstract
-
Cited by 847 (17 self)
- Add to MetaCart
In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems.
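For concreteness, the sketch below illustrates the proposed ratio on a toy version of the model (not code from the paper, which analyzes mixed equilibria): agents place jobs on identical parallel links, an agent's cost is its link's load, the social cost is the makespan, and we brute-force the pure Nash assignments of a few hypothetical jobs.

```python
# A minimal sketch (hypothetical instance) of the worst-case-equilibrium ratio:
# brute-force pure Nash equilibria on identical links and compare the worst
# equilibrium makespan to the optimal makespan.
from itertools import product

weights = [2, 2, 1, 1]      # hypothetical job sizes
links = 2

def loads(assign):
    l = [0] * links
    for w, m in zip(weights, assign):
        l[m] += w
    return l

def is_pure_nash(assign):
    l = loads(assign)
    for i, m in enumerate(assign):
        for alt in range(links):
            # moving job i to `alt` must not strictly decrease its own cost
            if alt != m and l[alt] + weights[i] < l[m]:
                return False
    return True

profiles = list(product(range(links), repeat=len(weights)))
opt = min(max(loads(p)) for p in profiles)
worst_nash = max(max(loads(p)) for p in profiles if is_pure_nash(p))
print("worst Nash / optimum:", worst_nash / opt)   # 4/3 for this instance
```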
How bad is selfish routing?
- Journal of the ACM, 2002
"... We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route t ..."
Abstract
-
Cited by 657 (27 self)
- Add to MetaCart
We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times—the total latency—is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a "selfishly motivated" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of selfishly routed traffic may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic.
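The 4/3 bound for linear latencies can be illustrated with a Pigou-style two-edge example; the sketch below is only an illustration under assumed latency functions l1(x) = 1 and l2(x) = x, not code from the paper.

```python
# A minimal sketch: one unit of traffic over two parallel edges with latencies
# l1(x) = 1 and l2(x) = x.  Selfish routing puts all traffic on edge 2 (its
# latency never exceeds 1), while the optimum splits the traffic in half.
def total_latency(x2):
    """Total latency when a fraction x2 uses edge 2 and 1 - x2 uses edge 1."""
    return (1 - x2) * 1.0 + x2 * x2

selfish = total_latency(1.0)                                   # all traffic on edge 2: cost 1
optimum = min(total_latency(i / 1000) for i in range(1001))    # attained at x2 = 1/2: cost 3/4
print("selfish / optimal total latency:", selfish / optimum)   # ~ 4/3
```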
Sprite: A Simple, Cheat-Proof, Credit-Based System for Mobile Ad-Hoc Networks
- In Proceedings of IEEE INFOCOM, 2002
"... Mobile ad hoc networking has been an active research area for several years. How to stimulate cooperation among selfish mobile nodes, however, is not well addressed yet. In this paper, we propose Sprite, a simple, cheat-proof, creditbased system for stimulating cooperation among selfish nodes in mob ..."
Abstract
-
Cited by 484 (17 self)
- Add to MetaCart
Mobile ad hoc networking has been an active research area for several years. How to stimulate cooperation among selfish mobile nodes, however, is not well addressed yet. In this paper, we propose Sprite, a simple, cheat-proof, credit-based system for stimulating cooperation among selfish nodes in mobile ad hoc networks. Our system provides incentive for mobile nodes to cooperate and report actions honestly. Compared with previous approaches, our system does not require any tamper-proof hardware at any node. Furthermore, we present a formal model of our system and prove its properties. Evaluations of a prototype implementation show that the overhead of our system is small. Simulations and analysis show that mobile nodes can cooperate and forward each other's messages, unless the resource of each node is extremely low.
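As a very rough illustration only, the sketch below models a hypothetical credit-clearing step of the kind a credit-based system relies on: forwarders report a relayed message and a central service debits the sender and credits them. Receipt formats, pricing, and the cheat-detection logic of the actual system are not modeled here.

```python
# A highly simplified, hypothetical sketch of credit clearing: debit the sender
# of a message and credit each node that reports having forwarded it.
from collections import defaultdict

balances = defaultdict(float)

def clear(sender, forwarders, charge_per_hop=1.0):
    """Settle one message: debit the sender, credit each reporting forwarder."""
    balances[sender] -= charge_per_hop * len(forwarders)
    for node in forwarders:
        balances[node] += charge_per_hop

clear("A", ["B", "C"])       # A's message relayed by B and C
clear("B", ["C"])
print(dict(balances))        # {'A': -2.0, 'B': 0.0, 'C': 2.0}
```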
Tussle in cyberspace: Defining tomorrow’s Internet
- In Proc. ACM SIGCOMM, 2002
"... Abstract—The architecture of the Internet is based on a number of principles, including the self-describing datagram packet, the end-to-end arguments, diversity in technology and global addressing. As the Internet has moved from a research curiosity to a recognized component of mainstream society, n ..."
Abstract
-
Cited by 307 (10 self)
- Add to MetaCart
(Show Context)
The architecture of the Internet is based on a number of principles, including the self-describing datagram packet, the end-to-end arguments, diversity in technology and global addressing. As the Internet has moved from a research curiosity to a recognized component of mainstream society, new requirements have emerged that suggest new design principles, and perhaps suggest that we revisit some old ones. This paper explores one important reality that surrounds the Internet today: different stakeholders that are part of the Internet milieu have interests that may be adverse to each other, and these parties each vie to favor their particular interests. We call this process “the tussle.” Our position is that accommodating this tussle is crucial to the evolution of the network’s technical architecture. We discuss some examples of tussle, and offer some technical design principles that take it into account. Index Terms—Competition, design principles, economics, network architecture, trust, tussle.
Sharing the Cost of Multicast Transmissions
2001
"... We investigate cost-sharing algorithms for multicast transmission. Economic considerations point to two distinct mechanisms, marginal cost and Shapley value, as the two solutions most appropriate in this context. We prove that the former has a natural algorithm that uses only two messages per link o ..."
Abstract
-
Cited by 284 (16 self)
- Add to MetaCart
We investigate cost-sharing algorithms for multicast transmission. Economic considerations point to two distinct mechanisms, marginal cost and Shapley value, as the two solutions most appropriate in this context. We prove that the former has a natural algorithm that uses only two messages per link of the multicast tree, while we give evidence that the latter requires a quadratic total number of messages. We also show that the welfare value achieved by an optimal multicast tree is NP-hard to approximate within any constant factor, even for bounded-degree networks. The lower-bound proof for the Shapley value uses a novel algebraic technique for bounding from below the number of messages exchanged in a distributed computation; this technique may prove useful in other contexts as well.
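For a fixed receiver set on a fixed multicast tree, the Shapley value discussed above splits each link's cost equally among the receivers downstream of it. The sketch below computes those shares centrally on a hypothetical tree; the paper's concern is the distributed message complexity of the mechanisms, which is not modeled here.

```python
# A minimal sketch (hypothetical tree and costs) of Shapley-value cost shares on
# a multicast tree: each link's cost is split equally among the receivers below it.
tree = {"root": ["u", "v"], "u": ["r1", "r2"], "v": ["r3"]}         # parent -> children
link_cost = {"u": 4.0, "v": 2.0, "r1": 1.0, "r2": 1.0, "r3": 3.0}   # cost of the link into each node
receivers = {"r1", "r2", "r3"}

def downstream_receivers(node):
    found = {node} & receivers
    for child in tree.get(node, []):
        found |= downstream_receivers(child)
    return found

shares = {r: 0.0 for r in receivers}
for node, cost in link_cost.items():
    users = downstream_receivers(node)
    for r in users:                      # equal split of this link among its users
        shares[r] += cost / len(users)

print(shares)                            # e.g. r1 pays 4/2 + 1 = 3.0
print(sum(shares.values()), "== total tree cost", sum(link_cost.values()))
```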
Distributed Algorithmic Mechanism Design: Recent Results and Future Directions
2002
"... Distributed Algorithmic Mechanism Design (DAMD) combines theoretical computer science’s traditional focus on computational tractability with its more recent interest in incentive compatibility and distributed computing. The Internet’s decentralized nature, in which distributed computation and autono ..."
Abstract
-
Cited by 283 (24 self)
- Add to MetaCart
(Show Context)
Distributed Algorithmic Mechanism Design (DAMD) combines theoretical computer science’s traditional focus on computational tractability with its more recent interest in incentive compatibility and distributed computing. The Internet’s decentralized nature, in which distributed computation and autonomous agents prevail, makes DAMD a very natural approach for many Internet problems. This paper first outlines the basics of DAMD and then reviews previous DAMD results on multicast cost sharing and interdomain routing. The remainder of the paper describes several promising research directions and poses some specific open problems.
The price of stability for network design with fair cost allocation
- In Proceedings of the 45th Annual Symposium on Foundations of Computer Science (FOCS), 2004
"... Abstract. Network design is a fundamental problem for which it is important to understand the effects of strategic behavior. Given a collection of self-interested agents who want to form a network connecting certain endpoints, the set of stable solutions — the Nash equilibria — may look quite differ ..."
Abstract
-
Cited by 281 (30 self)
- Add to MetaCart
(Show Context)
Network design is a fundamental problem for which it is important to understand the effects of strategic behavior. Given a collection of self-interested agents who want to form a network connecting certain endpoints, the set of stable solutions — the Nash equilibria — may look quite different from the centrally enforced optimum. We study the quality of the best Nash equilibrium, and refer to the ratio of its cost to the optimum network cost as the price of stability. The best Nash equilibrium solution has a natural meaning of stability in this context — it is the optimal solution that can be proposed from which no user will defect. We consider the price of stability for network design with respect to one of the most widely-studied protocols for network cost allocation, in which the cost of each edge is divided equally between users whose connections make use of it; this fair-division scheme can be derived from the Shapley value, and has a number of basic economic motivations. We show that the price of stability for network design with respect to this fair cost allocation is O(log k), where k is the number of users, and that a good Nash equilibrium can be achieved via best-response dynamics in which users iteratively defect from a starting solution. This establishes that the fair cost allocation protocol is in fact a useful mechanism for inducing strategic behavior to form near-optimal equilibria. We discuss connections to the class of potential games defined by Monderer and Shapley, and extend our results to cases in which users are seeking to balance network design costs with latencies in the constructed network, with stronger results when the network has only delays and no construction costs. We also present bounds on the convergence time of best-response dynamics, and discuss extensions to a weighted game.
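The sketch below is an illustration rather than the paper's construction: it runs best-response dynamics in such a fair cost-sharing game on the standard k-player instance in which player i chooses between a private edge of cost 1/i and a shared edge of cost 1 + eps. Edge costs are split equally among users, and the unique equilibrium the dynamics reaches costs H_k times the optimum.

```python
# A minimal sketch (hypothetical toy size) of best-response dynamics in a fair
# cost-sharing game: each player picks one of two routes, and each edge's cost
# is split equally among the players using it.
EPS = 0.1
K = 5  # number of players

def shared_users(choices):
    return sum(1 for c in choices if c == "shared")

def player_cost(i, choice, choices):
    """Cost to player i if it picks `choice` while the others keep `choices`."""
    if choice == "private":
        return 1.0 / (i + 1)          # private edge of cost 1/(i+1), used alone
    users = shared_users(choices[:i] + [choice] + choices[i + 1:])
    return (1.0 + EPS) / users        # equal split of the shared edge

def best_response_dynamics(choices):
    """Iterate unilateral improving moves until no player wants to deviate."""
    improved = True
    while improved:
        improved = False
        for i in range(K):
            current = player_cost(i, choices[i], choices)
            for alt in ("shared", "private"):
                if player_cost(i, alt, choices) < current - 1e-12:
                    choices[i] = alt
                    improved = True
                    break
    return choices

start = ["shared"] * K                # the socially optimal profile, cost 1 + EPS
eq = best_response_dynamics(list(start))
print("equilibrium:", eq)             # everyone defects to a private edge
print("equilibrium cost:", sum(player_cost(i, eq[i], eq) for i in range(K)))  # H_5
print("optimal cost    :", 1.0 + EPS)
```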
Bidding and Allocation in Combinatorial Auctions
- In ACM Conference on Electronic Commerce, 2000
"... When an auction of multiple items is performed, it is often desirable to allow bids on combinations of items, as opposed to only on single items. Such an auction is often called "combinatorial", and the exponential number of possible combinations results in computational intractability o ..."
Abstract
-
Cited by 275 (11 self)
- Add to MetaCart
(Show Context)
When an auction of multiple items is performed, it is often desirable to allow bids on combinations of items, as opposed to only on single items. Such an auction is often called "combinatorial", and the exponential number of possible combinations results in computational intractability of many aspects regarding such an auction. This paper considers two of these aspects: the bidding language and the allocation algorithm. First we consider which kinds of bids on combinations are allowed and how, i.e. in what language, they are specified. The basic tradeoff is the expressibility of the language versus its simplicity. We consider and formalize several bidding languages and compare their strengths. We prove exponential separations between the expressive power of different languages, and show that one language, "OR-bids with phantom items", can polynomially simulate the others. We then consider the problem of determining the best allocation -- a problem known to be computationally intractable. We suggest an approach based on Linear Programming (LP) and motivate it. We prove that the LP approach finds an optimal allocation if and only if prices can be attached to single items in the auction. We pinpoint several classes of auctions where this is the case, and suggest greedy and branch-and-bound heuristics based on LP for other cases.
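As a concrete illustration of the LP approach (assuming scipy is available; the instance is made up), the sketch below solves the LP relaxation of winner determination: accept fractions of bids to maximize total value while selling each item at most once. The paper's result concerns when this relaxation has an integral optimum.

```python
# A minimal sketch of the LP relaxation of winner determination on a tiny,
# hypothetical instance.  linprog minimizes, so bid values are negated.
from scipy.optimize import linprog

items = ["a", "b", "c"]
bids = [({"a", "b"}, 5.0), ({"b", "c"}, 4.0), ({"a"}, 2.0), ({"c"}, 3.0)]

c = [-value for _, value in bids]                 # maximize total accepted value
A_ub = [[1.0 if item in bundle else 0.0 for bundle, _ in bids] for item in items]
b_ub = [1.0] * len(items)                         # each item sold at most once

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(bids), method="highs")
for (bundle, value), x in zip(bids, res.x):
    print(f"bundle {sorted(bundle)} value {value}: fraction accepted {x:.2f}")
print("LP optimum (upper bound on the best allocation):", -res.fun)
```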
A BGP-based Mechanism for Lowest-Cost Routing
2002
"... The routing of traffic between... this paper, we address the problem of interdomain routing from a mechanism-design point of view. The application of mechanism-design principles to the study of routing is the subject of earlier work by Nisan and Ronen [15] and Hershberger and Suri [11]. In this pape ..."
Abstract
-
Cited by 268 (16 self)
- Add to MetaCart
(Show Context)
The routing of traffic between Internet domains, a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP). In this paper, we address the problem of interdomain routing from a mechanism-design point of view. The application of mechanism-design principles to the study of routing is the subject of earlier work by Nisan and Ronen [15] and Hershberger and Suri [11]. We formulate and solve a version of the routing-mechanism design problem that is different from the previously studied version in three ways that make it more accurately reflective of real-world interdomain routing: (1) we treat the nodes as strategic agents, rather than the links; (2) our mechanism computes lowest-cost routes for all source-destination pairs and payments for transit nodes on all of the routes (rather than computing routes and payments for only one source-destination pair at a time, as is done in [15,11]); (3) we show how to compute our mechanism with a distributed algorithm that is a straightforward extension to BGP and causes only modest increases in routing-table size and convergence time (in contrast with the centralized algorithms used in [15,11]). This approach of using an existing protocol as a substrate for distributed computation may prove useful in future development of Internet algorithms generally, not only for routing or pricing problems. Our design and analysis of a strategyproof, BGP-based routing mechanism provides a new, promising direction in distributed algorithmic mechanism design, which has heretofore been focused mainly on multicast cost sharing.
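The sketch below is a centralized, brute-force illustration (hypothetical topology and costs) of the kind of per-node payment such a mechanism produces: each transit node on the lowest-cost route is paid its declared cost plus the amount by which its absence would raise the route cost. The paper's contribution is computing this with a distributed, BGP-compatible algorithm, which is not reproduced here.

```python
# A centralized sketch of VCG-style payments for lowest-cost routing where the
# route cost is the sum of transit-node costs.  Topology and costs are made up.
graph = {"s": ["a", "b"], "a": ["d"], "b": ["c"], "c": ["d"], "d": []}
node_cost = {"a": 3, "b": 1, "c": 1}              # costs of potential transit nodes

def all_paths(src, dst, banned=frozenset()):
    """Enumerate simple src -> dst paths avoiding the `banned` nodes."""
    def dfs(node, path):
        if node == dst:
            yield path
            return
        for nxt in graph.get(node, []):
            if nxt not in path and nxt not in banned:
                yield from dfs(nxt, path + [nxt])
    yield from dfs(src, [src])

def route_cost(path):
    return sum(node_cost.get(v, 0) for v in path[1:-1])   # transit nodes only

def lowest_cost(src, dst, banned=frozenset()):
    paths = list(all_paths(src, dst, banned))
    return min(paths, key=route_cost) if paths else None

best = lowest_cost("s", "d")
base = route_cost(best)
print("lowest-cost route:", best, "cost:", base)
for k in best[1:-1]:
    alt = lowest_cost("s", "d", banned=frozenset({k}))
    payment = (node_cost[k] + route_cost(alt) - base) if alt else float("inf")
    print(f"payment to transit node {k}: {payment}")
```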