Results 1–10 of 249
Steps Towards a Science of Service Systems
IEEE Computer, 2007
Cited by 111 (5 self)
Abstract:
The service sector – which includes government, education, medical and healthcare, banking and …
How Much Can Taxes Help Selfish Routing?
EC'03, 2003
Cited by 77 (5 self)
Abstract:
… in networks. We consider a model of selfish routing in which the latency experienced by network traffic on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route traffic on minimum-latency paths. The quality of a routing of traffic is historically measured by the sum of all travel times, also called the total latency. It is well known …
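The total-latency objective described in this abstract can be illustrated on Pigou's classic two-link example (a standard textbook instance, not taken from this paper): selfish routing sends all traffic onto the variable-latency link, while the optimum splits it. A minimal Python sketch:

```python
# Pigou's example: one unit of traffic over two parallel edges.
# Edge 1 has latency l1(x) = x; edge 2 has constant latency l2(x) = 1.
# Total latency of a split (x on edge 1, 1 - x on edge 2):
def total_latency(x):
    return x * x + (1 - x) * 1.0  # x * l1(x) + (1 - x) * l2(1 - x)

# Selfish (Nash) routing: every user takes edge 1, since l1(x) <= 1 always.
nash = total_latency(1.0)        # 1.0

# Optimal routing minimizes total latency; d/dx of (x^2 + 1 - x) is 0 at x = 1/2.
opt = total_latency(0.5)         # 0.75

price_of_anarchy = nash / opt    # 4/3, the worst case for linear latencies
```

The 4/3 ratio is the known worst case for non-atomic selfish routing with linear latency functions; taxes, as studied in the paper above, are one way to push selfish behavior toward the optimum.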
NIRA: A New Inter-Domain Routing Architecture
IEEE/ACM Transactions on Networking, 2007
Cited by 76 (2 self)
Abstract:
In today’s Internet, users can choose their local Internet service providers (ISPs), but once their packets have entered the network, they have little control over the overall routes their packets take. Giving a user the ability to choose between provider-level routes has the potential of fostering ISP competition to offer enhanced service and improving end-to-end performance and reliability. This paper presents the design and evaluation of a new Internet routing architecture (NIRA) that gives a user the ability to choose the sequence of providers his packets take. NIRA addresses a broad range of issues, including practical provider compensation, scalable route discovery, efficient route representation, fast route failover, and security. NIRA supports user choice without running a global link-state routing protocol. It breaks an end-to-end route into a sender part and a receiver part and uses address assignment to represent each part. A user can specify a route with only a source and a destination address, and switch routes by switching addresses. We evaluate NIRA using a combination of network measurement, simulation, and analysis. Our evaluation shows that NIRA supports user choice with low overhead.
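The core idea in the abstract above — an end-to-end route is a sender part plus a receiver part, each encoded by an address — can be sketched schematically. The addresses, provider names, and chain encoding below are all hypothetical illustrations; NIRA's actual address format and protocols are far richer:

```python
# Illustrative only: each address encodes a provider chain, so a
# (source, destination) address pair determines a provider-level route,
# and switching the source address switches the route.

# Hypothetical address-to-provider-chain assignments for one user:
sender_addrs = {
    "1.a.user": ["ISP-A"],          # sender part: up-path via provider A
    "2.b.user": ["ISP-B"],          # sender part: up-path via provider B
}
receiver_addrs = {
    "1.c.server": ["Core", "ISP-C"],  # receiver part: down-path to the server
}

def route(src, dst):
    # An end-to-end route is the sender part joined with the receiver part.
    return sender_addrs[src] + receiver_addrs[dst]

print(route("1.a.user", "1.c.server"))  # route via ISP-A
print(route("2.b.user", "1.c.server"))  # failover: switch source address only
```

The point of the sketch is the last line: fast route failover needs no routing-protocol convergence, only a different source address.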
Convergence to Approximate Nash Equilibria in Congestion Games
In SODA ’07, 2007
Optimal mechanism design and money burning
STOC ’08, 2008
Cited by 58 (15 self)
Abstract:
Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality—routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such “payments” can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus—the total value of the chosen outcome minus the payments required. Our primary contributions are the following.
• We define a general template for prior-free optimal mechanism design that explicitly connects Bayesian optimal mechanism design, the dominant paradigm in economics, with worst-case analysis. In particular, we establish a general and principled way to identify appropriate performance benchmarks in prior-free mechanism design.
• For general single-parameter agent settings, we char …
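The residual-surplus objective from the abstract can be made concrete with a deliberately simple mechanism (a posted "burned price", chosen here for illustration; it is not the paper's optimal mechanism): every served agent destroys the payment rather than transferring it, so payments subtract directly from welfare.

```python
# Illustrative money-burning mechanism: a posted burned price p.
# Each agent with value v >= p accepts service and burns p (e.g. by waiting,
# or by solving a computational puzzle); the payment benefits no one.
def residual_surplus(values, p):
    served = [v for v in values if v >= p]
    social_surplus = sum(served)   # total value of the chosen outcome
    burned = p * len(served)       # payments, destroyed rather than collected
    return social_surplus - burned

values = [5.0, 3.0, 1.0]
print(residual_surplus(values, 2.0))  # (5 + 3) - 2*2 = 4.0
print(residual_surplus(values, 0.0))  # serve everyone free: 9.0
```

Note the tension the paper studies: a higher burned price screens out low-value agents (useful when the resource is scarce), but every unit burned is lost surplus.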
A New Model for Selfish Routing
Proceedings of the 21st International Symposium on Theoretical Aspects of Computer Science (STACS ’04), LNCS 2996, 2004
Cited by 54 (9 self)
Abstract:
In this work, we introduce and study a new model for selfish routing over non-cooperative networks that combines, in an interesting way, features from the two best-studied such models, namely the KP model and the Wardrop model. We consider a set of n users, each using a mixed strategy to ship its unsplittable traffic over a network consisting of m parallel links. In a Nash equilibrium, no user can increase its Individual Cost by unilaterally deviating from its strategy. To evaluate the performance of such Nash equilibria, we introduce Quadratic Social Cost as a certain sum of Individual Costs – namely, the sum of the expectations of the squares of the incurred link latencies. This definition is unlike the KP model, where Maximum Social Cost has been defined as the maximum of Individual Costs. We analyse the impact of our modeling assumptions on the computation of Quadratic Social Cost, on the structure of worst-case Nash equilibria, and on bounds on the Quadratic Coordination Ratio.
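The Quadratic Social Cost defined in this abstract — the sum over links of the expected squared latency, with the expectation over the users' mixed strategies — can be computed by brute force for small instances. The sketch below assumes identical links where a link's latency is simply the total weight it receives (the paper's model also handles link capacities):

```python
from itertools import product

# QSC = sum over links of E[(link latency)^2], expectation taken over the
# users' independent mixed strategies; each user ships unsplittable traffic.
def quadratic_social_cost(weights, strategies, m):
    # strategies[i][j] = probability that user i ships its traffic on link j
    n = len(weights)
    qsc = 0.0
    for profile in product(range(m), repeat=n):   # enumerate pure profiles
        prob = 1.0
        for i, link in enumerate(profile):
            prob *= strategies[i][link]
        loads = [0.0] * m
        for i, link in enumerate(profile):
            loads[link] += weights[i]
        qsc += prob * sum(load ** 2 for load in loads)
    return qsc

# Two unit-weight users on two links, both fully mixed (prob 1/2 per link):
# with prob 1/2 they collide (cost 2^2 = 4), with prob 1/2 they split (1 + 1).
print(quadratic_social_cost([1.0, 1.0], [[0.5, 0.5], [0.5, 0.5]], 2))  # 3.0
```

The enumeration is exponential in n, which is exactly why the paper's question of when QSC can be computed efficiently is non-trivial.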
Exact Price of Anarchy for Polynomial Congestion Games
2006
Cited by 46 (8 self)
Abstract:
We show exact values for the price of anarchy of weighted and unweighted congestion games with polynomial latency functions. The given values also hold for weighted and unweighted network congestion games.
Congestion Games with Malicious Players
2008
Cited by 44 (0 self)
Abstract:
We study the equilibria of nonatomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counterintuitive phenomenon which we demonstrate is the “windfall of malice”: paradoxically, when a myopically malicious player gains control of a fraction of the flow, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.
Revisiting Log-Linear Learning: Asynchrony, Completeness and Payoff-Based Implementation
2008
Cited by 42 (14 self)
Abstract:
Log-linear learning is a learning algorithm with equilibrium selection properties. Log-linear learning provides guarantees on the percentage of time that the joint action profile will be at a potential maximizer in potential games. The traditional analysis of log-linear learning has centered around explicitly computing the stationary distribution. This analysis relied on a highly structured setting: i) players’ utility functions constitute a potential game, ii) players update their strategies one at a time, which we refer to as asynchrony, iii) at any stage, a player can select any action in the action set, which we refer to as completeness, and iv) each player is endowed with the ability to assess the utility he would have received for any alternative action provided that the actions of all other players remain fixed. Since the appeal of log-linear learning is not solely the explicit form of the stationary distribution, we seek to address to what degree one can relax the structural assumptions while maintaining that only potential function maximizers are the stochastically stable action profiles. In this paper, we introduce slight variants of log-linear learning to include both synchronous updates and incomplete action sets. In both settings, we prove that only potential function maximizers are stochastically stable. Furthermore, we introduce a payoff-based version of log-linear learning, in which players are only aware of the utility they received and the action that they played. Note that log-linear learning in its original form is not a payoff-based learning algorithm. In payoff-based log-linear learning, we also prove that only potential maximizers are stochastically stable. The key enabler for these results is to change the focus of the analysis away from deriving the explicit form of the stationary distribution of the learning process towards characterizing the stochastically stable states. The resulting analysis uses the theory of resistance trees for regular perturbed Markov decision processes, thereby allowing a relaxation of the aforementioned structural assumptions.
Distributed selfish load balancing
2006
Cited by 41 (2 self)
Abstract:
Suppose that a set of m tasks are to be shared as equally as possible amongst a set of n resources. A game-theoretic mechanism to find a suitable allocation is to associate each task with a “selfish agent”, and require each agent to select a resource, with the cost of a resource being the number of agents to select it. Agents would then be expected to migrate from overloaded to underloaded resources, until the allocation becomes balanced. Recent work has studied the question of how this can take place within a distributed setting in which agents migrate selfishly without any centralized control. In this paper we discuss a natural protocol for the agents which combines the following desirable features: it can be implemented in a strongly distributed setting, uses no central control, and has good convergence properties. For m ≫ n, the system becomes approximately balanced (an ε-Nash equilibrium) in expected time O(log log m). We show using a martingale technique that the process converges to a perfectly balanced allocation in expected time O(log log m + n^4). We also give a lower bound of Ω(max{log log m, n}) for the convergence time.
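A protocol of this kind is easy to simulate. In the sketch below, each round every agent samples a random resource and migrates with probability 1 - y/x when the sampled load y is below its own resource's load x; this particular migration probability is an assumption for illustration, not necessarily the paper's exact rule, but it has the property that the expected load of every resource moves to the mean in a single round:

```python
import random

# Selfish migration: no central control, each agent acts on local information.
def balance(m_tasks, n_resources, rounds, rng):
    assignment = [rng.randrange(n_resources) for _ in range(m_tasks)]
    for _ in range(rounds):
        loads = [0] * n_resources
        for r in assignment:
            loads[r] += 1
        new_assignment = list(assignment)
        for i, r in enumerate(assignment):
            s = rng.randrange(n_resources)   # sample a random resource
            x, y = loads[r], loads[s]
            if y < x and rng.random() < 1 - y / x:
                new_assignment[i] = s        # migrate to the lighter resource
        assignment = new_assignment          # all migrations happen in parallel
    return assignment

rng = random.Random(0)
final = balance(m_tasks=1000, n_resources=10, rounds=30, rng=rng)
loads = [final.count(r) for r in range(10)]
print(max(loads) - min(loads))  # small imbalance after a few rounds
```

Migrating with probability 1 - y/x, rather than always, damps the overshoot that parallel, uncoordinated migration would otherwise cause; the paper's analysis quantifies how fast such a process reaches an approximately and then perfectly balanced allocation.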