Results 1 - 10 of 255
Steps Towards a Science of Service Systems
IEEE Computer, 2007
"... The service sector – which includes government, education, medical and healthcare, banking and ..."
Abstract
-
Cited by 128 (5 self)
The service sector – which includes government, education, medical and healthcare, banking and ...
NIRA: A New Inter-Domain Routing Architecture
IEEE/ACM Transactions on Networking, 2007
"... In today’s Internet, users can choose their local Internet service providers (ISPs), but once their packets have entered the network, they have little control over the overall routes their packets take. Giving a user the ability to choose between provider-level routes has the potential of fostering ..."
Abstract
-
Cited by 77 (2 self)
In today’s Internet, users can choose their local Internet service providers (ISPs), but once their packets have entered the network, they have little control over the overall routes their packets take. Giving a user the ability to choose between provider-level routes has the potential of fostering ISP competition to offer enhanced service and improving end-to-end performance and reliability. This paper presents the design and evaluation of a new Internet routing architecture (NIRA) that gives a user the ability to choose the sequence of providers his packets take. NIRA addresses a broad range of issues, including practical provider compensation, scalable route discovery, efficient route representation, fast route fail-over, and security. NIRA supports user choice without running a global link-state routing protocol. It breaks an end-to-end route into a sender part and a receiver part and uses address assignment to represent each part. A user can specify a route with only a source and a destination address, and switch routes by switching addresses. We evaluate NIRA using a combination of network measurement, simulation, and analysis. Our evaluation shows that NIRA supports user choice with low overhead.
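A toy sketch of the idea in this abstract: a (source address, destination address) pair identifies a provider-level route, so a sender switches routes simply by switching addresses. The sketch is purely illustrative; the names, addresses, and one-address-per-provider-chain encoding are assumptions for exposition, not NIRA's actual address format or API.

    # Toy illustration (hypothetical names/addresses, not NIRA's real format):
    # each source address the user holds corresponds to one provider chain,
    # so picking a source address picks the sender's half of the route.
    user_addresses = {
        "via_ISP_A": "a1b2::u1",  # address delegated through provider chain A
        "via_ISP_B": "c3d4::u1",  # address delegated through provider chain B
    }
    dest_address = "e5f6::host9"  # the receiver's address encodes its half of the route

    def choose_route(provider_chain: str) -> tuple[str, str]:
        """Return the (source, destination) address pair that identifies a route."""
        # Switching routes is just switching which source address we stamp on packets.
        return (user_addresses[provider_chain], dest_address)

    primary = choose_route("via_ISP_A")
    backup = choose_route("via_ISP_B")  # fail-over: re-send with the other address
    print(primary, backup)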
How Much Can Taxes Help Selfish Routing?
EC'03, 2003
"... ... in networks. We consider a model of selfish routing in which the latency experienced by network tra#c on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route tra#c on minimum-latency paths. The quality of a routing of tra#c is historically ..."
Abstract
-
Cited by 76 (6 self)
... in networks. We consider a model of selfish routing in which the latency experienced by network traffic on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route traffic on minimum-latency paths. The quality of a routing of traffic is historically measured by the sum of all travel times, also called the total latency. It is well known ...
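A note on the objective described above, in symbols (standard selfish-routing notation, not taken from the paper): with f_e the traffic on edge e and \ell_e its latency function, the total latency of a flow f is

    C(f) = \sum_{e \in E} f_e \, \ell_e(f_e),

and an edge tax \tau_e changes the cost a user perceives on e from \ell_e(f_e) to \ell_e(f_e) + \tau_e, which is how taxes can steer selfish traffic toward flows with smaller C(f).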
Convergence to Approximate Nash Equilibria in Congestion Games
In SODA ’07, 2007
"... ..."
(Show Context)
Optimal mechanism design and money burning
STOC ’08, 2008
"... Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising ..."
Abstract
-
Cited by 57 (15 self)
Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality—routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such “payments” can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus—the total value of the chosen outcome minus the payments required. Our primary contributions are the following.
• We define a general template for prior-free optimal mechanism design that explicitly connects Bayesian optimal mechanism design, the dominant paradigm in economics, with worst-case analysis. In particular, we establish a general and principled way to identify appropriate performance benchmarks in prior-free mechanism design.
• For general single-parameter agent settings, we char-
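In symbols (standard notation, not the paper's own), the residual surplus that these money-burning mechanisms maximize can be written as

    \text{residual surplus} = \sum_{i} v_i(x) - \sum_{i} p_i,

where x is the chosen outcome, v_i(x) is agent i's value for it, and p_i is the burnt payment (service degradation) imposed on agent i; unlike monetary transfers, these payments are a pure loss rather than a redistribution, which is why they are subtracted from the surplus.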
A New Model for Selfish Routing
Proceedings of the 21st International Symposium on Theoretical Aspects of Computer Science (STACS’04), LNCS 2996, 2004
"... Abstract. In this work, we introduce and study a new model for selfish routing over non-cooperative networks that combines features from the two such best studied models, namely the KP model and the Wardrop model in an interesting way. We consider a set of n users, each using a mixed strategy to shi ..."
Abstract
-
Cited by 53 (8 self)
In this work, we introduce and study a new model for selfish routing over non-cooperative networks that combines, in an interesting way, features from the two best-studied such models, namely the KP model and the Wardrop model. We consider a set of n users, each using a mixed strategy to ship its unsplittable traffic over a network consisting of m parallel links. In a Nash equilibrium, no user can decrease its Individual Cost by unilaterally deviating from its strategy. To evaluate the performance of such Nash equilibria, we introduce Quadratic Social Cost as a certain sum of Individual Costs – namely, the sum of the expectations of the squares of the incurred link latencies. This definition is unlike the KP model, where Maximum Social Cost has been defined as the maximum of Individual Costs. We analyse the impact of our modeling assumptions on the computation of Quadratic Social Cost, on the structure of worst-case Nash equilibria, and on bounds on the Quadratic Coordination Ratio.
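One natural way to write the Quadratic Social Cost described above (illustrative notation, not necessarily the paper's own):

    \mathrm{QSC} = \sum_{j=1}^{m} \mathbb{E}\left[ \Lambda_j^{2} \right],

where \Lambda_j is the latency incurred on parallel link j, a random variable under the users' mixed strategies; the KP model's Maximum Social Cost instead takes the maximum of the Individual Costs rather than this sum of expected squared link latencies.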
Exact Price of Anarchy for Polynomial Congestion Games
2006
"... We show exact values for the price of anarchy of weighted and unweighted congestion games with polynomial latency functions. The given values also hold for weighted and unweighted network congestion games. ..."
Abstract
-
Cited by 49 (9 self)
We show exact values for the price of anarchy of weighted and unweighted congestion games with polynomial latency functions. The given values also hold for weighted and unweighted network congestion games.
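For reference, the quantity being pinned down exactly here is the price of anarchy (standard definition, not specific to this paper):

    \mathrm{PoA} = \sup_{G} \; \max_{s \in \mathrm{NE}(G)} \frac{C(s)}{\min_{s^*} C(s^*)},

i.e., the worst-case ratio, over instances G and Nash equilibria s, between the social cost of the equilibrium and that of an optimal outcome.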
Congestion Games with Malicious Players
2008
"... We study the equilibria of non-atomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equ ..."
Abstract
-
Cited by 45 (0 self)
We study the equilibria of non-atomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counterintuitive phenomenon which we demonstrate is the “windfall of malice”: paradoxically, when a myopically malicious player gains control of a fraction of the flow, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.
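One way to formalize the two player types (illustrative notation, not the paper's own): with r_e the rational flow and m_e the malicious flow on edge e, each rational user routes on paths P minimizing its own delay \sum_{e \in P} \ell_e(r_e + m_e), while the malicious player chooses its flow to maximize the rational players' average delay, roughly

    \max_{m} \; \frac{1}{R} \sum_{e} r_e \, \ell_e(r_e + m_e),

where R is the total amount of rational flow.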
Revisiting Log-Linear Learning: Asynchrony, Completeness and Payoff-Based Implementation
2008
"... Log-linear learning is a learning algorithm with equilibrium selection properties. Log-linear learning provides guarantees on the percentage of time that the joint action profile will be at a potential maximizer in potential games. The traditional analysis of log-linear learning has centered around ..."
Abstract
-
Cited by 42 (11 self)
Log-linear learning is a learning algorithm with equilibrium selection properties. Log-linear learning provides guarantees on the percentage of time that the joint action profile will be at a potential maximizer in potential games. The traditional analysis of log-linear learning has centered around explicitly computing the stationary distribution. This analysis relied on a highly structured setting: i) players’ utility functions constitute a potential game, ii) players update their strategies one at a time, which we refer to as asynchrony, iii) at any stage, a player can select any action in the action set, which we refer to as completeness, and iv) each player is endowed with the ability to assess the utility he would have received for any alternative action provided that the actions of all other players remain fixed. Since the appeal of log-linear learning is not solely the explicit form of the stationary distribution, we seek to address to what degree one can relax the structural assumptions while maintaining that only potential function maximizers are the stochastically stable action profiles. In this paper, we introduce slight variants of log-linear learning to include both synchronous updates and incomplete action sets. In both settings, we prove that only potential function maximizers are stochastically stable. Furthermore, we introduce a payoff-based version of log-linear learning, in which players are only aware of the utility they received and the action that they played. Note that log-linear learning in its original form is not a payoff-based learning algorithm. In payoff-based log-linear learning, we also prove that only potential maximizers are stochastically stable. The key enabler for these results is to change the focus of the analysis away from deriving the explicit form of the stationary distribution of the learning process towards characterizing the stochastically stable states. The resulting analysis uses the theory of resistance trees for regular perturbed Markov decision processes, thereby allowing a relaxation of the aforementioned structural assumptions.
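For reference, the baseline log-linear (logit) update that these variants relax has the standard form (not specific to this paper): at each step one player i is selected and plays action a_i with probability

    \Pr(a_i) = \frac{ e^{\beta u_i(a_i, a_{-i})} }{ \sum_{a_i' \in \mathcal{A}_i} e^{\beta u_i(a_i', a_{-i})} },

where \beta \ge 0 controls the noise; as \beta grows the rule approaches best response, and in potential games the stochastically stable profiles concentrate on potential-function maximizers.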
The Effectiveness of Stackelberg Strategies and Tolls for Network Congestion Games
In Proc. Symposium on Discrete Algorithms (SODA), 2007
"... Abstract It is well known that in a network with arbitrary(convex) latency functions that are a function of edge ..."
Abstract
-
Cited by 41 (1 self)
It is well known that in a network with arbitrary (convex) latency functions that are a function of edge ...