Convergence to Approximate Nash Equilibria in Congestion Games
In SODA ’07, 2007
Joint Strategy Fictitious Play with Inertia for Potential Games
2005
Abstract

Cited by 59 (25 self)
We consider finite multiplayer repeated games involving a large number of players with large strategy spaces and enmeshed utility structures. In these “large-scale” games, players are inherently faced with limitations in both their observational and computational capabilities. Accordingly, players in large-scale games need to make their decisions using algorithms that accommodate limitations in information gathering and processing. A motivating example is a congestion game in a complex transportation system, in which a large number of vehicles make daily routing decisions to optimize their own objectives in response to their observations. In this setting, observing and responding to the individual actions of all vehicles on a daily basis would be a formidable task for any individual driver. This disqualifies some of the well-known decision-making models such as “Fictitious Play” (FP) …
Routing without regret: On convergence to Nash equilibria of regret-minimizing algorithms in routing games
In PODC, 2006
Abstract

Cited by 59 (6 self)
There has been substantial work developing simple, efficient no-regret algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a no-regret strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure. In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (formally, on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players’ regret bounds and the maximum slope of any latency function. We also show that price-of-anarchy results may be applied to these approximate equilibria, and also consider the finite-size (non-infinitesimal) load-balancing model of Azar [2].
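The kind of no-regret strategy the abstract refers to can be illustrated with the standard multiplicative-weights (Hedge) algorithm over a fixed set of paths. This is a generic sketch, not the paper's own construction; the learning rate `eta` and the alternating latency sequence are hypothetical choices for illustration:

```python
def hedge(costs_per_round, eta=0.1):
    """Multiplicative-weights (Hedge) over a fixed set of paths.

    costs_per_round: one cost vector per round, one entry per path,
    with costs in [0, 1]. Returns the algorithm's total expected cost
    and its regret relative to the best fixed path in hindsight.
    """
    n = len(costs_per_round[0])
    weights = [1.0] * n
    total_cost = 0.0
    cumulative = [0.0] * n  # per-path cumulative cost
    for costs in costs_per_round:
        z = sum(weights)
        probs = [w / z for w in weights]
        total_cost += sum(p * c for p, c in zip(probs, costs))
        for i, c in enumerate(costs):
            weights[i] *= (1 - eta) ** c  # exponential down-weighting
            cumulative[i] += c
    regret = total_cost - min(cumulative)
    return total_cost, regret

# Two paths with adversarially alternating latencies over 1000 rounds.
rounds = [[1.0, 0.0] if t % 2 == 0 else [0.0, 1.0] for t in range(1000)]
cost, regret = hedge(rounds)
```

Even against this alternating adversary, the regret stays small relative to the 1000-round horizon, which is the "strong guarantee in adversarially changing environments" the abstract mentions.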
Congestion Games with Malicious Players
2008
Abstract

Cited by 44 (0 self)
We study the equilibria of nonatomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counterintuitive phenomenon which we demonstrate is the “windfall of malice”: paradoxically, when a myopically malicious player gains control of a fraction of the flow, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.
Distributed selfish load balancing
2006
Abstract

Cited by 41 (2 self)
Suppose that a set of m tasks are to be shared as equally as possible amongst a set of n resources. A game-theoretic mechanism to find a suitable allocation is to associate each task with a “selfish agent”, and require each agent to select a resource, with the cost of a resource being the number of agents to select it. Agents would then be expected to migrate from overloaded to underloaded resources, until the allocation becomes balanced. Recent work has studied the question of how this can take place within a distributed setting in which agents migrate selfishly without any centralized control. In this paper we discuss a natural protocol for the agents which combines the following desirable features: it can be implemented in a strongly distributed setting, uses no central control, and has good convergence properties. For m ≫ n, the system becomes approximately balanced (an ε-Nash equilibrium) in expected time O(log log m). We show using a martingale technique that the process converges to a perfectly balanced allocation in expected time O(log log m + n^4). We also give a lower bound of Ω(max{log log m, n}) for the convergence time.
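The style of randomized migration protocol described above can be sketched as follows. The migration probability 1 − Y/X (moving from a resource with load X to a sampled one with load Y &lt; X) is a commonly used rule for such protocols and is an assumption here, not necessarily the paper's exact protocol:

```python
import random

def balance(loads, rounds=200, rng=random.Random(0)):
    """Selfish rebalancing: each round, every task samples a resource
    uniformly at random and migrates from its current resource (load X)
    to the sampled one (load Y < X) with probability 1 - Y/X."""
    n = len(loads)
    for _ in range(rounds):
        moves = []
        for src in range(n):
            for _task in range(loads[src]):
                dst = rng.randrange(n)
                x, y = loads[src], loads[dst]
                if y < x and rng.random() < 1 - y / x:
                    moves.append((src, dst))
        for src, dst in moves:
            # re-check before applying, so concurrent moves cannot
            # overshoot and invert the imbalance
            if loads[src] > loads[dst]:
                loads[src] -= 1
                loads[dst] += 1
    return loads

loads = balance([40, 0, 0, 0])  # m = 40 tasks, n = 4 resources
```

Starting from a maximally unbalanced allocation, the loads settle close to m/n per resource within a few rounds; a perfectly balanced allocation is absorbing, since no sampled resource then has strictly smaller load.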
REPLEX — Dynamic traffic engineering based on Wardrop routing policies
In Proc. 2nd Conference on Future Networking Technologies (CoNext), 2006
Abstract

Cited by 38 (4 self)
One major challenge in communication networks is the problem of dynamically distributing load in the presence of bursty and hard-to-predict changes in traffic demands. Current traffic engineering operates on time scales of several hours, which is too slow to react to phenomena like flash crowds or BGP reroutes. One possible solution is to use load-sensitive routing. Yet interacting routing decisions at short time scales can lead to oscillations, which has prevented load-sensitive routing from being deployed since the early experiences in the Arpanet. However, recent theoretical results have devised a game-theoretic rerouting policy that provably avoids such oscillation and, in addition, can be shown to converge quickly. In this paper we present REPLEX, a distributed dynamic traffic engineering algorithm based on this policy. Exploiting the fact that most underlying routing protocols support multiple equal-cost routes to a destination, it dynamically changes the proportion of traffic that is routed along each path. These proportions are carefully adapted utilising information from periodic measurements and, optionally, information exchanged between the routers about the traffic condition along the path. We evaluate the algorithm via simulations employing traffic loads that mimic actual Web traffic, i.e., bursty TCP traffic whose characteristics are consistent with self-similarity. The simulations converge quickly and do not exhibit significant oscillations on both artificial and real topologies, as can be expected from the theoretical results.
Multiplicative Updates Outperform Generic No-Regret …
2009
Abstract

Cited by 28 (8 self)
We study the outcome of natural learning algorithms in atomic congestion games. Atomic congestion games have a wide variety of equilibria, often with vastly differing social costs. We show that in almost all such games, the well-known multiplicative-weights learning algorithm results in convergence to pure equilibria. Our results show that natural learning behavior can avoid bad outcomes predicted by the price of anarchy in atomic congestion games such as the load-balancing game introduced by Koutsoupias and Papadimitriou, which has super-constant price of anarchy and has correlated equilibria that are exponentially worse than any mixed Nash equilibrium. Our results identify a set of mixed Nash equilibria that we call weakly stable equilibria. Our notion of weak stability is defined game-theoretically, but we show that this property holds whenever a stability criterion from the theory of dynamical systems is satisfied. This allows us to show that in every congestion game, the distribution of play converges to the set of weakly stable equilibria. Pure Nash equilibria are weakly stable, and we show using techniques from algebraic geometry that the converse is true with probability 1 when congestion costs are selected at random independently on each edge (from any monotonically parametrized distribution). We further extend our results to show that players can use algorithms with different (sufficiently small) learning rates, i.e., they can trade off convergence speed and long-term average regret differently.
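As a toy illustration of multiplicative-weights learning in an atomic load-balancing game of the kind the abstract discusses (this is a generic sketch, not the paper's experiments; the number of players, links, rounds, and learning rate are all hypothetical choices):

```python
import random

def mwu_congestion(players=4, links=2, rounds=2000, eps=0.05,
                   rng=random.Random(1)):
    """Each player runs full-information multiplicative weights over
    identical links. The cost a player would incur on link j is the
    number of players on j counting herself, so an off-path link
    costs load[j] + 1."""
    weights = [[1.0] * links for _ in range(players)]
    for _ in range(rounds):
        # each player samples a link from her current mixed strategy
        choices = [rng.choices(range(links), weights=w)[0] for w in weights]
        load = [choices.count(j) for j in range(links)]
        for i, w in enumerate(weights):
            for j in range(links):
                cost = load[j] + (0 if choices[i] == j else 1)
                # normalize costs into [0, 1] before the update
                w[j] *= (1 - eps) ** (cost / (players + 1))
    # each player's most likely link under her final mixed strategy
    return [max(range(links), key=lambda j: w[j]) for w in weights]

final = mwu_congestion()
```

The fully mixed symmetric equilibrium of this game is not weakly stable, so under the dynamics described in the abstract the random fluctuations of play tend to push the players' strategies toward a pure (balanced) assignment.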
Enabling Content-aware Traffic Engineering
Abstract

Cited by 22 (8 self)
Today, a large fraction of Internet traffic is originated by Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs, as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs. The challenges that CDNs and ISPs face separately can be turned into an opportunity for collaboration. We argue that it is sufficient for CDNs and ISPs to coordinate only in server selection, not routing, in order to perform traffic engineering. To this end, we propose Content-aware Traffic Engineering (CaTE), which dynamically adapts server selection for content hosted by CDNs using ISP recommendations on small time scales. CaTE relies on the observation that by selecting an appropriate server among those available to deliver the content, the path of the traffic in the network can be influenced in a desired way. We present the design and implementation of a prototype to realize CaTE, and show how CDNs and ISPs can jointly take advantage of the already deployed distributed hosting infrastructures and path diversity, as well as the ISP's detailed view of the network status, without revealing sensitive operational information. By relying on tier-1 ISP traces, we show that CaTE allows CDNs to enhance the end-user experience while enabling an ISP to achieve several traffic engineering goals.
Fast-converging tatonnement algorithms for one-time and ongoing market problems
In Symposium on Theory of Computing (STOC), 2008
Abstract

Cited by 20 (2 self)
Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, this paper formalizes the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets. For both market settings, this paper defines and analyzes variants of a simple tatonnement algorithm that differs from previous algorithms that have been subject to asymptotic analysis in three significant respects: the price update for a good depends only on the price, demand, and supply for that good, and on no other information; the price update for each good occurs distributively and asynchronously; and the algorithms work (and the analyses hold) from an arbitrary starting point. Our algorithm introduces a new and natural update rule. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property. These are the first analyses for computationally and informationally distributed algorithms that demonstrate polynomial convergence. Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking:
1. A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price.
2. A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth.
3. The closeness of the market to a Fisher market (a market with buyers starting with money alone).
We give two types of protocols. The first type assumes global knowledge of only (an upper bound on) the first parameter. …
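A minimal sketch of a tatonnement update of the flavor described above, in which each good's price is adjusted using only that good's own demand/supply imbalance. The concrete rule p ← p·(1 + λ·(d − s)/s) and the toy Cobb-Douglas Fisher market are assumptions for illustration, not the paper's exact update:

```python
def tatonnement(alphas, wealth, supply=1.0, lam=0.2, rounds=200):
    """Distributed tatonnement for a toy Fisher market with Cobb-Douglas
    buyers: aggregate demand for good j at price p_j is
    d_j = alphas[j] * wealth / p_j, and each good's price is updated
    using only its own imbalance:
        p_j <- p_j * (1 + lam * (d_j - supply) / supply)
    """
    prices = [1.0] * len(alphas)
    for _ in range(rounds):
        demand = [a * wealth / p for a, p in zip(alphas, prices)]
        prices = [p * (1 + lam * (d - supply) / supply)
                  for p, d in zip(prices, demand)]
    return prices

# With unit supply, equilibrium prices are p_j = alphas[j] * wealth,
# i.e. [1.0, 3.0] for this market.
prices = tatonnement([0.25, 0.75], wealth=4.0)
```

For Cobb-Douglas demand this particular update happens to be linear in the price, p ← (1 − λ)p + λ·α·W, so each price converges geometrically to its equilibrium value independently of the others; the point of the abstract's analysis is that fast convergence holds far more generally, for weak-gross-substitutes markets.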
Distributed algorithms for multicommodity flow problems via approximate steepest descent framework
In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), 2007
Abstract

Cited by 17 (5 self)
We consider solutions for distributed multicommodity flow problems, which are solved by multiple agents operating in a cooperative but uncoordinated manner. We give the first distributed solutions that allow a 1 + ε approximation and whose convergence time is essentially linear in the maximal path length, independent of the number of commodities and the size of the graph. Our algorithms use a very natural approximate steepest descent framework, combined with a blocking-flow technique to speed up convergence in distributed and parallel environments. Previously known solutions that achieved comparable convergence time and approximation ratio required exponential computational and space overhead per agent.