Results 1–10 of 11
Sampling best response dynamics and deterministic equilibrium selection, mimeo, 2011
Cited by 10 (3 self)

Abstract:
We consider a model of evolution in games in which a revising agent observes the actions of a random number of randomly sampled opponents and then chooses a best response to the distribution of actions in the sample. We provide a condition on the distribution of sample sizes under which an iterated p-dominant equilibrium is almost globally asymptotically stable under these dynamics. We show, under an additional condition on the sample size distribution, that in supermodular games an almost globally asymptotically stable state must be an iterated p-dominant equilibrium. Since our selection results are for deterministic dynamics, any selected equilibrium is reached quickly; the long waiting times associated with equilibrium selection in stochastic stability models are absent.
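The revision process described here is easy to simulate. Below is a minimal sketch, assuming a two-strategy coordination game; the payoff matrix, sample size k, and population size are illustrative choices, not the paper's setup:

```python
import random

def revision_step(pop, k, payoff, rng):
    """One revision: a randomly chosen agent observes k randomly sampled
    opponents and plays a best response to the sample's empirical
    distribution (ties broken in favor of strategy 0)."""
    i = rng.randrange(len(pop))
    sample = [pop[rng.randrange(len(pop))] for _ in range(k)]
    p1 = sum(sample) / k  # sampled share playing strategy 1
    u0 = (1 - p1) * payoff[0][0] + p1 * payoff[0][1]
    u1 = (1 - p1) * payoff[1][0] + p1 * payoff[1][1]
    pop[i] = 1 if u1 > u0 else 0

rng = random.Random(1)
# Strategy 0 is 1/3-dominant here: it is a best response to any sample
# putting at least 1/3 weight on it (u0 >= u1 iff p1 <= 2/3).
payoff = [[2, 2], [0, 3]]
pop = [rng.randrange(2) for _ in range(200)]
for _ in range(20_000):
    revision_step(pop, k=3, payoff=payoff, rng=rng)
```

With small samples (k = 3), an agent adopts strategy 1 only when all three sampled opponents play it, so the population drifts to the all-0 state, illustrating how sampling favors the p-dominant action.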
Deterministic Equations for Stochastic Spatial Evolutionary Games
Cited by 3 (1 self)

Abstract:
Spatial evolutionary games model individuals playing a game with their neighbors in a spatial domain and describe the time evolution of the strategy profile of individuals over space. We derive integro-differential equations as deterministic approximations of strategy revision stochastic processes. These equations generalize existing ordinary differential equations such as the replicator dynamics and provide powerful tools for investigating the problem of equilibrium selection. Deterministic equations allow the identification of many interesting features of the evolution of a population's strategy profiles, including traveling front solutions and pattern formation.
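A toy version of such a deterministic approximation can be obtained by discretizing space and Euler-stepping a replicator equation whose payoffs are computed against a kernel-weighted neighborhood average; the lattice size, kernel, and coordination game below are illustrative assumptions, not the paper's model:

```python
import math

def spatial_replicator_step(X, payoff, kernel, dt):
    """One Euler step of a kernel-based (integro-differential) replicator
    dynamic on a 1-D lattice. X[i][s] = share of strategy s at site i."""
    n, k = len(X), len(X[0])
    r = len(kernel) // 2
    new = []
    for i in range(n):
        # Kernel-weighted neighborhood strategy profile (truncated at edges).
        xbar, w = [0.0] * k, 0.0
        for d, kw in enumerate(kernel):
            j = i + d - r
            if 0 <= j < n:
                w += kw
                for s in range(k):
                    xbar[s] += kw * X[j][s]
        xbar = [v / w for v in xbar]
        # Replicator step against the local mean field.
        u = [sum(payoff[s][t] * xbar[t] for t in range(k)) for s in range(k)]
        avg = sum(X[i][s] * u[s] for s in range(k))
        new.append([X[i][s] + dt * X[i][s] * (u[s] - avg) for s in range(k)])
    return new

# Coordination game with a smoothed initial front between the two strategies.
payoff = [[3, 0], [0, 1]]
kernel = [0.25, 0.5, 0.25]
X = [[1 / (1 + math.exp(0.4 * (i - 20))),
      1 - 1 / (1 + math.exp(0.4 * (i - 20)))] for i in range(40)]
for _ in range(200):
    X = spatial_replicator_step(X, payoff, kernel, 0.1)
```

Note that the Euler step preserves each site's simplex constraint exactly, since the per-site increments sum to zero.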
Population games and deterministic evolutionary dynamics, 2014
Cited by 3 (0 self)

Abstract:
Population games describe strategic interactions among large numbers of small, anonymous agents. Behavior in these games is typically modeled dynamically, with agents occasionally receiving opportunities to switch strategies, basing their choices on simple myopic rules called revision protocols. Over finite time spans the evolution of aggregate behavior is well approximated by the solution of a differential equation. From a different point of view, every revision protocol defines a map—a deterministic evolutionary dynamic—that assigns each population game a differential equation describing the evolution of aggregate behavior in that game. In this chapter, we provide an overview of the theory of population games and deterministic evolutionary dynamics. We introduce population games through a series of examples and illustrate their basic geometric properties. We formally derive deterministic evolutionary dynamics from revision protocols, introduce the main families of dynamics—imitative/biological, best response, comparison to average payoffs, and pairwise comparison—and discuss their basic properties. Combining these streams, we consider classes of population games in which members of these families of dynamics converge to equilibrium; these classes include potential games, contractive games, games solvable by iterative solution concepts, and supermodular games. We relate these classes to the classical notion of an evolutionarily stable state (ESS) and to recent work on deterministic equilibrium selection. We present a variety of examples of cycling and chaos under evolutionary dynamics, as well as a general result on survival of strictly dominated strategies. Finally, we provide connections to other approaches to game dynamics, and indicate applications of evolutionary game dynamics to economics and social science.
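To make the derivation of a mean dynamic from a revision protocol concrete, here is a sketch of the pairwise-comparison (Smith) mean dynamic under Euler integration; the rock-paper-scissors payoffs and step size are illustrative choices:

```python
def smith_step(x, payoffs, dt):
    """Euler step of the Smith (pairwise comparison) mean dynamic:
    mass flows from strategy j to strategy i at rate x_j * [u_i - u_j]_+."""
    u = payoffs(x)
    n = len(x)
    new = []
    for i in range(n):
        inflow = sum(x[j] * max(u[i] - u[j], 0.0) for j in range(n))
        outflow = x[i] * sum(max(u[j] - u[i], 0.0) for j in range(n))
        new.append(x[i] + dt * (inflow - outflow))
    return new

# Standard rock-paper-scissors as the population game (illustrative).
def rps_payoffs(x):
    return [x[2] - x[1], x[0] - x[2], x[1] - x[0]]

x = [0.8, 0.1, 0.1]
for _ in range(1000):
    x = smith_step(x, rps_payoffs, 0.01)
```

Because total inflow equals total outflow across strategies, the state stays on the simplex; for small enough dt it also stays nonnegative.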
Game Theory and Distributed Control, 2012
Cited by 3 (1 self)

Abstract:
Game theory has traditionally been employed as a modeling tool for describing and influencing behavior in societal systems. Recently, game theory has emerged as a valuable tool for controlling or prescribing behavior in distributed engineered systems. The rationale for this new perspective stems from the parallels between the underlying decision making architectures in both societal systems and distributed engineered systems. In particular, both settings involve an interconnection of decision making elements whose collective behavior depends on a compilation of local decisions that are based on partial information about each other and the state of the world. Accordingly, there is extensive work in game theory that is relevant to the engineering agenda. Similarities notwithstanding, there remain important differences between the constraints and objectives in societal and engineered systems that require looking at game theoretic methods from a new perspective. This chapter provides an overview of selected recent developments of game theoretic methods in this role as a framework for distributed control in engineered systems.
Rapid Innovation Diffusion in Social Networks, 2013
Cited by 2 (0 self)

Abstract:
The diffusion of an innovation can be represented by a process in which agents choose perturbed best responses to what their neighbors are currently doing. Diffusion is said to be fast if the expected waiting time until the innovation spreads widely is bounded above independently of the size of the network. Previous work has identified specific topological properties of networks that guarantee fast diffusion. Here we apply martingale theory to derive topology-free bounds such that diffusion is fast whenever the payoff gain from the innovation is sufficiently high and the response function is sufficiently noisy. We also provide a simple method for computing an upper bound on the expected waiting time that holds for all networks. For example, under the logit response function it takes on average less than 80 revisions per capita for the innovation to diffuse widely in any network, provided that the error rate is at least 5% and the payoff gain (relative to the status quo) is at least 150%. Qualitatively similar results hold for other smoothed best response functions and populations that experience heterogeneous payoff shocks.
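The perturbed-response process can be sketched as follows, here with the logit rule on a ring network. The payoff normalization, noise level β, and network are illustrative assumptions; the 80-revisions figure quoted above is the paper's analytical bound, not something this toy derives:

```python
import math
import random

def logit_diffusion(adj, gain, beta, revisions, rng):
    """Asynchronous logit responses in a 2-strategy coordination game on a
    network: the innovation (1) pays 1 + gain per coordinating neighbor,
    the status quo (0) pays 1 per coordinating neighbor."""
    state = [0] * len(adj)
    for _ in range(revisions):
        i = rng.randrange(len(adj))
        adopters = sum(state[j] for j in adj[i])
        du = (1 + gain) * adopters - (len(adj[i]) - adopters)
        p_adopt = 1.0 / (1.0 + math.exp(-beta * du))  # logit response
        state[i] = 1 if rng.random() < p_adopt else 0
    return state

n = 50
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
final = logit_diffusion(ring, gain=1.5, beta=1.0, revisions=80 * n,
                        rng=random.Random(7))
```

With a 150% payoff gain and this noise level, adoption errors continually seed the innovation and adopted clusters expand, so the innovation typically spreads widely within the simulated 80 revisions per capita.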
Fast convergence in evolutionary models: A Lyapunov approach, 2014
Cited by 1 (0 self)

Abstract:
Evolutionary models in which N players are repeatedly matched to play a game have “fast convergence” to a set A if the models both reach A quickly and leave A slowly, where “quickly” and “slowly” refer to whether the expected hitting and exit times remain finite in the N → ∞ limit. We provide Lyapunov criteria which are sufficient for reaching quickly and leaving slowly, and apply them to a number of examples to illustrate how fast convergence depends on factors such as the degree of risk-dominance, noise levels, the nature of the decision rule, and the nature of the information on which players base their decisions.
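The hitting-time quantity in this definition can also be estimated directly by Monte Carlo simulation. A generic sketch, where the downward-biased walk standing in for "number of agents not yet playing the selected action" is purely illustrative:

```python
import random

def hitting_time(state, step, in_target, rng, max_steps=10**6):
    """Number of steps until the process first enters the target set."""
    t = 0
    while not in_target(state) and t < max_steps:
        state = step(state, rng)
        t += 1
    return t

# Illustrative: a downward-biased walk on the nonnegative integers hitting 0.
def biased_step(k, rng):
    return max(k - 1, 0) if rng.random() < 0.7 else k + 1

times = [hitting_time(30, biased_step, lambda k: k == 0, random.Random(s))
         for s in range(20)]
mean_time = sum(times) / len(times)
```

With drift −0.4 per step, the expected hitting time from 30 is roughly 75 steps, so the estimator stays far below the `max_steps` cap.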
Stochastic Learning Dynamics and Speed of Convergence in Population Games, Itai Arieli
Abstract:
We study how long it takes for large populations of interacting agents to come close to Nash equilibrium when they adapt their behavior using a stochastic better reply dynamic. Prior work considers this question mainly for 2×2 games and potential games; here we characterize convergence times for general weakly acyclic games, including coordination games, dominance solvable games, games with strategic complementarities, potential games, and many others with applications in economics, biology, and distributed control. If players' better replies are governed by idiosyncratic shocks, the convergence time can grow exponentially in the population size; moreover, this is true even in games with very simple payoff structures. However, if their responses are sufficiently correlated due to aggregate shocks, the convergence time is greatly accelerated; in fact, it is bounded for all sufficiently large populations. We provide explicit bounds on the speed of convergence as a function of key structural parameters, including the number of strategies, the length of the better reply paths, the extent to which players can influence the payoffs of others, and the desired degree of approximation to Nash equilibrium.
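A sketch of one such stochastic better-reply process under purely idiosyncratic revision; the pure coordination game (a weakly acyclic potential game, so better replies reach an equilibrium) and the parameters are illustrative assumptions:

```python
import random

def better_reply_step(state, num_strategies, payoff, rng):
    """A uniformly chosen agent moves to a uniformly drawn strategy that
    strictly improves its payoff, if one exists (idiosyncratic revision)."""
    i = rng.randrange(len(state))
    current = payoff(i, state[i], state)
    better = [s for s in range(num_strategies)
              if payoff(i, s, state) > current]
    if better:
        state[i] = rng.choice(better)
    return state

# Pure coordination: an agent's payoff is the number of others matching it.
def match_payoff(i, s, state):
    return sum(1 for j, t in enumerate(state) if j != i and t == s)

rng = random.Random(3)
state = [rng.randrange(3) for _ in range(12)]
for _ in range(3000):
    better_reply_step(state, 3, match_payoff, rng)
```

In this game every better reply increases the number of matched pairs, so the process is absorbed at a consensus state, which is the only profile with no improving move.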
Analysis and Control of Strategic Interactions in Finite Heterogeneous Populations under Best-Response Update Rule, Pouria Ramazi and Ming Cao
Abstract:
For a finite, well-mixed population of heterogeneous agents who choose to cooperate or defect in each round of an evolutionary game, we investigate how the number of cooperating agents changes over time when agents update their strategies using the myopic best-response rule, and we demonstrate how to control that number by changing the agents' payoff matrices. The agents are heterogeneous in that their payoff matrices may differ from one another; we focus on the specific case in which the payoff matrices, fixed throughout the evolution, correspond to prisoner's dilemma or snowdrift games. To carry out stability analysis, we identify the system's absorbing states, taking the number of cooperating agents as the random variable of interest. It is proven that when all the agents update frequently enough, the reachable final states are completely determined by the available types of payoff matrices. As a further step, we show how to control the final state by changing, at the beginning of the evolution, the types of the payoff matrices of a group of agents.
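The flavor of the update rule can be sketched as follows. The snowdrift best-response threshold and the population mix are illustrative assumptions, not the paper's exact payoff parametrization:

```python
import random

def myopic_br_step(types, state, rng):
    """A random agent best-responds to the current cooperator count.
    For prisoner's-dilemma ("PD") agents defection is strictly dominant;
    a snowdrift ("SD") agent cooperates iff few enough others cooperate
    (hypothetical threshold: fewer than half of the others)."""
    n = len(state)
    i = rng.randrange(n)
    others = sum(state) - state[i]
    if types[i] == "PD":
        state[i] = 0  # defection strictly dominant
    else:
        state[i] = 1 if others < (n - 1) / 2 else 0
    return state

rng = random.Random(5)
types = ["PD"] * 10 + ["SD"] * 10
state = [rng.randrange(2) for _ in range(20)]
for _ in range(2000):
    myopic_br_step(types, state, rng)
```

With these parameters the unique absorbing state has every PD agent defecting and every SD agent cooperating, illustrating the claim that the reachable final states are pinned down by the available payoff-matrix types.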
Cultural-Institutional Persistence under Autarchy, International Trade, and Factor Mobility, Santa Fe Institute Working Paper, 2013
Abstract:
SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily represent the views of the Santa Fe Institute. We accept papers intended for publication in peer-reviewed journals or proceedings volumes, but not papers that have already appeared in print. Except for papers by our external faculty, papers must be based on work done at SFI, inspired by an invited visit to or collaboration at SFI, or funded by an SFI grant. © NOTICE: This working paper is included by permission of the contributing author(s) as a means to ensure timely distribution of the scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the author(s). It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may be reposted only with the explicit permission of the copyright holder. www.santafe.edu
Fast Convergence in Semi-Anonymous Potential Games
Abstract:
Log-linear learning has been extensively studied in both the game-theoretic and distributed control literature. A central appeal of log-linear learning for distributed control of multi-agent systems is that this algorithm often guarantees that the agents' collective behavior will converge in probability to the optimal configuration. However, the worst-case convergence time can be prohibitively long, e.g., exponential in the number of players. Building off the work in ...
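Log-linear learning itself is simple to state: a revising agent chooses each strategy with probability proportional to exp(β × payoff). A minimal sketch on an identical-interest (hence potential) coordination game; the game, β, and population size are illustrative:

```python
import math
import random

def log_linear_step(state, num_strategies, payoff, beta, rng):
    """One revision of log-linear learning: agent i picks strategy s with
    probability proportional to exp(beta * payoff(i, s, state))."""
    i = rng.randrange(len(state))
    utils = [payoff(i, s, state) for s in range(num_strategies)]
    m = max(utils)  # subtract the max for numerical stability
    weights = [math.exp(beta * (u - m)) for u in utils]
    r = rng.random() * sum(weights)
    acc = 0.0
    for s, w in enumerate(weights):
        acc += w
        if r < acc:
            state[i] = s
            break
    return state

# Identical-interest coordination: payoff = number of others matching s,
# so the potential maximizer is a consensus profile.
def coord_payoff(i, s, state):
    return sum(1 for j, t in enumerate(state) if j != i and t == s)

rng = random.Random(2)
state = [rng.randrange(2) for _ in range(10)]
for _ in range(5000):
    log_linear_step(state, 2, coord_payoff, beta=4.0, rng=rng)
```

With β = 4 the stationary distribution concentrates on the potential-maximizing consensus states, and this small instance reaches one quickly; the exponential worst-case times mentioned above arise in larger, less favorable games.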