Results 1–10 of 161
Evolutionary Game Theory, 1995
Cited by 642 (9 self)
Abstract. Experimentalists frequently claim that human subjects in the laboratory violate game-theoretic predictions. It is here argued that this claim is usually premature. The paper elaborates on this theme by way of raising some conceptual and methodological issues in connection with the very definition of a game and of players' preferences, in particular with respect to potential context dependence, interpersonal preference dependence, backward induction and incomplete information.
Epistemic conditions for Nash equilibrium, 1991
Cited by 143 (6 self)
According to conventional wisdom, Nash equilibrium in a game “involves” common knowledge of the payoff functions, of the rationality of the players, and of the strategies played. The basis for this wisdom is explored, and it turns out that considerably weaker conditions suffice. First, note that if each player is rational and knows his own payoff function, and the strategy choices of the players are mutually known, then these choices form a Nash equilibrium. The other two results treat the mixed strategies of a player not as conscious randomization of that player, but as conjectures of the other players about what he will do. When n = 2, mutual knowledge of the payoff functions, of rationality, and of the conjectures yields Nash equilibrium. When n ≥ 3, mutual knowledge of the payoff functions and of rationality, and common knowledge of the conjectures yield Nash equilibrium when there is a common prior. Examples are provided showing these results to be sharp.
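The first result above can be made concrete with a minimal sketch: a strategy profile is a Nash equilibrium when no player can gain by deviating unilaterally. The function and the example game (a standard Prisoner's Dilemma with assumed payoffs, not taken from the paper) are illustrative only.

```python
# Illustrative sketch: check whether a pure strategy profile (r, c) is a
# Nash equilibrium of a 2-player bimatrix game. Payoffs are assumptions
# chosen for the example, not from the paper.

def is_nash(payoff_row, payoff_col, r, c):
    """True if (r, c) is a pure Nash equilibrium.

    payoff_row[i][j]: row player's payoff when row plays i, column plays j.
    payoff_col[i][j]: column player's payoff at the same profile.
    """
    # Row player has no profitable deviation given the column's choice.
    row_ok = all(payoff_row[r][c] >= payoff_row[i][c]
                 for i in range(len(payoff_row)))
    # Column player has no profitable deviation given the row's choice.
    col_ok = all(payoff_col[r][c] >= payoff_col[r][j]
                 for j in range(len(payoff_col[0])))
    return row_ok and col_ok

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
R = [[3, 0], [5, 1]]   # row player's payoffs
C = [[3, 5], [0, 1]]   # column player's payoffs

print(is_nash(R, C, 1, 1))  # mutual defection: True
print(is_nash(R, C, 0, 0))  # mutual cooperation: False
```

The check mirrors the paper's first result: given knowledge of one's own payoffs and the others' choices, rationality means no profitable unilateral deviation.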
The Electronic Mail Game: Strategic Behavior under 'Almost Common Knowledge', American Economic Review, 1989
Cited by 119 (0 self)
A framework for sequential planning in multiagent settings, Journal of Artificial Intelligence Research, 2005
Cited by 91 (27 self)
This paper extends the framework of partially observable Markov decision processes (POMDPs) to multiagent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs over physical states of the environment and over models of other agents, and they use Bayesian update to maintain their beliefs over time. The solutions map belief states to actions. Models of other agents may include their belief states and are related to agent types considered in games of incomplete information. We express the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents. We show that important properties of POMDPs, such as convergence of value iteration, the rate of convergence, and piecewise linearity and convexity of the value functions, carry over to our framework. Our approach complements a more traditional approach to interactive settings which uses Nash equilibria as a solution paradigm. We seek to avoid some of the drawbacks of equilibria, which may be non-unique and are not able to capture off-equilibrium behaviors. We do so at the cost of having to represent, process and continually revise models of other agents. Since the agent's beliefs may be arbitrarily nested, the optimal solutions to decision-making problems are only asymptotically computable. However, approximate belief updates and approximately optimal plans are computable. We illustrate our framework using a simple application domain, and we show examples of belief updates and value functions.
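The Bayesian update over models of other agents that the abstract describes can be sketched in its simplest form: a posterior over a finite set of candidate opponent models given an observed action. This is a generic Bayesian filter under assumed model names and likelihoods, not the paper's full I-POMDP formalism.

```python
# Illustrative sketch (not the paper's I-POMDP machinery): Bayesian
# update of one agent's belief over candidate models of another agent,
# after observing that agent's action. Model names and likelihoods are
# assumptions made up for the example.

def bayes_update(prior, likelihood, observed_action):
    """Return the posterior P(model | action) via Bayes' rule.

    prior:      dict model -> P(model)
    likelihood: dict model -> dict action -> P(action | model)
    """
    unnorm = {m: prior[m] * likelihood[m][observed_action] for m in prior}
    z = sum(unnorm.values())               # normalizing constant
    return {m: p / z for m, p in unnorm.items()}

prior = {"cooperator": 0.5, "defector": 0.5}
likelihood = {
    "cooperator": {"C": 0.9, "D": 0.1},
    "defector":   {"C": 0.2, "D": 0.8},
}
posterior = bayes_update(prior, likelihood, "C")
print(posterior["cooperator"])  # 0.45 / 0.55, about 0.818
```

Nesting such beliefs (beliefs over models that themselves contain beliefs) is what makes the exact solutions in the paper only asymptotically computable.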
A Rigorous, Operational Formalization of Recursive Modeling, 1995
Cited by 74 (15 self)
We present a formalization of the Recursive Modeling Method, which we have previously, somewhat informally, proposed as a method that autonomous artificial agents can use for intelligent coordination and communication with other agents. Our formalism is closely related to models proposed in the area of game theory, but contains new elements that lead to a different solution concept. The advantage of our solution method is that it always yields the optimal solution, which is the rational action of the agent in a multiagent environment, given the agent's state of knowledge and its preferences, and that it works in realistic cases when agents have only a finite amount of information about the agents they interact with.

Introduction. Since its initial conceptual development several years ago (Gmytrasiewicz, Durfee, & Wehe 1991a; 1991b), the Recursive Modeling Method (RMM) has provided a powerful decision-theoretic underpinning for coordination and communication decision-making, including dec...
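The finite-depth modeling the abstract emphasizes can be sketched in a simplified, level-k style recursion (this is not the RMM algorithm itself, only a hedged illustration): each agent best-responds to a finitely nested model of the other, bottoming out in a uniform-play assumption when no further information is available. The stag-hunt payoffs are an assumption for the example.

```python
# Hedged sketch in the spirit of finite recursive modeling, not the
# paper's RMM: at depth 0 the modeled agent is assumed to play
# uniformly at random; at depth k each agent best-responds to its
# depth-(k-1) model of the other. Payoffs are illustrative.

def best_response(my_payoff, other_mixed):
    """Index of the action maximizing expected payoff vs. a mixed strategy."""
    expected = [sum(p * my_payoff[i][j] for j, p in enumerate(other_mixed))
                for i in range(len(my_payoff))]
    return max(range(len(expected)), key=expected.__getitem__)

def recursive_choice(my_payoff, other_payoff, depth):
    """Action chosen when modeling the other agent `depth` levels deep."""
    if depth == 0:
        # No model of the other agent: assume uniform play and best-respond.
        m = len(my_payoff[0])
        return best_response(my_payoff, [1.0 / m] * m)
    # Model the other agent one level shallower, with the roles swapped.
    other_action = recursive_choice(other_payoff, my_payoff, depth - 1)
    point = [0.0] * len(my_payoff[0])
    point[other_action] = 1.0        # degenerate (pure) prediction
    return best_response(my_payoff, point)

# Stag hunt: action 0 = stag, action 1 = hare; both agents share payoffs.
S = [[4, 0], [3, 3]]
print(recursive_choice(S, S, 2))  # 1: the uniform base model makes hare optimal
```

The recursion terminates because the nesting is finite, matching the abstract's point that agents have only a finite amount of information about one another.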
A Model-Theoretic Analysis of Knowledge, in Proc. 25th IEEE Symposium on Foundations of Computer Science, 1988
Cited by 57 (11 self)
Understanding knowledge is a fundamental issue in many disciplines. In computer science, knowledge arises not only in the obvious contexts (such as knowledge-based systems), but also in distributed systems (where the goal is to have each processor "know" something, as in agreement protocols). A general semantic model of knowledge is introduced, to allow reasoning about statements such as "He knows that I know whether or not she knows whether or not it is raining." This approach more naturally models a state of knowledge than previous proposals (including Kripke structures). Using this notion of model, a model theory for knowledge is developed. This theory enables one to interpret the notion of a "finite amount of information". A preliminary version of this paper appeared in Proc. 25th IEEE Symp. on Foundations of Computer Science, 1984, pp. 268-278. This version is essentially identical to the version that appears in Journal of the ACM 38:2, 1991, pp. 382-428.
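The nested-knowledge statements the abstract quotes can be illustrated with the standard partition semantics of knowledge (an assumed textbook example, not the paper's own model-theoretic construction): an agent knows a fact at a world w iff the fact holds at every world the agent cannot distinguish from w.

```python
# Assumed toy example (not the paper's construction): partition
# semantics of knowledge. Worlds and partitions are made up for
# illustration.

def knows(partition, fact_worlds, w):
    """True iff the fact holds throughout the agent's cell containing w."""
    cell = next(c for c in partition if w in c)  # indistinguishable worlds
    return cell <= fact_worlds                   # fact true on whole cell

worlds = {"rain", "sun"}
alice = [{"rain"}, {"sun"}]    # Alice observes the weather directly
bob = [{"rain", "sun"}]        # Bob cannot tell rain from sun

raining = {"rain"}
print(knows(alice, raining, "rain"))   # True
print(knows(bob, raining, "rain"))     # False

# Nested knowledge, as in "he knows that I know ...": compute the set
# of worlds where Alice knows it is raining, then ask whether Bob knows
# that proposition.
alice_knows_rain = {w for w in worlds if knows(alice, raining, w)}
print(knows(bob, alice_knows_rain, "rain"))  # False
```

Treating "Alice knows φ" as itself a set of worlds is what lets the nesting iterate to any depth.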
Multiagent reinforcement learning: a critical survey, 2003
Cited by 52 (0 self)
We survey the recent work in AI on multiagent reinforcement learning (that is, learning in stochastic games). We then argue that, while exciting, this work is flawed. The fundamental flaw is unclarity about the problem or problems being addressed. After tracing a representative sample of the recent literature, we identify four well-defined problems in multiagent reinforcement learning, single out the problem that in our view is most suitable for AI, and make some remarks about how we believe progress is to be made on this problem.
A revelation principle for competing mechanisms, Journal of Economic Theory, 1999
Cited by 40 (9 self)
In modelling competition among mechanism designers, it is necessary to specify the set of feasible mechanisms. These specifications are often borrowed from the optimal mechanism design literature and exclude mechanisms that are natural in a competitive environment; for example, mechanisms that depend on the mechanisms chosen by competitors. This paper constructs a set of mechanisms that is universal in that any specific model of the feasible set can be embedded in it. An equilibrium for a specific model is robust if and only if it is an equilibrium also for the universal set of mechanisms. A key to the construction is a language for describing mechanisms that is not tied to any preconceived notions of the nature of competition.
Topology-free typology of beliefs, Journal of Economic Theory, 1998
Cited by 38 (5 self)
In their seminal paper, Mertens and Zamir (1985) proved the existence of a universal Harsanyi type space which consists of all possible types. Their method of proof depends crucially on topological assumptions. Whether such assumptions are essential to the existence of a universal space remained an open problem. We answer it here by proving that a universal type space does exist even when spaces are defined in purely measure-theoretic terms. Heifetz and Samet (1996) showed that coherent hierarchies of beliefs, in the measure-theoretic case, do not necessarily describe types. Therefore, the universal space here differs from all previously studied ones, in that it does not necessarily consist of all ...

We study here the foundations of that part of the theory of games with incomplete information that deals with players' beliefs. We study it in the broadest and most natural setup, that of probability (or measure) theory without any topological notions, which have always been used for this purpose until now. We show that even under this general setup ...