Results 1–10 of 98
Intelligent agents: Theory and practice
The Knowledge Engineering Review, 1995
"... The concept of an agent has become important in both Artificial Intelligence (AI) and mainstream computer science. Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent age ..."
Abstract

Cited by 1104 (80 self)
 Add to MetaCart
The concept of an agent has become important in both Artificial Intelligence (AI) and mainstream computer science. Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide these issues into three areas (though as the reader will see, the divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of designing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; these languages may embody principles proposed by theorists. The paper is not intended to serve as a tutorial introduction to all the issues mentioned; we hope instead simply to identify the most important issues, and point to work that elaborates on them. The article includes a short review of current and potential applications of agent technology.
The knowledge complexity of interactive proof systems
in Proc. 27th Annual Symposium on Foundations of Computer Science, 1985
"... Abstract. Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/nonHamiltoni ..."
Abstract

Cited by 1051 (39 self)
 Add to MetaCart
Abstract. Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the "knowledge" contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.

Key words: cryptography, zero knowledge, interactive proofs, quadratic residues
AMS(MOS) subject classifications: 68Q15, 94A60

1. Introduction. It is often regarded that saying a language L is in NP (that is, acceptable in nondeterministic polynomial time) is equivalent to saying that there is a polynomial-time "proof system" for L. The proof system we have in mind is one where on input x, a "prover" creates a string a, and the "verifier" then computes on x and a in time polynomial in the length of the binary representation of x to check that
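The NP-style "proof system" described here — a prover exhibits a certificate, a verifier checks it in time polynomial in the input — can be illustrated with the abstract's own Hamiltonian example. A minimal sketch (function and variable names are my own, not from the paper):

```python
# Polynomial-time verifier for the NP certificate "this graph is Hamiltonian":
# the prover supplies a tour; the verifier checks it without any search.
def verify_hamiltonian_tour(n, edges, tour):
    """n: number of vertices (0..n-1); edges: set of undirected edges
    as frozensets; tour: the prover's proposed vertex ordering."""
    if sorted(tour) != list(range(n)):  # must visit every vertex exactly once
        return False
    # every consecutive pair (wrapping around) must be an edge
    return all(frozenset((tour[i], tour[(i + 1) % n])) in edges
               for i in range(n))

# A 4-cycle graph: 0-1-2-3-0
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(verify_hamiltonian_tour(4, edges, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_tour(4, edges, [0, 2, 1, 3]))  # False: 0-2 not an edge
```

Note that this certificate reveals the tour itself — strictly more knowledge than the single bit Hamiltonian/non-Hamiltonian — which is precisely the leakage that the paper's zero-knowledge proofs are designed to avoid.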
Reaching Agreements Through Argumentation: A Logical Model and Implementation
Artificial Intelligence, 1998
"... In a multiagent environment, where selfmotivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representatio ..."
Abstract

Cited by 226 (11 self)
 Add to MetaCart
In a multi-agent environment, where self-motivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals. We present argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. We look at argumentation as a mechanism for achieving cooperation and agreements. Using categories identified from human multi-agent negotiation, we demonstrate how the logic can be used to specify argument formulation and evaluation. We also illustrate how the developed logic can be used to describe different types of agents. Furthermore, we present a general Automated Negotiation Agent which we implemented, based on the logical model. Using this system, a user can analyze and explore differe...
Reasoning Situated in Time I: Basic Concepts
1990
"... The needs of a realtime reasoner situated in an environment may make it appropriate to view errorcorrection and nonmonotonicity as much the same thing. This has led us to formulate situated (or step) logic, an approach to reasoning in which the formalism has a kind of realtime selfreference tha ..."
Abstract

Cited by 93 (42 self)
 Add to MetaCart
The needs of a real-time reasoner situated in an environment may make it appropriate to view error-correction and nonmonotonicity as much the same thing. This has led us to formulate situated (or step) logic, an approach to reasoning in which the formalism has a kind of real-time self-reference that affects the course of deduction itself. Here we seek to motivate this as a useful vehicle for exploring certain issues in commonsense reasoning. In particular, a chief drawback of more traditional logics is avoided: from a contradiction we do not have all wffs swamping the (growing) conclusion set. Rather, we seek potentially inconsistent, but nevertheless useful, logics where the real-time self-referential feature allows a direct contradiction to be spotted and corrective action taken, as part of the same system of reasoning. Some specific inference mechanisms for real-time default reasoning are suggested, notably a form of introspection relevant to default reasoning. Special treatment of ...
Alternative Semantics for Unawareness
Games and Economic Behavior, 2001
"... Modica and Rustichini [1994] provided a logic for reasoning about knowledge where agents may be unaware of certain propositions. However, their original approach had the unpleasant property that nontrivial unawareness was incompatible with partitional information structures. More recently, Modica an ..."
Abstract

Cited by 55 (9 self)
 Add to MetaCart
Modica and Rustichini [1994] provided a logic for reasoning about knowledge where agents may be unaware of certain propositions. However, their original approach had the unpleasant property that nontrivial unawareness was incompatible with partitional information structures. More recently, Modica and Rustichini [1999] have provided an approach that allows for nontrivial unawareness in partitional information structures. Here it is shown that their approach can be viewed as a special case of a general approach to unawareness
Non-Omniscient Belief as Context-Based Reasoning
In Proc. of the 13th International Joint Conference on Artificial Intelligence, 1993
"... This paper describes a general framework for the formalization of monotonic reasoning about belief in a multiagent environment. The agents* beliefs are modeled as logical theories. The reasoning about their beliefs is formalized in still another theory, which we call the theory of the computer. The ..."
Abstract

Cited by 42 (14 self)
 Add to MetaCart
This paper describes a general framework for the formalization of monotonic reasoning about belief in a multi-agent environment. The agents' beliefs are modeled as logical theories. The reasoning about their beliefs is formalized in still another theory, which we call the theory of the computer. The framework is used to model non-omniscient belief and shown to have many advantages. For instance, it allows for an exhaustive classification of the "basic" forms of non-logical omniscience and for their "composition" into the structure of the system modeling multi-agent omniscient belief.

1. The approach. This paper describes a general framework for the formalization of monotonic reasoning about belief in a multi-agent environment. The most common solution is to take a first-order (propositional) theory, to extend it using a set of modal operators B_i, and to take B_i A as meaning that agent a_i believes A (see for instance [Halpern and Moses, 1985]). There is only one theory of the world; however, this theory proves facts about the agents' beliefs. According to a first interpretation, this theory is taken to model things as they really are. It is therefore a finite (and possibly incomplete) presentation of what is true in the world, and the fact that B_i A is a theorem means that it is, in fact, the case that a_i believes A. According to another interpretation, this theory is taken to be the perspective that a generic reasoner has of the world. It is therefore a finite presentation of the reasoner's beliefs, and the fact that B_i A is a theorem means that the reasoner believes that a_i believes A. Once one accepts the second interpretation (as we do), a mechanized theory is naturally taken as representing the beliefs of the computer where it is implemented. Moreover, in the case of multi-agent belief, a further step is to have, together with the theory of the computer, one theory (at least, see later) for each agent.
Active Logics: A Unified Formal Approach to Episodic Reasoning
"... Artificial intelligence research falls roughly into two categories: formal and implementational. This division is not completely firm: there are implementational studies based on (formal or informal) theories (e.g., CYC, SOAR, OSCAR), and there are theories framed with an eye toward implementabili ..."
Abstract

Cited by 35 (2 self)
 Add to MetaCart
Artificial intelligence research falls roughly into two categories: formal and implementational. This division is not completely firm: there are implementational studies based on (formal or informal) theories (e.g., CYC, SOAR, OSCAR), and there are theories framed with an eye toward implementability (e.g., predicate circumscription). Nevertheless, formal/theoretical work tends to focus on very narrow problems (and even on very special cases of very narrow problems) while trying to get them "right" in a very strict sense, while implementational work tends to aim at fairly broad ranges of behavior but often at the expense of any kind of overall conceptually unifying framework that informs understanding. It is sometimes urged that this gap is intrinsic to the topic: intelligence is not a unitary thing for which there will be a unifying theory, but rather a "society" of subintelligences whose overall behavior cannot be reduced to useful characterizing and predictive principles.
The Logic of Justification
Cornell University, 2008
"... We describe a general logical framework, Justification Logic, for reasoning about epistemic justification. Justification Logic is based on classical propositional logic augmented by justification assertions t:F that read t is a justification for F. Justification Logic absorbs basic principles origin ..."
Abstract

Cited by 30 (4 self)
 Add to MetaCart
We describe a general logical framework, Justification Logic, for reasoning about epistemic justification. Justification Logic is based on classical propositional logic augmented by justification assertions t:F that read "t is a justification for F." Justification Logic absorbs basic principles originating from both mainstream epistemology and the mathematical theory of proofs. It contributes to the studies of the well-known Justified True Belief vs. Knowledge problem. We state a general Correspondence Theorem showing that behind each epistemic modal logic, there is a robust system of justifications. This renders a new, evidence-based foundation for epistemic logic. As a case study, we offer a resolution of the Goldman-Kripke 'Red Barn' paradox and analyze Russell's 'prime minister example' in Justification Logic. Furthermore, we formalize the well-known Gettier example and reveal hidden assumptions and redundancies in Gettier's reasoning.
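To make the t:F reading concrete: in the standard presentation of Artemov's basic system, justification terms are closed under two core operations (the notation below follows the abstract's t:F convention; this is a sketch of the usual axioms, not a quotation from the paper):

```latex
% Core operations on justification terms in the basic justification logic J:
\begin{align*}
  &\text{Application:} && s{:}(F \rightarrow G) \rightarrow (t{:}F \rightarrow (s \cdot t){:}G)\\
  &\text{Sum:}         && s{:}F \rightarrow (s + t){:}F, \qquad t{:}F \rightarrow (s + t){:}F
\end{align*}
```

Application composes justifications along modus ponens; sum pools evidence without weakening it. Factivity (t:F → F) is added only in the systems intended to model knowledge rather than belief.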
Provability logic
Handbook of Philosophical Logic, 2nd ed., 2004
"... We describe a general logical framework, Justification Logic, for reasoning about epistemic justification. Justification Logic is based on classical propositional logic augmented by justification assertions t:F that read t is a justification for F. Justification Logic absorbs basic principles origin ..."
Abstract

Cited by 26 (9 self)
 Add to MetaCart
We describe a general logical framework, Justification Logic, for reasoning about epistemic justification. Justification Logic is based on classical propositional logic augmented by justification assertions t:F that read "t is a justification for F." Justification Logic absorbs basic principles originating from both mainstream epistemology and the mathematical theory of proofs. It contributes to the studies of the well-known Justified True Belief vs. Knowledge problem. As a case study, we formalize Gettier examples in Justification Logic and reveal hidden assumptions and redundancies in Gettier reasoning. We state a general Correspondence Theorem showing that behind each epistemic modal logic, there is a robust system of justifications. This renders a new, evidence-based foundation for epistemic logic.
A Complete and Decidable Logic for Resource-Bounded Agents
in Proceedings of the Third International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2004), 2004
"... We propose a contextlogic style formalism, Timed Reasoning Logics (TRL), to describe resourcebounded reasoners who take time to derive consequences of their knowledge. The semantics of TRL is grounded in the agent's computation, allowing an unambiguous ascription of the set of formulas which the a ..."
Abstract

Cited by 21 (7 self)
 Add to MetaCart
We propose a context-logic style formalism, Timed Reasoning Logics (TRL), to describe resource-bounded reasoners who take time to derive consequences of their knowledge. The semantics of TRL is grounded in the agent's computation, allowing an unambiguous ascription of the set of formulas which the agent actually knows at time t. We show that TRL can capture various rule application and conflict resolution strategies that a rule-based agent may employ, and analyse two examples in detail: TRL(STEP), which models an "all rules at each cycle" strategy similar to that assumed in step logic [4], and TRL(CLIPS), which models a "single rule at each cycle" strategy similar to that employed by the CLIPS [21] rule-based system architecture. We prove general completeness and decidability results for TRL(STEP).
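The "all rules at each cycle" strategy modeled by TRL(STEP) amounts to synchronous forward chaining: what the agent knows at time t+1 is exactly what it knew at time t plus every one-step consequence. A toy sketch of this time-indexed knowledge ascription (rule format and all names are my own illustration, not the paper's formalism):

```python
# Synchronous "all rules at each cycle" forward chaining: at each tick,
# every rule whose premises are currently known fires, so knowledge is
# indexed by time rather than closed under consequence all at once.
def step(known, rules):
    derived = {head for premises, head in rules
               if all(p in known for p in premises)}
    return known | derived

# Rules as (premises, conclusion) pairs; facts as strings.
rules = [({"bird", "not_penguin"}, "flies"),
         ({"tweety"}, "bird"),
         ({"tweety"}, "not_penguin")]

known = {"tweety"}           # t = 0
known = step(known, rules)   # t = 1: adds "bird", "not_penguin"
known = step(known, rules)   # t = 2: only now does "flies" become known
print(sorted(known))
```

The point of the time index is visible here: "flies" is not known at t = 1 even though it follows logically, because deriving it takes a further cycle — the kind of resource-bounded ascription the logic makes precise.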