Results 1 - 10 of 12
Agent Factory: A Framework for the Engineering of Agent-Oriented Applications -- Volume 1
, 2002
"... Agent-Oriented Software Engineering (AOSE) is an emerging paradigm within industry that offers much potential of the management of the increasing levels of complexity inherent within modern software systems. For this paradigm to gain widespread acceptance, it is vital that we develop comprehensive f ..."
Cited by 49 (18 self)
Agent-Oriented Software Engineering (AOSE) is an emerging paradigm within industry that offers much potential for the management of the increasing levels of complexity inherent in modern software systems. For this paradigm to gain widespread acceptance, it is vital that we develop comprehensive frameworks that support the development and deployment of agent-oriented applications. This thesis contributes to this through the development of a four-tier development framework entitled the Agent Factory System. This framework combines an agent programming language that is founded upon a formal agent theory of commitment; a run-time environment that delivers a set of services that support the deployment of agent-oriented applications written in this programming language; an integrated development environment that delivers a toolkit that supports the development of these applications; and a development methodology that promotes a structured approach to the use of this toolkit. Finally, we evaluate the Agent Factory System in the context of various real-world
VIPER: A VIsual Protocol EditoR
- In Proceedings of COORDINATION 2004
, 2004
"... Agent interactions play a crucial role in Multi-Agent Systems. ..."
Cited by 11 (7 self)
Agent interactions play a crucial role in Multi-Agent Systems.
Rational Agents and the Processes and States of Negotiation
, 2002
"... This thesis shows how a verified and unambiguous theory of a protocol with known properties enables rational agents to interact in a negotiation process and to finally satisfy their goals using strategies and plans. This is achieved through an application of an extended form of propositional dynamic ..."
Cited by 8 (5 self)
This thesis shows how a verified and unambiguous theory of a protocol with known properties enables rational agents to interact in a negotiation process and to finally satisfy their goals using strategies and plans. This is achieved through an application of an extended form of propositional dynamic logic in the verification, validation and reasoning about interaction protocols in a multi-agent system. Agent interaction, as a key aspect of multi-agent systems and automated negotiation, has led to a number of proposed agent communication languages and protocols. In contrast to a language, a rational agent can reason about a protocol to strategically plan possible courses of action in a bid to achieve its goals. Existing techniques for specifying protocols have resulted in faulty and ambiguous interaction protocols, leading to contradictory beliefs between agents. There remains a need for formally specifying and validating sharable interaction protocols with desirable properties. This thesis specifies, verifies and analyses protocols for automated negotiation through the application of Artificial Intelligence techniques.
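The dynamic-logic approach summarised above lends itself to stating protocol properties as modal formulas. The LaTeX fragment below is an illustrative sketch rather than the thesis's own axiomatisation: [a]phi reads "after every execution of a, phi holds", <a>phi reads "some execution of a ends in a state satisfying phi", and the atomic actions propose, accept and reject are hypothetical protocol moves introduced here.

% Illustrative PDL-style protocol properties; action names are hypothetical.
\[
  [\mathit{propose}^{*};\mathit{accept}]\;\mathit{agreement}
  \qquad
  [\mathit{propose}^{*}]\,\langle \mathit{accept} \cup \mathit{reject} \rangle\,\mathit{closed}
\]

The first formula is a safety-style property (every accepted offer yields an agreement); the second says that after any sequence of proposals the responder can still close the negotiation. Verification then amounts to checking that every run permitted by the protocol satisfies such formulas, which is what gives agents an unambiguous basis for planning their negotiation moves.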
Making it up as they go along: A Theory of Reactive Cooperation
- Agents and MultiAgent Systems – Formalisms, Methodologies, and Applications, LNAI Volume 1441
, 1998
"... . In this article, we present a formal theory of on-the-fly cooperation. This is a new model of joint action, which allows for the possibility that a group of cooperating agents will, in general, have neither the information nor the time available to compute an entire joint plan before beginning to ..."
Cited by 7 (0 self)
In this article, we present a formal theory of on-the-fly cooperation. This is a new model of joint action, which allows for the possibility that a group of cooperating agents will, in general, have neither the information nor the time available to compute an entire joint plan before beginning to work. It proposes that cooperating agents therefore need only reason about what to do next: what represents a believable next action. Thus, agents literally make it up as they go along: a plan only unfolds as cooperation continues. A detailed rationale is presented for the new model, and the components of the model are discussed at length. The article includes a summary of the logic used to formalise the new model, and some remarks on refinements and future research issues.
1 Introduction
A common assumption in early AI planning research was that in order to achieve a goal φ, an agent should first compute an entire plan π for φ, and then execute π [11]. Within the AI planning community, th...
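A minimal Python sketch of the "make it up as they go along" control loop follows. It is an illustration of the idea rather than the authors' formal model: the State/Action encoding and the per-agent proposal functions are assumptions introduced here, and the team commits to just one mutually believable next action per round instead of computing a joint plan up front.

from typing import Callable, List, Set, Tuple

State = frozenset   # toy encoding: a state is a set of facts
Action = str

def cooperate_on_the_fly(
    agents: List[Callable[[State, List[Action]], Set[Action]]],
    goal: Callable[[State], bool],
    apply_action: Callable[[State, Action], State],
    state: State,
    max_rounds: int = 100,
) -> Tuple[List[Action], bool]:
    """No joint plan is computed in advance: each round the team agrees only on
    the next believable action, so the plan unfolds as cooperation continues."""
    history: List[Action] = []
    for _ in range(max_rounds):
        if goal(state):
            return history, True
        # Each agent proposes the actions it currently believes are acceptable
        # next steps, given its own (possibly incomplete) information.
        proposals = [agent(state, history) for agent in agents]
        candidates = set.intersection(*proposals) if proposals else set()
        if not candidates:
            return history, False          # no mutually believable next action
        action = sorted(candidates)[0]     # any shared tie-breaking convention
        state = apply_action(state, action)
        history.append(action)
    return history, False

The contrast with plan-first approaches is that the goal is consulted only to decide whether to stop; it is never used to generate a complete action sequence in advance.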
Resource-Bounded Reasoning about Knowledge
, 2001
"... The main goal of the thesis is to develop a framework for modeling resource-bounded reasoning of realistic agents and to provide formal theories of rational agency with a solid epistemic foundation. The concept "agent" has turned out to be a very useful abstraction for conceptualization in ..."
Cited by 2 (0 self)
The main goal of the thesis is to develop a framework for modeling resource-bounded reasoning of realistic agents and to provide formal theories of rational agency with a solid epistemic foundation. The concept "agent" has turned out to be a very useful abstraction for conceptualization in different areas of computer science. In most agent theories, agents are treated as intentional systems which are characterized by means of mentalistic concepts like knowledge, beliefs, goals, and intentions. Among them, the epistemic concepts (knowledge and belief) are among the most important ones and have been studied most intensively. They are usually formalized using systems of modal logic.
However, the modal approach to epistemic logic has a major drawback: it suffers from the so-called logical omniscience problem (LOP). It requires agents to know all logical truths and all logical consequences of their knowledge. So the modal approach to epistemic logic is not suited to formalize resource-bounded reasoning, and the issue of resource-boundedness remains one of the main foundational problems of any agent theory that is developed on the basis of modal epistemic logic.
To solve the LOP I propose a strategy of modeling knowledge that takes the cost of reasoning seriously. The main intuition is that an agent may not know the consequences of his knowledge if he does not perform the necessary reasoning actions. Because reasoning requires resources, it cannot be safely assumed that the agent can compute his (implicit) knowledge if he does not have enough resources to perform the required reasoning. In modal epistemic logic, the usual form of an epistemic axiom is: "if an agent knows all premises of a valid inference rule then he also knows the conclusion". In contrast, I have argued that the correct form of epistemic axioms should be: "if an agent knows all premises of a valid inference rule and if he performs the right reasoning, then he will know the conclusion as well". This idea is captured formally using a variant of dynamic logic: the result is a family of dynamic-epistemic logics that formalize the concept of explicit knowledge, solve all variants of the logical omniscience problem, and at the same time account for the intuition that agents are rational. Explicit knowledge can, in contrast to implicit knowledge, provide justification for actions.
However, explicit knowledge alone is not enough for modeling agency. First, there are too few statements about explicit knowledge that can claim validity. Second, it is not the only kind of knowledge that agents can act upon. Certain actions depend not only on what agents currently know but also on what they can compute within specific amounts of time. If an agent needs to make a decision within 1 hour, then anything that he can compute within that time is relevant for his decision. Thus, in order to predict and to explain an agent's action correctly we need a framework for describing what an agent can know under some specified resource constraints. For that purpose I have introduced a concept of knowledge which contains a direct reference to the amount of available resources. It can be described informally as follows: an agent knows p within n time units if he can compute p reliably within n units of time. That is, if he chooses to compute p then he will succeed after at most n time units. The qualification "reliably" makes the agent's action predictable: agents can act upon knowledge that can be computed reliably.
For formalizing this concept of knowledge, I have provided a framework that combines epistemic logic with complexity analysis. I have shown that within that framework, resource-bounded reasoning can be formalized correctly, the relationship between knowledge, reasoning, and the availability of resources can be established, the problems of traditional approaches can be avoided, and rich epistemic logics can be developed which can account adequately for our intuitions about knowledge.
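The shift described above, from the classical closure axiom to explicit and then resource-indexed knowledge, can be written schematically in LaTeX as follows. The notation is an illustrative reconstruction rather than the thesis's own symbols: K_i is knowledge of agent i, [do_i(mp)] is a dynamic modality for agent i performing one modus ponens step, K_i^{<=n} is knowledge the agent can compute reliably within n time units, and c stands for an assumed cost of a single inference step.

% Illustrative reconstruction; the symbols are not necessarily those of the thesis.
\begin{align*}
  \text{classical closure:}\quad
    & (K_i\varphi \wedge K_i(\varphi \to \psi)) \to K_i\psi \\
  \text{after an explicit reasoning step:}\quad
    & (K_i\varphi \wedge K_i(\varphi \to \psi)) \to [\mathit{do}_i(\mathit{mp})]\,K_i\psi \\
  \text{resource-indexed:}\quad
    & (K_i^{\le m}\varphi \wedge K_i^{\le k}(\varphi \to \psi)) \to K_i^{\le m+k+c}\psi
\end{align*}

Only the first schema forces logical omniscience; the second makes knowing the conclusion conditional on actually performing the inference, and the third additionally keeps track of how much time that inference consumes.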
On the epistemic foundations of agent theories
- In this volume
"... Abstract. We argue that none of the existing epistemic logics can adequately serve the needs of agent theories. We suggest a new concept of knowledge which generalizes both implicit and explicit knowledge and argue that this is the notion we need to formalize agents in Distributed Arti cial Intellig ..."
Cited by 1 (0 self)
We argue that none of the existing epistemic logics can adequately serve the needs of agent theories. We suggest a new concept of knowledge which generalizes both implicit and explicit knowledge and argue that this is the notion we need to formalize agents in Distributed Artificial Intelligence. A logic of the new concept is developed which is formally and practically adequate in the following sense: first, it does not suffer from any kind of logical omniscience. Second, it can account for the intuition that agents are rational, though not hyper-rational. Third, it is expressive enough. The advantages of the new logic over other formalisms are demonstrated by showing that none of the existing systems can fulfill all these requirements simultaneously.
REACTIVE (RE) PLANNING AGENTS IN A DYNAMIC ENVIRONMENT
"... Abstract: Intelligent agents are powerful tools for complex and dynamic problems. Belief Desire Intension (BDI) is one of the most popular agent architectures for reactive goal directed agents. Planning is intrinsic for intelligent behaviour. But planning from first principle is costly in terms of ..."
Intelligent agents are powerful tools for complex and dynamic problems. Belief Desire Intention (BDI) is one of the most popular agent architectures for reactive, goal-directed agents. Planning is intrinsic to intelligent behaviour, but planning from first principles is costly in terms of computation time and resources. BDI agents retain their reactive property by avoiding real-time planning, relying instead on a predefined plan library designed by agent designers. BDI agents look for a plan in the library to achieve their goals; if an agent cannot find a suitable plan, it fails to achieve the goal. It would therefore be useful to have some real-time look-ahead planning capability within the BDI framework. In this paper we propose an architecture that includes (re)planning in BDI agents. The proposed architecture describes how to integrate a real-time planner with replanning capability into the current BDI architecture. Replanning capability is important for reactive behaviour.
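The integration described in this abstract can be pictured with the short Python sketch below. It is an assumption-laden illustration rather than the authors' architecture: the plan library, the first-principles planner and the execute function are hypothetical stand-ins, and a failed step simply triggers replanning from the current beliefs.

from typing import Callable, Dict, List, Optional

Beliefs = set        # toy encoding: beliefs are a set of ground facts
Plan = List[str]

def bdi_cycle(
    beliefs: Beliefs,
    goal: str,
    plan_library: Dict[str, Plan],                         # goal -> precompiled plan
    planner: Callable[[Beliefs, str], Optional[Plan]],     # real-time, first-principles planner
    execute: Callable[[Beliefs, str], Optional[Beliefs]],  # returns None when a step fails
    max_replans: int = 3,
) -> bool:
    """Prefer a canned plan from the library (cheap, reactive); fall back to
    real-time planning when none matches, and replan when execution fails."""
    plan = plan_library.get(goal) or planner(beliefs, goal)
    replans = 0
    while plan:
        step, *rest = plan
        result = execute(beliefs, step)
        if result is None:                     # environment changed under us
            if replans >= max_replans:
                return False
            replans += 1
            plan = planner(beliefs, goal)      # (re)planning from current beliefs
            continue
        beliefs, plan = result, rest
    return goal in beliefs

Keeping the library lookup first preserves the reactive character of BDI execution; the planner is consulted only when the precompiled plans run out or a plan step fails in the changed environment.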
Knowledge, Logical Omniscience, and Resource-bounded Reasoning
"... Agent theories typically use modal epistemic logic for modeling knowledge of agents. Since the modal approach to epistemic logic cannot formalize resource-bounded reasoning adequately, it it not suited to describe realistic, implementable agents. We develop a framework for solving that problem. We i ..."
Agent theories typically use modal epistemic logic for modeling the knowledge of agents. Since the modal approach to epistemic logic cannot formalize resource-bounded reasoning adequately, it is not suited to describe realistic, implementable agents. We develop a framework for solving that problem. We introduce the notion of algorithmic knowledge, a concept that establishes a direct link between an agent's available resources and his knowledge. We show that our concept of knowledge is useful for modeling resource-bounded agents. Moreover, because a resource-bounded agent is naturally not logically omniscient, all variants of the logical omniscience problem are solved intuitively within our framework.
unknown title
"... Representing knowledge of resource-bounded agents We suggest a novel way for formalizing knowledge and belief of intelligent agent and argue that this is the notion we need to formalize agents in Distributed Artificial Intelligence. We introduce the notion of algorithmic knowledge — a concept that i ..."
Representing knowledge of resource-bounded agents
We suggest a novel way of formalizing the knowledge and belief of intelligent agents and argue that this is the notion we need to formalize agents in Distributed Artificial Intelligence. We introduce the notion of algorithmic knowledge, a concept that is suited for establishing a direct link between an agent's available resources and his knowledge. We show that our concept of knowledge is useful for modeling resource-bounded agents. Moreover, because a resource-bounded agent is naturally not logically omniscient, all variants of the logical omniscience problem are solved intuitively within our framework.
Systems for Interactive Multi-Actor Spatial Planning
, 2006
"... Dit onderzoek is uitgevoerd binnen de onderzoekschool Mansholt ..."