Incorporating Helpful Behavior into Collaborative Planning
"... This paper considers the design of agent strategies for deciding whether to help other members of a group with whom an agent is engaged in a collaborative activity. Three characteristics of collaborative planning must be addressed by these decision-making strategies: agents may have only partial inf ..."
Abstract
-
Cited by 16 (6 self)
- Add to MetaCart
(Show Context)
This paper considers the design of agent strategies for deciding whether to help other members of a group with whom an agent is engaged in a collaborative activity. Three characteristics of collaborative planning must be addressed by these decision-making strategies: agents may have only partial information about their partners' plans for sub-tasks of the collaborative activity; the effectiveness of helping may not be known a priori; and helping actions have some associated cost. The paper proposes a novel probabilistic representation of other agents' beliefs about the recipes selected for their own or for the group activity, given partial information. This representation is compact, and thus makes reasoning about helpful behavior tractable. The paper presents a decision-theoretic mechanism that uses this representation.
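A minimal sketch of the kind of decision rule this abstract describes, assuming a flat probability table over a partner's candidate recipes rather than the paper's actual representation (function and variable names here are illustrative):

# Hedged sketch: expected-utility test for offering help, assuming a simple
# probability table over the recipes a partner might be following.
def should_help(recipe_probs, benefit_if_needed, help_cost):
    """recipe_probs maps recipe name -> (probability, needs_help flag)."""
    expected_benefit = sum(p * benefit_if_needed
                           for p, needs_help in recipe_probs.values()
                           if needs_help)
    return expected_benefit > help_cost

beliefs = {"recipe_A": (0.7, True),   # helping is useful if the partner follows recipe A
           "recipe_B": (0.3, False)}  # not useful under recipe B
print(should_help(beliefs, benefit_if_needed=10.0, help_cost=4.0))  # True: 0.7 * 10 = 7 > 4

The point of the sketch is only that the expected benefit of helping is weighed against the cost of the helpful action under uncertainty about the partner's plan.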
Group Intention is Social Choice with Commitment
"... Abstract. A collaborative group is commonly defined as a set of agents, which share information and coordinate activities while working towards a common goal. How do groups decide which are their common goals and what to intend? According to the much cited theory of Cohen and Levesque, an agent inte ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
A collaborative group is commonly defined as a set of agents that share information and coordinate activities while working towards a common goal. How do groups decide which are their common goals and what to intend? According to the much-cited theory of Cohen and Levesque, an agent intends g if it has chosen to pursue goal g and has committed itself to making g happen. Following the same line of reasoning, a group intention should be a collectively chosen goal with commitment. The literature often considers a collective goal to be one of those individual goals that are shared by all members. This approach presumes that a group goal is also an individual one and that the agents can act as a group if they share the beliefs relevant to this goal. This is not necessarily the case. We construct an abstract framework for groups in which common goals are determined by social choice. Our framework uses judgment aggregation to choose a group goal and a multi-modal multi-agent logic to define commitment and revision strategies for group intentions.
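One simple way to illustrate choosing a group goal by judgment aggregation is proposition-wise majority voting; the sketch below assumes that rule purely for illustration and does not reproduce the paper's framework:

# Hedged sketch: select a group goal by majority voting over the goals each
# agent accepts. Goal names and the aggregation rule are illustrative only.
from collections import Counter

def majority_goal(judgments):
    """judgments: one dict per agent, mapping candidate goal -> accepted (bool)."""
    votes = Counter(goal for j in judgments for goal, accepted in j.items() if accepted)
    if not votes:
        return None
    goal, count = votes.most_common(1)[0]
    return goal if count > len(judgments) / 2 else None

agents = [{"deliver": True, "explore": False},
          {"deliver": True, "explore": True},
          {"deliver": False, "explore": True}]
print(majority_goal(agents))  # 'deliver' (accepted by 2 of 3 agents; ties resolve by listing order)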
Dynamic intention structures I: a theory of intention representation
- AUTON AGENT MULTI-AGENT SYST
"... ..."
unknown title
"... Agents intentionality, capabilities and the performance of systems of innovation ..."
Abstract
- Add to MetaCart
Agents intentionality, capabilities and the performance of systems of innovation
TRUSTED BELIEFS FOR HELPFUL BEHAVIOR WHEN BUILDING WEB SERVICES
"... Abstract. Composite software services often present uncertainty over their non-functional properties. To tackle this, one could model them as shared goals of an agent team which aims at maximizing the likelihood of success of the joint task. An architect is in charge with picking services and provid ..."
Abstract
- Add to MetaCart
(Show Context)
Composite software services often present uncertainty over their non-functional properties. To tackle this, one can model them as shared goals of an agent team that aims at maximizing the likelihood of success of the joint task. An architect is in charge of picking services and providers, while a consultant helps, when possible, by suggesting alternative approaches. The multinomial version of the "Belief Recipe Tree" structure relies on beliefs built from prior mutual experiences of the consultant with various providers and/or abstract plans, revised after each interaction, and exhibits higher flexibility in several web-service building scenarios.
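The multinomial belief idea can be illustrated with a count-based update over interaction outcomes; the sketch below is a hypothetical simplification, not the Belief Recipe Tree structure itself, and all outcome names are invented:

# Hedged sketch: count-based multinomial beliefs about a provider's outcomes,
# revised after every interaction.
class ProviderBeliefs:
    def __init__(self, outcomes=("success", "slow", "failure")):
        # Uniform pseudo-counts so every outcome keeps nonzero probability.
        self.counts = {o: 1 for o in outcomes}

    def update(self, outcome):
        self.counts[outcome] += 1

    def probability(self, outcome):
        return self.counts[outcome] / sum(self.counts.values())

beliefs = ProviderBeliefs()
for observed in ["success", "success", "failure"]:
    beliefs.update(observed)
print(round(beliefs.probability("success"), 2))  # 0.5 (3 of 6 total pseudo-counts)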
unknown title
"... Abstract. This paper addresses an important problem in multi-agent coordination: the formal representation of parameters in the content of agent intentions that are only partially specified (e.g., when the intended action has not yet been executed and values for the parameters have not yet been chos ..."
Abstract
- Add to MetaCart
(Show Context)
This paper addresses an important problem in multi-agent coordination: the formal representation of parameters in the content of agent intentions that are only partially specified (e.g., when the intended action has not yet been executed and values for the parameters have not yet been chosen, or the authority for choosing such values has been delegated to others). For example, Abe might intend to rent "whatever car Zoe tells him to", in which case the problem is how to formally represent the quoted clause (i.e., the "whatever" content). The paper presents a two-pronged approach. First, it uses the event calculus to model declarative speech acts which agents use to establish facts about parameters in a social context. Second, it partitions the content of agent intentions into (1) a condition that the agent should refrain from determining and (2) a goal that the agent should strive to achieve. The satisfaction conditions of such intentions treat these types of content differently; however, they can share variables and thus are linked in a restricted sense.
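A loose illustration of an intention with a parameter the agent refrains from determining, bound only by another agent's declaration; this does not reproduce the paper's event-calculus formalization, and all names are hypothetical:

# Hedged sketch: Abe's intention to rent "whatever car Zoe tells him to".
class DeferredIntention:
    def __init__(self, goal, deferred_param, authority):
        self.goal = goal                  # e.g. "rent(car)"
        self.deferred_param = deferred_param
        self.authority = authority        # who is entitled to bind the parameter
        self.binding = None

    def declare(self, speaker, value):
        # A declarative speech act by the authorized agent establishes the binding.
        if speaker == self.authority and self.binding is None:
            self.binding = value

    def satisfied_by(self, executed_action):
        if self.binding is None:
            return False
        return executed_action == self.goal.replace(self.deferred_param, self.binding)

abe = DeferredIntention(goal="rent(car)", deferred_param="car", authority="Zoe")
abe.declare("Zoe", "red_sedan")
print(abe.satisfied_by("rent(red_sedan)"))  # True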
Accommodating Human Variability in Human-Robot Teams through Theory of Mind
- Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
"... The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human team ..."
Abstract
- Add to MetaCart
The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot uses a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause of the human's behavior. This allows the cognitive robot to account for variance due both to different knowledge and beliefs about the world and to the different possible paths the human could take with a given set of knowledge and beliefs. An experiment showed that cognitive robots equipped with this functionality are viewed as more natural and intelligent teammates than robots that say nothing when presented with human variability and robots that simply point out any discrepancies between the human's expected and actual behavior. Overall, this analysis leads to an effective, general approach for determining what thought process is leading to a human's actions.
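The general idea of the simulation analysis can be sketched as testing candidate cognitive models against the observed action and keeping the one that predicts it; the model names below are hypothetical and the sketch is not the authors' implementation:

# Hedged sketch: explain an unexpected human action by simulating candidate
# cognitive models and selecting the one that predicts the observation.
def explain_action(observed_action, candidate_models):
    """candidate_models: model name -> zero-argument function predicting the human's action."""
    matches = [name for name, predict in candidate_models.items()
               if predict() == observed_action]
    return matches[0] if matches else None

models = {
    "believes_door_locked": lambda: "take_long_route",
    "believes_door_open":   lambda: "take_short_route",
}
print(explain_action("take_long_route", models))  # 'believes_door_locked'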
Using Socially Deliberating Agents in Organized Settings
"... Abstract. Recently there is an increased interest in social agency and in designing and building organizations of agents. In this paper we view an organization as an interrelated set of groups. Each group has an explicit structure in terms of positions and their interrelations. Agents in groups deli ..."
Abstract
- Add to MetaCart
(Show Context)
Recently there has been increased interest in social agency and in designing and building organizations of agents. In this paper we view an organization as an interrelated set of groups. Each group has an explicit structure in terms of positions and their interrelations. Agents in groups deliberate socially, distinguishing between their individual and group attitudes: each agent is able to agree on and accept certain attitudes as attitudes of the groups it belongs to. Acting as group members, agents must be able to act on the basis of these group mental attitudes rather than on the basis of their individual beliefs. This issue, although ultimately important, has not been given much attention in the agent community. The objective of this paper is (a) to propose a generic design pattern for building agent organizations in which the constituent groups build and maintain their own group goals and beliefs according to their needs and the environmental conditions, (b) to present the functionality of socially deliberating agents that act as group members in organized settings, and (c) to report on the development of a prototype system that comprises agents implementing this kind of social deliberation.
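A rough sketch of an agent that keeps individual and group-accepted attitudes separate and prefers the group's attitude when acting as a member; this is an assumption-laden illustration, not the paper's design pattern, and all propositions are invented:

# Hedged sketch: separating individual beliefs from attitudes accepted as a group's,
# with the group attitude taking precedence when the agent acts in that role.
class SocialAgent:
    def __init__(self):
        self.individual_beliefs = {}
        self.group_beliefs = {}   # group name -> attitudes accepted as that group's

    def accept_group_belief(self, group, proposition, value):
        self.group_beliefs.setdefault(group, {})[proposition] = value

    def belief(self, proposition, acting_as=None):
        if acting_as is not None and proposition in self.group_beliefs.get(acting_as, {}):
            return self.group_beliefs[acting_as][proposition]
        return self.individual_beliefs.get(proposition)

agent = SocialAgent()
agent.individual_beliefs["deadline_is_flexible"] = True
agent.accept_group_belief("design_team", "deadline_is_flexible", False)
print(agent.belief("deadline_is_flexible", acting_as="design_team"))  # False
print(agent.belief("deadline_is_flexible"))                           # True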
Towards Reasoning with Partial Goal Satisfaction in Intelligent Agents
"... Abstract. A model of agency that supposes goals are either achieved fully or not achieved at all can be a poor approximation of scenarios aris-ing from the real world. In real domains of application, goals are achieved over time. At any point, a goal has reached a certain level of satisfac-tion, fro ..."
Abstract
- Add to MetaCart
(Show Context)
A model of agency that supposes goals are either achieved fully or not achieved at all can be a poor approximation of scenarios arising from the real world. In real domains of application, goals are achieved over time. At any point, a goal has reached a certain level of satisfaction, from nothing to full (completely achieved). This paper presents an abstract framework that can be taken as a basis for representing partial goal satisfaction in an intelligent agent. The richer representation enables agents to reason about partial satisfaction of the goals they are pursuing or that they are considering. In contrast to prior work on partial satisfaction in the agents literature, which investigates partiality from a logical perspective, we propose a higher-level framework based on metric functions that represent, among other things, the progress that has been made towards achieving a goal. We present an example to illustrate the kinds of reasoning enabled on the basis of our framework for partial goal satisfaction.
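A minimal illustration of the metric-function idea, assuming a goal exposes a progress value in [0, 1]; this is not the paper's framework, and the example goal is invented:

# Hedged sketch: a goal with a metric function mapping world state to a
# satisfaction level between 0 (nothing) and 1 (fully achieved).
class PartialGoal:
    def __init__(self, name, metric):
        self.name = name
        self.metric = metric   # callable: state -> float

    def satisfaction(self, state):
        return max(0.0, min(1.0, self.metric(state)))

# Invented example: "deliver 10 packages" counts as 60% satisfied once 6 are delivered.
deliver = PartialGoal("deliver_packages",
                      metric=lambda state: state["delivered"] / state["required"])
print(deliver.satisfaction({"delivered": 6, "required": 10}))  # 0.6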