Results 1–8 of 8
Distributed problem solving
AI Magazine, 2012
Abstract

Cited by 17 (13 self)
Broadly, distributed problem solving is a subfield within multiagent systems, where the focus is to enable multiple agents to work together to solve a problem. These agents are often assumed to be cooperative, that is, they are part of a team, or they are self-interested but incentives or disincentives have been applied such that the individual agent rewards are aligned with the team reward. We illustrate the motivations for distributed problem solving with an example. Imagine a decentralized channel-allocation problem in a wireless local area network (WLAN), where each access point (agent) in the WLAN needs to allocate itself a channel to broadcast such that no two access points with overlapping broadcast regions (neighboring agents) are allocated the same channel, to avoid interference. Figure 1 shows example mobile WLAN access points, where each access point is a Create robot fitted with a wireless CenGen radio card. Figure 2a shows an illustration of such a problem with three access points in a WLAN, where each oval ring represents the broadcast region of an access point. This problem can, in principle, be solved with a centralized approach by having each and every agent transmit all the relevant information, that is, the set of possible channels that the agent can allocate itself and its set of neighboring agents, to a centralized server. However, this centralized approach may incur unnecessary communication cost compared to a distributed one.
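The channel-allocation example above is essentially a graph-coloring constraint problem. A minimal sketch, assuming a hypothetical three-access-point topology like Figure 2a (the agent names, channels, and the sequential greedy rule are illustrative, not the paper's algorithm):

```python
# Sketch: WLAN channel allocation as graph coloring. Each access point
# (agent) picks a channel that none of its already-assigned neighbors use.
CHANNELS = [1, 6, 11]          # illustrative non-overlapping 2.4 GHz channels
neighbors = {                  # overlapping broadcast regions (Figure 2a style)
    "AP1": ["AP2"],
    "AP2": ["AP1", "AP3"],
    "AP3": ["AP2"],
}

def allocate(neighbors, channels):
    """Greedy pass: each agent avoids channels its neighbors already took."""
    assignment = {}
    for agent in neighbors:
        taken = {assignment[n] for n in neighbors[agent] if n in assignment}
        assignment[agent] = next(c for c in channels if c not in taken)
    return assignment

print(allocate(neighbors, CHANNELS))  # → {'AP1': 1, 'AP2': 6, 'AP3': 1}
```

Note that AP1 and AP3 may safely reuse a channel because their broadcast regions do not overlap; only neighboring agents are constrained. A distributed algorithm would reach such an assignment via message passing rather than this sequential loop.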
Distributed Gibbs: A Memory-Bounded Sampling-Based DCOP Algorithm
Abstract

Cited by 6 (6 self)
Researchers have used distributed constraint optimization problems (DCOPs) to model various multiagent coordination and resource allocation problems. Very recently, Ottens et al. proposed a promising new approach to solve DCOPs that is based on confidence bounds via their Distributed UCT (DUCT) sampling-based algorithm. Unfortunately, its memory requirement per agent is exponential in the number of agents in the problem, which prohibits it from scaling up to large problems. Thus, in this paper, we introduce a new sampling-based DCOP algorithm called Distributed Gibbs, whose memory requirement per agent is linear in the number of agents in the problem. Additionally, we show empirically that our algorithm finds solutions that are better than those of DUCT; computationally, it also runs faster than DUCT and solves some large problems that DUCT failed to solve due to memory limitations.
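The sampling idea underlying the paper can be illustrated with ordinary (centralized) Gibbs sampling on a tiny two-variable utility table; the utility values, domain, and step count below are assumptions for illustration, and the actual distributed protocol and its memory bound are defined in the paper:

```python
import math
import random

random.seed(0)
DOMAIN = [0, 1]

def utility(x1, x2):
    """Illustrative constraint table; the optimum is (1, 1) with utility 8."""
    return {(0, 0): 5, (0, 1): 2, (1, 0): 1, (1, 1): 8}[(x1, x2)]

def gibbs(steps=2000):
    """Resample each variable conditioned on the other, with probability
    proportional to exp(utility); track the best assignment seen."""
    x1, x2 = 0, 0
    best, best_util = (x1, x2), utility(x1, x2)
    for _ in range(steps):
        for var in (1, 2):
            weights = [math.exp(utility(v, x2) if var == 1 else utility(x1, v))
                       for v in DOMAIN]
            choice = random.choices(DOMAIN, weights=weights)[0]
            if var == 1:
                x1 = choice
            else:
                x2 = choice
        if utility(x1, x2) > best_util:
            best, best_util = (x1, x2), utility(x1, x2)
    return best, best_util

print(gibbs())  # with this seed, the sampler finds the optimum (1, 1)
```

In a DCOP setting each variable is held by a separate agent, so each agent only needs its own value and its neighbors' current values to resample, which is the intuition behind a linear per-agent memory requirement.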
Stochastic Dominance in Stochastic DCOPs for Risk-Sensitive Applications
In Proceedings of AAMAS, 2012
Abstract

Cited by 4 (2 self)
Distributed constraint optimization problems (DCOPs) are well-suited for modeling multiagent coordination problems where the primary interactions are between local subsets of agents. However, one limitation of DCOPs is the assumption that the constraint rewards are without uncertainty. Researchers have thus extended DCOPs to Stochastic DCOPs (SD-DCOPs), where rewards are sampled from known probability distribution reward functions, and introduced algorithms to find solutions with the largest expected reward. Unfortunately, such a solution might be very risky, that is, very likely to result in a poor reward. Thus, in this paper, we make three contributions: (1) we propose a stricter objective for SD-DCOPs, namely to find a solution with the most stochastically dominating probability distribution reward function; (2) we introduce an algorithm to find such solutions; and (3) we show that stochastically dominating solutions can indeed be less risky than expected reward maximizing solutions.
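A small illustration of why dominance is stricter than expected reward, using first-order stochastic dominance on discrete reward distributions (the two example distributions and the dominance variant shown are assumptions; the paper defines the exact notion it optimizes):

```python
# Sketch: first-order stochastic dominance for discrete reward distributions.
# Distribution A dominates B if A's CDF is never above B's, i.e. A is at
# least as likely to exceed every reward threshold.

def cdf(dist, x):
    """P(reward <= x) for a {reward: probability} dict."""
    return sum(p for r, p in dist.items() if r <= x)

def dominates(a, b):
    """True if `a` first-order stochastically dominates `b`."""
    support = sorted(set(a) | set(b))
    return all(cdf(a, x) <= cdf(b, x) for x in support)

safe = {10: 1.0}            # deterministic reward of 10
risky = {0: 0.5, 20: 0.5}   # same expected reward (10), but risky

print(dominates(safe, risky))  # False: neither dominates the other,
print(dominates(risky, safe))  # False: expectation alone hides the risk
```

Both distributions have expected reward 10, so an expectation-maximizing solver treats them as interchangeable, yet the risky one yields reward 0 half the time; a dominance-based objective can distinguish such cases.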
Decentralized Multi-Agent Reinforcement Learning in Average-Reward Dynamic DCOPs
In Proceedings of AAMAS, 1341–1342. International Foundation for Autonomous Agents and Multiagent Systems, 2014
Abstract

Cited by 3 (2 self)
Researchers have introduced the Dynamic Distributed Constraint Optimization Problem (Dynamic DCOP) formulation to model dynamically changing multiagent coordination problems, where a dynamic DCOP is a sequence of (static canonical) DCOPs, each partially different from the DCOP preceding it. Existing work typically assumes that the problem in each time step is decoupled from the problems in other time steps, which might not hold in some applications. In this paper, we introduce a new model, called Markovian Dynamic DCOPs (MD-DCOPs), where a DCOP is a function of the value assignments in the preceding DCOP. We also introduce a distributed reinforcement learning algorithm that balances exploration and exploitation to solve MD-DCOPs in an online manner.
Robust Distributed Constraint Reasoning
Abstract

Cited by 1 (0 self)
Distributed constraint reasoning (DCR) has recently generated much interest due to its ability to solve many real-world problems without centralizing all of the information. Many DCR algorithms, however, are prone to failure if even a single agent fails, creating a situation with not only a central point of failure, but with n points of failure! There are three main contributions of this work. First, we define the robust DCR problem space in terms of communication failures, agent failures, and observability of failed agents. Then we describe two new types of algorithm modifications and show where they and other algorithms fit into this problem space. Finally, we analyze these algorithms and discuss what future work is needed in this area.
Solving Distributed Constraint Optimization Problems Using Logic Programming
Abstract
This paper explores the use of answer set programming (ASP) in solving distributed constraint optimization problems (DCOPs). It makes the following contributions: (i) It shows how one can formulate DCOPs as logic programs; (ii) It introduces ASP-DPOP, the first DCOP algorithm that is based on logic programming; (iii) It experimentally shows that ASP-DPOP can be up to two orders of magnitude faster than DPOP (its imperative-programming counterpart) as well as solve some problems that DPOP fails to solve due to memory limitations; and (iv) It demonstrates the applicability of ASP in the wide array of multiagent problems currently modeled as DCOPs.
Decentralized Multi-Agent Reinforcement Learning in Average-Reward Dynamic DCOPs
Abstract
Researchers have introduced the Dynamic Distributed Constraint Optimization Problem (Dynamic DCOP) formulation to model dynamically changing multiagent coordination problems, where a dynamic DCOP is a sequence of (static canonical) DCOPs, each partially different from the DCOP preceding it. Existing work typically assumes that the problem in each time step is decoupled from the problems in other time steps, which might not hold in some applications. Therefore, in this paper, we make the following contributions: (i) We introduce a new model, called Markovian Dynamic DCOPs (MD-DCOPs), where the DCOP in the next time step is a function of the value assignments in the current time step; (ii) We introduce two distributed reinforcement learning algorithms, the Distributed RVI Q-learning algorithm and the Distributed R-learning algorithm, that balance exploration and exploitation to solve MD-DCOPs in an online manner; and (iii) We empirically evaluate them against an existing multi-armed bandit DCOP algorithm on dynamic DCOPs.
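The average-reward R-learning rule the paper's distributed variant builds on can be sketched in a single-agent form; the two-state dynamics, rewards, and learning rates below are illustrative assumptions, not the paper's MD-DCOP benchmark:

```python
import random

# Sketch of Schwartz-style R-learning: learn action values Q and an average
# reward estimate rho, updating rho only on greedy (exploitation) steps.
random.seed(1)
ACTIONS = [0, 1]

def step(state, action):
    """Toy dynamics: action 1 moves toward state 1, which pays off."""
    next_state = action
    reward = 10 if (state == 1 and action == 1) else 0
    return next_state, reward

Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
rho, alpha, beta, eps = 0.0, 0.1, 0.01, 0.1
state = 0
for _ in range(5000):
    greedy = max(ACTIONS, key=lambda a: Q[(state, a)])
    action = greedy if random.random() > eps else random.choice(ACTIONS)
    nxt, r = step(state, action)
    # Q update: reward relative to the average reward, plus bootstrapped value.
    target = r - rho + max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    if action == greedy:  # rho is only updated on greedy steps
        rho += beta * (r + max(Q[(nxt, a)] for a in ACTIONS)
                       - max(Q[(state, a)] for a in ACTIONS) - rho)
    state = nxt

print(round(rho, 1))  # rho should approach the optimal average reward
```

In the decentralized setting each agent maintains such estimates over its own variables and coordinates with neighbors, which is where balancing exploration and exploitation across a sequence of coupled DCOPs becomes the central difficulty.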