Results 1–10 of 17
Distributed Gibbs: A Memory-Bounded Sampling-Based DCOP Algorithm
Cited by 6 (6 self)
Abstract:
Researchers have used distributed constraint optimization problems (DCOPs) to model various multiagent coordination and resource allocation problems. Very recently, Ottens et al. proposed a promising new approach to solve DCOPs that is based on confidence bounds via their Distributed UCT (DUCT) sampling-based algorithm. Unfortunately, its memory requirement per agent is exponential in the number of agents in the problem, which prohibits it from scaling up to large problems. Thus, in this paper, we introduce a new sampling-based DCOP algorithm called Distributed Gibbs, whose memory requirement per agent is linear in the number of agents in the problem. Additionally, we show empirically that our algorithm finds better solutions than DUCT; computationally, it also runs faster than DUCT and solves some large problems that DUCT failed to solve due to memory limitations.
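To make the sampling idea concrete, here is a minimal centralized sketch of Gibbs sampling over a DCOP-style cost model. This is not the paper's distributed algorithm; the `local_cost` interface, the Boltzmann temperature, and all names are illustrative assumptions. It does show why memory stays linear in the number of variables: each step keeps only the current and best assignments.

```python
import math
import random

def gibbs_minimize(variables, domains, local_cost, iterations=500, temperature=0.5):
    """Toy centralized Gibbs sampler for a DCOP-style cost model.

    local_cost[v](d, assignment) gives the cost of setting v = d given the
    other variables' current values. Only the current and best assignments
    are stored, so memory is linear in the number of variables.
    """
    assignment = {v: random.choice(domains[v]) for v in variables}

    def total(a):
        return sum(local_cost[v](a[v], a) for v in variables)

    best, best_cost = dict(assignment), total(assignment)
    for _ in range(iterations):
        for v in variables:
            # Gibbs step: resample v from a Boltzmann distribution over its
            # domain, conditioned on all the other variables' current values.
            weights = [math.exp(-local_cost[v](d, assignment) / temperature)
                       for d in domains[v]]
            assignment[v] = random.choices(domains[v], weights=weights)[0]
        cost = total(assignment)
        if cost < best_cost:
            best, best_cost = dict(assignment), cost
    return best, best_cost
```

On a toy two-variable problem where `x` prefers 0 and `y` prefers to match `x`, the sampler quickly visits the zero-cost assignment and records it as the best one seen.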
Improving DPOP with branch consistency for solving distributed constraint optimization problems
In CP, 2014
Cited by 4 (4 self)
Abstract:
The DCOP model has gained momentum in recent years thanks to its ability to capture problems that are naturally distributed and cannot be realistically addressed in a centralized manner. Dynamic programming based techniques have been recognized to be among the most effective techniques for building complete DCOP solvers (e.g., DPOP). Unfortunately, they also suffer from a widely recognized drawback: their messages are exponential in size. Another limitation is that most current DCOP algorithms do not actively exploit hard constraints, which are common in many real problems. This paper addresses these two limitations by introducing an algorithm, called BrC-DPOP, that exploits arc consistency and a form of consistency that applies to paths in pseudo-trees to reduce the size of the messages. Experimental results show that BrC-DPOP uses messages that are up to one order of magnitude smaller than DPOP's, and that it scales well, solving problems that its counterpart cannot.
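For context on the dynamic-programming style that DPOP uses, here is an illustrative sketch of UTIL/VALUE propagation on a simple chain of variables. This is a toy stand-in, not BrC-DPOP; `dpop_chain`, `edge_cost`, and the chain topology are assumptions for illustration. Each UTIL entry maps a value of the variable above to the best cost achievable below it, which hints at why general DPOP messages grow exponentially with the separator size.

```python
def dpop_chain(domains, edge_cost):
    """Toy DPOP-style dynamic programming on a chain x0 - x1 - ... - x(n-1).

    domains[i] lists the values of x_i; edge_cost[i](a, b) is the cost of
    x_i = a together with x_(i+1) = b. Here each UTIL message has |domain|
    entries because the separator is a single variable; in general DPOP
    messages are exponential in the separator size.
    """
    n = len(domains)
    # UTIL phase (bottom-up): util[i][v] = best cost of the chain below x_i
    # when x_i takes value v.
    util = [None] * n
    util[n - 1] = {v: 0 for v in domains[n - 1]}
    for i in range(n - 2, -1, -1):
        util[i] = {a: min(edge_cost[i](a, b) + util[i + 1][b]
                          for b in domains[i + 1])
                   for a in domains[i]}
    # VALUE phase (top-down): the root picks its best value, children follow.
    assignment = [min(domains[0], key=lambda v: util[0][v])]
    for i in range(1, n):
        prev = assignment[-1]
        assignment.append(min(domains[i],
                              key=lambda b: edge_cost[i - 1](prev, b) + util[i][b]))
    return assignment, util[0][assignment[0]]
```

With a "not-equal" cost on each edge of a three-variable binary chain, the propagation recovers a zero-cost alternating assignment.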
Decentralized multiagent reinforcement learning in average-reward dynamic DCOPs (theoretical proofs)
2014
Cited by 3 (2 self)
Abstract:
Researchers have introduced the Dynamic Distributed Constraint Optimization Problem (Dynamic DCOP) formulation to model dynamically changing multiagent coordination problems, where a Dynamic DCOP is a sequence of (static canonical) DCOPs, each partially different from the DCOP preceding it. Existing work typically assumes that the problem in each time step is decoupled from the problems in other time steps, which might not hold in some applications. Therefore, in this paper, we make the following contributions: (i) we introduce a new model, called Markovian Dynamic DCOPs (MD-DCOPs), where the DCOP in the next time step is a function of the value assignments in the current time step; (ii) we introduce two distributed reinforcement learning algorithms, the Distributed RVI Q-learning algorithm and the Distributed R-learning algorithm, that balance exploration and exploitation to solve MD-DCOPs in an online manner; and (iii) we empirically evaluate them against an existing multi-armed bandit DCOP algorithm on dynamic DCOPs.
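As background for the R-learning variant mentioned above, here is a minimal single-agent tabular R-learning sketch with Schwartz-style average-reward updates. The toy MDP, the `step(s, a)` signature, and the learning rates are illustrative assumptions; the paper's algorithm is distributed and this sketch is not it.

```python
import random

def r_learning(num_states, num_actions, step, alpha=0.1, beta=0.01,
               steps=20000, epsilon=0.2):
    """Tabular R-learning sketch (Schwartz-style average-reward RL).

    step(s, a) -> (reward, next_state). R[s][a] holds relative action
    values and rho estimates the long-run average reward per step.
    """
    R = [[0.0] * num_actions for _ in range(num_states)]
    rho, s = 0.0, 0
    for _ in range(steps):
        greedy = max(range(num_actions), key=lambda a: R[s][a])
        # Epsilon-greedy balance of exploration and exploitation.
        a = random.randrange(num_actions) if random.random() < epsilon else greedy
        reward, s2 = step(s, a)
        if R[s][a] == max(R[s]):
            # The action taken was greedy: also refine the average-reward estimate.
            rho += beta * (reward + max(R[s2]) - max(R[s]) - rho)
        R[s][a] += alpha * (reward - rho + max(R[s2]) - R[s][a])
        s = s2
    return R, rho
```

On a two-state toy MDP where being in state 1 pays off and each action deterministically selects the next state, the learned greedy policy heads to state 1 and stays there.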
Modeling microgrid islanding problems as DCOPs
In North American Power Symposium (NAPS), 2013
Cited by 1 (0 self)
Abstract:
In this paper, we formulate the microgrid islanding problem as a distributed constraint optimization problem (DCOP) and investigate the feasibility of solving it using off-the-shelf DCOP algorithms. This paper puts forward the potential of the distributed constraint reasoning paradigm as a candidate for solving common microgrid problems.
Multi-Variable Agents Decomposition for DCOPs to Exploit Multi-Level Parallelism
Abstract:
Current DCOP algorithms suffer from a major limiting assumption—each agent can handle only a single variable of the problem—which limits their scalability. This paper proposes a novel Multi-Variable Agent (MVA) DCOP decomposition, which: (i) exploits co-locality of an agent's variables, allowing us to adopt efficient centralized techniques; (ii) enables the use of hierarchical parallel models, such as those based on GPGPUs; and (iii) empirically reduces the amount of communication required in several classes of DCOP algorithms. Experimental results show that our MVA decomposition outperforms non-decomposed DCOP algorithms in terms of network load and scalability.
Distributed Constraint Optimization Problems (DCOPs)
Abstract:
Researchers have recently introduced a promising new class of Distributed Constraint Optimization Problem (DCOP) algorithms that is based on sampling. This paradigm is very amenable to parallelization, since sampling algorithms require many samples to ensure convergence, and the sampling process can be designed to be executed in parallel. This paper presents GPU-based D-Gibbs (GD-Gibbs), which extends the Distributed Gibbs (D-Gibbs) sampling algorithm and harnesses the parallel computation power of GPUs to solve DCOPs. Experimental results show that GD-Gibbs is faster than several other benchmark algorithms on a distributed meeting scheduling problem.
Logic and Constraint Logic Programming for Distributed Constraint Optimization
Under consideration for publication in Theory and Practice of Logic Programming, 2003
Abstract:
The field of Distributed Constraint Optimization Problems (DCOPs) has gained momentum thanks to its suitability in capturing complex problems (e.g., multiagent coordination and resource allocation problems) that are naturally distributed and cannot be realistically addressed in a centralized manner. The state of the art in solving DCOPs relies on the use of ad-hoc infrastructures and ad-hoc constraint solving procedures. This paper investigates an infrastructure for solving DCOPs that is completely built on logic programming technologies. In particular, the paper explores the use of a general constraint solver (a constraint logic programming system in this context) to handle the agent-level constraint solving. The preliminary experiments show that logic programming provides benefits over a state-of-the-art DCOP system, in terms of performance and scalability, opening the door to the use of more advanced technology (e.g., search strategies and complex constraints) for solving DCOPs.
Incremental DCOP Search Algorithms for Solving Dynamic DCOP Problems
Abstract:
Distributed constraint optimization (DCOP) problems are well-suited for modeling multiagent coordination problems. However, the model captures only static problems, which do not change over time. Consequently, researchers have introduced the Dynamic DCOP (DDCOP) model to capture dynamic problems. In this paper, we make two key contributions: (a) a procedure to reason with the incremental changes in DDCOPs and (b) an incremental pseudo-tree construction algorithm that can be used by DCOP algorithms such as any-space ADOPT and any-space BnB-ADOPT to solve DDCOPs. Due to the incremental reasoning employed, our experimental results show that any-space ADOPT and any-space BnB-ADOPT are up to 42% and 38% faster, respectively, with the incremental procedure and the incremental pseudo-tree reconstruction algorithm than without them.
Large Neighborhood Search with Quality Guarantees for Distributed Constraint Optimization Problems
Ferdinando Fioretto, Federico Campeotto,
Abstract:
The field of Distributed Constraint Optimization has gained momentum in recent years, thanks to its ability to address various applications related to multiagent cooperation. Nevertheless, solving Distributed Constraint Optimization Problems (DCOPs) optimally is NP-hard. Therefore, in large-scale applications, incomplete DCOP algorithms are desirable. Current incomplete search techniques have subsets of the following limitations: (a) they find local minima without quality guarantees; (b) they provide loose quality assessments; or (c) they cannot exploit certain problem structures, such as hard constraints. Therefore, capitalizing on strategies from the centralized constraint reasoning community, we propose to adapt the Large Neighborhood Search (LNS) strategy to solve DCOPs, resulting in the general Distributed LNS (D-LNS) framework. The characteristics of this framework are as follows: (i) it is anytime; (ii) it provides quality guarantees by refining online upper and lower bounds on its solution quality; and (iii) it can learn online the best neighborhood to explore. Experimental results show that D-LNS outperforms other incomplete DCOP algorithms on both random and scale-free network instances.
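To illustrate the destroy/repair loop that LNS is built on, here is a minimal centralized sketch. It is not D-LNS itself; `lns_minimize`, the greedy repair, the destroy fraction, and the acceptance rule are all illustrative assumptions.

```python
import random

def lns_minimize(variables, domains, cost, destroy_frac=0.3, rounds=200):
    """Minimal centralized Large Neighborhood Search sketch.

    cost(assignment) returns the total cost of a full assignment (a dict).
    Each round "destroys" a random subset of variables and "repairs" them
    greedily; the candidate replaces the incumbent only if it does not
    worsen the cost, which gives the loop its anytime behavior.
    """
    assignment = {v: random.choice(domains[v]) for v in variables}
    best_cost = cost(assignment)
    k = max(1, int(destroy_frac * len(variables)))
    for _ in range(rounds):
        candidate = dict(assignment)
        for v in random.sample(variables, k):  # destroy: choose vars to redo
            # repair: greedy best value given the rest of the assignment
            candidate[v] = min(domains[v],
                               key=lambda d: cost({**candidate, v: d}))
        new_cost = cost(candidate)
        if new_cost <= best_cost:
            assignment, best_cost = candidate, new_cost
    return assignment, best_cost
```

On a tiny graph-coloring-style chain, the loop reliably reaches a zero-cost (properly colored) assignment within a few rounds.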
Exploiting GPUs in Solving (Distributed) Constraint Optimization Problems with Dynamic Programming
Abstract:
This paper proposes the design and implementation of a dynamic programming based algorithm for (distributed) constraint optimization that exploits modern massively parallel architectures, such as those found in modern Graphical Processing Units (GPUs). The paper studies the proposed algorithm in both centralized and distributed optimization contexts. The experimental analysis, performed on unstructured and structured graphs, shows the advantages of employing GPUs, resulting in enhanced performance and scalability.