Results 1–10 of 23
MDPOP: Faithful distributed implementation of efficient social choice problems
 In AAMAS’06: Autonomous Agents and Multiagent Systems
, 2006
Abstract
Cited by 41 (15 self)
In the efficient social choice problem, the goal is to assign values, subject to side constraints, to a set of variables to maximize the total utility across a population of agents, where each agent has private information about its utility function. In this paper we model the social choice problem as a distributed constraint optimization problem (DCOP), in which each agent can communicate with other agents that share an interest in one or more variables. Whereas existing DCOP algorithms can be easily manipulated by an agent, either by misreporting private information or deviating from the algorithm, we introduce MDPOP, the first DCOP algorithm that provides a faithful distributed implementation for efficient social choice. This provides a concrete example of how the methods of mechanism design can be unified with those of distributed optimization. Faithfulness ensures that no agent can benefit by unilaterally deviating from any aspect of the protocol, neither information revelation, computation, nor communication, and whatever the private information of other agents. We allow for payments by agents to a central bank, which is the only central authority that we require. To achieve faithfulness, we carefully integrate the Vickrey-Clarke-Groves (VCG) mechanism with the DPOP algorithm, such that each agent is only asked to perform computation, report …
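The social-choice objective the abstract describes — maximize total utility over constrained variable assignments, with each agent's utility private — can be sketched as a toy centralized problem. All variable names and utility functions below are illustrative, not from MDPOP, and the exhaustive solver only shows the objective, not the distributed mechanism:

```python
from itertools import product

# Illustrative DCOP: two binary variables shared by two agents.
domains = {"x1": [0, 1], "x2": [0, 1]}

# Each agent holds a private utility over the variables it cares about.
def u_agent_a(assign):  # hypothetical: agent A prefers x1 == x2
    return 3 if assign["x1"] == assign["x2"] else 0

def u_agent_b(assign):  # hypothetical: agent B prefers x2 == 1
    return 2 if assign["x2"] == 1 else 0

def solve(domains, utilities):
    """Exhaustively maximize the sum of agent utilities
    (the efficient social-choice objective)."""
    names = list(domains)
    best, best_u = None, float("-inf")
    for values in product(*(domains[n] for n in names)):
        assign = dict(zip(names, values))
        total = sum(u(assign) for u in utilities)
        if total > best_u:
            best, best_u = assign, total
    return best, best_u

best, value = solve(domains, [u_agent_a, u_agent_b])
```

MDPOP's point is that this optimum is reached distributedly, with VCG-style payments making truthful participation a best response; the sketch only fixes what "efficient" means.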
BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm
 In Proceedings of AAMAS
, 2008
Abstract
Cited by 34 (11 self)
Abstract. Distributed constraint optimization problems (DCOPs) are a popular way of formulating and solving agent-coordination problems. It is often desirable to solve DCOPs optimally with memory-bounded and asynchronous algorithms. We thus introduce Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP algorithm that uses the message-passing and communication framework of ADOPT, a well-known memory-bounded asynchronous DCOP algorithm, but changes the search strategy of ADOPT from best-first search to depth-first branch-and-bound search. Our experimental results show that BnB-ADOPT is up to one order of magnitude faster than ADOPT on a variety of large DCOPs and faster than NCBB, a memory-bounded synchronous DCOP algorithm, on most of these DCOPs.
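The search strategy BnB-ADOPT adopts — depth-first branch-and-bound with pruning against the incumbent — can be illustrated on a tiny centralized constraint graph. This sketch shows only the search strategy; BnB-ADOPT itself is distributed and asynchronous, and the constraints here are made up:

```python
import math

# Hypothetical cost constraints over three binary variables.
constraints = [
    (("x", "y"), lambda a: 0 if a["x"] != a["y"] else 5),
    (("y", "z"), lambda a: abs(a["y"] - a["z"])),
]

def lower_bound(assign):
    """Cost of constraints whose scope is fully assigned; a valid lower
    bound on any completion because all costs here are nonnegative."""
    return sum(f(assign) for scope, f in constraints
               if all(v in assign for v in scope))

def dfbnb(order, domains, assign, best):
    """Depth-first branch-and-bound: extend the assignment variable by
    variable, pruning any branch whose bound cannot beat the incumbent."""
    if len(assign) == len(order):              # complete assignment
        return dict(assign), lower_bound(assign)
    var = order[len(assign)]
    for val in domains[var]:
        assign[var] = val
        if lower_bound(assign) < best[1]:      # prune otherwise
            cand = dfbnb(order, domains, assign, best)
            if cand[1] < best[1]:
                best = cand
        del assign[var]
    return best

solution, cost = dfbnb(list("xyz"), {v: [0, 1] for v in "xyz"},
                       {}, (None, math.inf))
```

Unlike best-first search (ADOPT), this strategy never revisits abandoned partial solutions, which is what enables the memory bound without ADOPT's timeout-and-reconstruct overhead.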
Asynchronous Algorithms for Approximate Distributed Constraint Optimization with Quality Bounds
Abstract
Cited by 11 (3 self)
Distributed Constraint Optimization (DCOP) is a popular framework for cooperative multi-agent decision making. DCOP is NP-hard, so an important line of work focuses on developing fast incomplete solution algorithms for large-scale applications. One of the few incomplete algorithms to provide bounds on solution quality is k-size optimality, which defines a local optimality criterion based on the size of the group of deviating agents. Unfortunately, the lack of a general-purpose algorithm and the commitment to forming groups based solely on group size have limited the use of k-size optimality. This paper introduces t-distance optimality, which departs from k-size optimality by using graph distance as an alternative criterion for selecting groups of deviating agents. This throws open a new research direction into the trade-offs between different group selection …
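The t-distance criterion selects deviating groups by graph distance in the constraint graph rather than by group size. A minimal sketch of that selection step (a plain BFS; the graph below is hypothetical and this is not the paper's full algorithm):

```python
from collections import deque

def t_distance_group(graph, center, t):
    """Return all agents within graph distance t of `center`, via BFS
    over the constraint graph (the group-selection criterion of
    t-distance optimality)."""
    dist = {center: 0}
    frontier = deque([center])
    while frontier:
        node = frontier.popleft()
        if dist[node] == t:          # do not expand past radius t
            continue
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                frontier.append(nbr)
    return set(dist)

# Hypothetical chain constraint graph a - b - c - d - e.
chain = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
         "d": ["c", "e"], "e": ["d"]}
group = t_distance_group(chain, "c", 1)
```

A k-size group of the same cardinality could scatter across the graph; anchoring the group to a center's neighborhood is what changes the trade-off the paper studies.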
Anytime local search for distributed constraint optimization
 In Twenty-Third AAAI Conference on Artificial Intelligence
, 2008
Abstract
Cited by 10 (2 self)
Most former studies of Distributed Constraint Optimization Problems (DisCOPs) considered only complete search algorithms, which are practical only for relatively small problems. Distributed local search algorithms can be used for solving DisCOPs. However, because of the differences between the global evaluation of a system’s state and the private evaluation of states by agents, agents are unaware of the global best state explored by the algorithm. Previous attempts to use local search algorithms for solving DisCOPs reported the state held by the system at the termination of the algorithm, which was not necessarily the best state explored. A general framework for implementing distributed local search algorithms for DisCOPs is proposed. The proposed framework makes use of a BFS-tree in order to accumulate the costs of the system’s state in its different steps and to propagate the detection of a new best state when it is found. The resulting framework enhances local search algorithms for DisCOPs with the anytime property. The proposed framework does not require additional network load. Agents are required to hold only a small (linear) amount of additional space (besides the requirements of the algorithm in use). The proposed framework preserves privacy at a higher level than complete DisCOP algorithms which make use of a pseudo-tree (ADOPT, DPOP).
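The anytime property the framework adds — report the best state ever visited, not whatever state the search ends in — can be shown with a centralized stand-in. The paper achieves this distributedly by accumulating costs over a BFS-tree; this sketch, with a made-up bit-flipping problem, only tracks the global best directly:

```python
import random

def anytime_local_search(cost, neighbors, start, steps, seed=0):
    """Local search whose moves may go uphill, but which remembers the
    best state ever visited, so the reported solution is never worse
    than the final state (the anytime property)."""
    rng = random.Random(seed)
    state = start
    best, best_cost = state, cost(state)
    for _ in range(steps):
        state = rng.choice(neighbors(state))   # random, possibly worse move
        c = cost(state)
        if c < best_cost:                      # record a new global best
            best, best_cost = state, c
    return best, best_cost

# Hypothetical problem: minimize the number of 1-bits; a neighbor flips one bit.
def flip_neighbors(bits):
    return [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(len(bits))]

best, best_cost = anytime_local_search(sum, flip_neighbors, (1, 1, 1, 1), 50)
```

The distributed difficulty, which this sketch hides, is that no agent sees `cost(state)` globally; the BFS-tree is what lets agents detect and agree on a new best without extra messages.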
Distributed Constraint Optimization with Structured Resource Constraints
Abstract
Cited by 9 (1 self)
Distributed constraint optimization (DCOP) provides a framework for coordinated decision making by a team of agents. Often, during the decision making, capacity constraints on agents’ resource consumption must be taken into account. To address such scenarios, an extension of DCOP, Resource-Constrained DCOP, has been proposed. However, certain types of resources have an additional structure associated with them, and exploiting it can result in more efficient algorithms than are possible with a general framework. One example is distribution networks, where the flow of a commodity from sources to sinks is limited by the flow capacity of edges. We present a new model of structured resource constraints that exploits the acyclicity and the flow conservation property of distribution networks. We show how this model can be used in efficient algorithms for finding the optimal flow configuration in distribution networks, an essential problem in managing power distribution networks. Experiments demonstrate the efficiency and scalability of our approach on publicly available benchmarks and compare favorably against a specialized solver for this task. Our results significantly extend the effectiveness of distributed constraint optimization for practical multi-agent settings.
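The structure being exploited — acyclicity plus flow conservation — pins down the flow on each edge of a tree-shaped network: the edge into a node must carry exactly its subtree's total demand. A feasibility check built on that observation (an illustrative sketch with a made-up network, not the paper's algorithm):

```python
def feasible(children, demand, capacity, root):
    """On an acyclic distribution network, flow conservation forces the
    flow on the edge into each node to equal the total demand of its
    subtree; feasibility is then just a capacity check per edge."""
    def subtree_demand(node):
        return demand.get(node, 0) + sum(
            subtree_demand(c) for c in children.get(node, ()))
    def edges_ok(node):
        return all(subtree_demand(c) <= capacity[(node, c)] and edges_ok(c)
                   for c in children.get(node, ()))
    return edges_ok(root)

# Hypothetical network: source s feeds a and b; a feeds c.
net = {"s": ["a", "b"], "a": ["c"]}
cap = {("s", "a"): 2, ("s", "b"): 2, ("a", "c"): 1}
ok = feasible(net, {"a": 1, "b": 2, "c": 1}, cap, "s")
```

Because conservation removes the flow variables as free choices, the optimization can search only over demand decisions, which is the source of the efficiency the abstract claims.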
E[DPOP]: Distributed constraint optimization under stochastic uncertainty using collaborative sampling
, 2009
Abstract
Cited by 8 (4 self)
Abstract. Many applications that require distributed optimization also include uncertainty about the problem and the optimization criteria themselves. However, current approaches to distributed optimization assume that the problem is entirely known before optimization is carried out, while approaches to optimization under uncertainty have been investigated only for centralized algorithms. This paper introduces the framework of Distributed Constraint Optimization under Stochastic Uncertainty (StochDCOP), in which random variables with known probability distributions are used to model sources of uncertainty. Our main novel contribution is a distributed procedure called collaborative sampling, which we use to produce several new versions of the DPOP algorithm for StochDCOPs. We evaluate the benefits of collaborative sampling over the simple approach in which each agent samples the random variables independently. We also show that collaborative sampling can be used to implement a new, distributed version of the consensus algorithm, a well-known algorithm for centralized, online stochastic optimization in which the solution chosen is the one that is optimal in most cases, rather than the one that maximizes the expected utility.
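The contrast between independent and collaborative sampling can be made concrete: if agents agree on one shared set of samples of a random variable, their utility estimates are evaluated against the same scenarios and stay mutually consistent. A sketch with a hypothetical uniform random variable and made-up utilities (not E[DPOP]'s actual protocol):

```python
import random

def shared_samples(draw, k, seed=42):
    """Collaborative sampling, illustratively: all agents use one common
    list of k samples instead of each sampling independently."""
    rng = random.Random(seed)
    return [draw(rng) for _ in range(k)]

def expected_utility(utility, samples):
    """Sample-average estimate of E[utility]."""
    return sum(utility(s) for s in samples) / len(samples)

# Hypothetical random variable: uniform on [0, 1).
samples = shared_samples(lambda rng: rng.random(), 1000)

# Two agents with complementary utilities evaluate the SAME samples,
# so their estimates are exactly consistent (here they sum to 1).
u_a = expected_utility(lambda s: s, samples)
u_b = expected_utility(lambda s: 1 - s, samples)
```

With independent sampling, `u_a + u_b` would only approximately equal 1, and the agents' views of the same scenario could disagree — the inconsistency collaborative sampling is designed to remove.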
Caching schemes for DCOP search algorithms
 In Proceedings of AAMAS
, 2009
Abstract
Cited by 4 (3 self)
Distributed Constraint Optimization (DCOP) is useful for solving agent-coordination problems. Any-space DCOP search algorithms require only a small amount of memory but can be sped up by caching information. However, their current caching schemes do not exploit the cached information when deciding which information to preempt from the cache when a new piece of information needs to be cached. Our contributions are threefold: (1) we frame the problem as an optimization problem; (2) we introduce three new caching schemes (MaxPriority, MaxEffort and MaxUtility) that exploit the cached information in a DCOP-specific way; (3) we evaluate how the resulting speedup depends on the search strategy of the DCOP search algorithm. Our experimental results show that, on all tested DCOP problem classes, our MaxEffort and MaxUtility schemes speed up ADOPT (which uses best-first search) more than the other tested caching schemes, while our MaxPriority scheme speeds up BnB-ADOPT (which uses depth-first branch-and-bound search) at least as much as the other tested caching schemes.
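The eviction mechanics underlying such schemes can be sketched as a bounded cache that preempts the lowest-scoring entry. The scores here are placeholders for whatever DCOP-specific priority a scheme like MaxPriority, MaxEffort or MaxUtility would compute; the code does not reproduce those definitions:

```python
def cache_put(cache, key, value, score, capacity):
    """Bounded cache keyed by score: when full, evict the lowest-scoring
    entry, but only if the newcomer scores higher. Returns the evicted
    key, or the new key itself if it was rejected, or None if no
    eviction was needed. (Scores stand in for the paper's DCOP-specific
    priorities; this sketch shows only the preemption rule.)"""
    if key in cache or len(cache) < capacity:
        cache[key] = (value, score)
        return None
    victim = min(cache, key=lambda k: cache[k][1])
    if cache[victim][1] < score:
        del cache[victim]
        cache[key] = (value, score)
        return victim
    return key  # newcomer scored too low; not cached

cache = {}
cache_put(cache, "ctx_a", "cost_a", 1, capacity=2)
cache_put(cache, "ctx_b", "cost_b", 3, capacity=2)
evicted = cache_put(cache, "ctx_c", "cost_c", 2, capacity=2)
```

The paper's point is precisely that the score should be computed from the cached information itself (e.g. how likely a context is to be revisited, or how expensive it was to derive), rather than by a generic policy like LRU.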
Measuring Distributed Constraint Optimization algorithms
Abstract
Cited by 3 (1 self)
Abstract. Complete algorithms for solving DisCOPs have been a major focus of research in the DCR community in the last few years. The properties of these algorithms belong to very different categories: algorithms differ in their degree of asynchronicity, in the method of their combinatorial part, and in how they divide the problem into subparts. The wide variety of different families of algorithms makes it hard to find a uniform method for measuring and comparing their performance. The present paper proposes a uniform performance scale that is applicable to all DisCOP algorithms. The proposed performance measure enables an evaluation of the different DisCOP algorithms on a uniform scale, which has not previously been published. Preliminary results are presented that display the hierarchy of DisCOP search algorithms according to their performance on random DisCOPs.
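One common uniform measure in this literature counts non-concurrent constraint checks (NCCCs) with Lamport-style logical clocks: each agent counts its own checks, and a received message lifts the receiver's counter to the sender's, so concurrent work is not double-counted. A minimal sketch of that accounting, not necessarily the exact scale this paper proposes:

```python
class Agent:
    """Logical-clock accounting of constraint checks (NCCC-style):
    local checks increment the counter; receiving a message raises it
    to the sender's value, so the run's maximum counter approximates
    the longest non-concurrent chain of work."""
    def __init__(self):
        self.nccc = 0
    def check_constraint(self):
        self.nccc += 1
    def receive(self, sender):
        self.nccc = max(self.nccc, sender.nccc)

a, b = Agent(), Agent()
for _ in range(5):
    a.check_constraint()   # a performs 5 checks
b.receive(a)               # b's clock catches up to a's
for _ in range(3):
    b.check_constraint()   # b's 3 checks happen after a's, not alongside
```

Work b did before receiving a's message would not have raised the final maximum, which is why the measure rewards asynchronicity in a way raw check counts do not.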
Applying interchangeability to complex local problems in distributed constraint reasoning
 In Proc. Workshop on Distributed Constraint Reasoning, AAMAS
, 2006
Abstract
Cited by 2 (1 self)
Abstract. Many algorithms for distributed constraint problems assume each agent has a single variable. For problems with multiple variables per agent, one standard approach is to transform each agent’s local problem by defining a single new variable whose domain is the set of all local solutions, and reformulating the inter-agent constraints accordingly. We propose two general improvements to this method intended to (i) reduce problem size by removing interchangeable and dominated values from the new domains, and (ii) speed up search by identifying values that are interchangeable with respect to inter-agent constraints.
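Interchangeability with respect to inter-agent constraints means two values behave identically under every external constraint, for every assignment of the external variables, so one of them can be dropped. A brute-force illustrative check (the variables, constraint, and contexts below are made up; the paper's method is more refined than enumerating all contexts):

```python
def interchangeable_values(domain, var, external_constraints, contexts):
    """Group values of `var` whose behaviour is identical under every
    inter-agent constraint across all external contexts: values with the
    same signature of constraint outcomes are interchangeable."""
    signature = {}
    for val in domain:
        sig = tuple(c({var: val, **ctx}) for c in external_constraints
                    for ctx in contexts)
        signature.setdefault(sig, []).append(val)
    return list(signature.values())

# Hypothetical setup: x in {0,1,2}; one inter-agent constraint on x and
# an external variable y that only distinguishes x == 0 from x >= 1.
groups = interchangeable_values(
    [0, 1, 2], "x",
    [lambda a: min(a["x"], 1) != a["y"]],
    [{"y": 0}, {"y": 1}])
```

Here 1 and 2 fall into one group, so the compiled domain can keep a single representative — exactly the kind of reduction improvement (i) in the abstract targets.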
A comparison of approaches to handling complex local problems in DCOP
, 2006
Abstract
Cited by 2 (0 self)
Many distributed constraint optimisation algorithms require each agent to have a single variable. For agents with multiple variables, there are two standard approaches: decomposition – for each variable in each local problem, create a unique agent to manage it; and compilation – compile the local problem down to a new variable whose domain is the set of all local solutions. We compare these two approaches with each other and with a modified compilation approach that uses dominance and interchangeabilities to reduce problem size and speed up search. Our preliminary results show: (i) the basic compilation is almost never competitive; (ii) the modified compilation gives significant improvements over the other methods as the size and complexity of each agent’s internal problem grow, as long as the number of inter-agent constraints and the domain size of the variables remain small; (iii) the decomposition approach is more appropriate as the number of inter-agent constraints and the domain size of the variables increase, as long as the overall problem size is small.
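The compilation approach described above can be sketched directly: enumerate the agent's locally consistent assignments and make that set the domain of a single new variable. The local problem below is hypothetical, and this basic version — before the dominance and interchangeability pruning the paper adds — is exactly the one result (i) finds uncompetitive:

```python
from itertools import product

def compile_local_problem(local_vars, domains, local_constraints):
    """Basic compilation: replace an agent's variables with one new
    variable whose domain is the set of all locally consistent
    assignments (each domain value is a tuple over `local_vars`)."""
    names = list(local_vars)
    compiled_domain = []
    for values in product(*(domains[v] for v in names)):
        assign = dict(zip(names, values))
        if all(c(assign) for c in local_constraints):
            compiled_domain.append(tuple(values))
    return names, compiled_domain

# Hypothetical local problem: two binary variables that must differ.
names, compiled = compile_local_problem(
    ["a", "b"], {"a": [0, 1], "b": [0, 1]},
    [lambda s: s["a"] != s["b"]])
```

The compiled domain grows with the product of the local domains, which is why the trade-off against decomposition hinges on the size of each agent's internal problem versus the number of inter-agent constraints.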