Results 1–6 of 6
Coordination of First Responders Under Communication and Resource Constraints (Short Paper)
Abstract

Cited by 8 (4 self)
This paper discusses the application of distributed constraint optimization to coordination in disaster management situations under suboptimal network conditions. It presents an example system for the problem of shelter assignment and outlines some of the challenges and future research directions that must be addressed before real-world deployment of distributed constraint optimization becomes a reality.
On the ratio of communications to computation in DCR efficiency metrics
Abstract
We propose a way to define the most expensive operation to be used in evaluations of complexity and efficiency for simulated distributed constraint reasoning (DCR) algorithms. We also report experiments showing that the cost associated with a constraint check, even within the same algorithm, depends on the problem size. DCR research has seen heated debate regarding the correct way to evaluate the efficiency of simulated algorithms. DCR has to accommodate two established practices coming from very different fields: distributed computing and constraint reasoning. The efficiency of distributed algorithms is typically evaluated in terms of network load and overall computation time, while many (synchronous) algorithms are evaluated in terms of the number of rounds they require. Constraint reasoning research evaluates efficiency in terms of constraint checks and visited search-tree nodes. We argue that an algorithm has to be evaluated from the point of view of specific operating points, namely possible or targeted application scenarios. We then show how to report efficiency for a given operating point based on simulation. Additionally, new experiments reported here show that the cost of a constraint check varies with the size of the problem, and we discuss the implications of this phenomenon.
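The operating-point idea in this abstract can be illustrated with a small cost model: the same simulated run (fixed rounds, sequential constraint checks, and messages) yields very different estimated wall-clock costs depending on the deployment scenario. This is a hedged sketch; the function name, parameters, and all numeric values below are illustrative assumptions, not figures from the paper.

```python
# Sketch: evaluating a simulated DCR algorithm at a given "operating
# point" (a possible or targeted application scenario). All names and
# numbers are illustrative assumptions, not values from the paper.

def simulated_cost(rounds, sequential_checks, messages,
                   latency_s, t_check_s, t_msg_s):
    """Estimated wall-clock time of one simulated run.

    rounds            -- synchronous rounds executed
    sequential_checks -- constraint checks on the longest causal chain
    messages          -- messages on that chain
    latency_s         -- one-way network latency at this operating point
    t_check_s         -- assumed cost of a single constraint check
    t_msg_s           -- assumed per-message processing overhead
    """
    return (rounds * latency_s
            + sequential_checks * t_check_s
            + messages * t_msg_s)

# The same simulated run, evaluated at two operating points:
lan = simulated_cost(120, 50_000, 400, latency_s=1e-4, t_check_s=1e-7, t_msg_s=1e-5)
wan = simulated_cost(120, 50_000, 400, latency_s=5e-2, t_check_s=1e-7, t_msg_s=1e-5)
print(lan, wan)  # on the high-latency network, rounds dominate the cost
```

Under these assumed parameters the LAN scenario is dominated by computation while the WAN scenario is dominated by round latency, which is why the abstract argues that efficiency must be reported per operating point rather than with a single metric.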
Constant cost of the computation-unit in efficiency graphs
, 2008
Abstract
This article identifies and corrects a commonly held misconception about the scalability evaluation of distributed constraint reasoning algorithms. We show how to ensure a constant cost for the computation-unit in graphs depicting the number of (sequential) computation-units at different problem sizes. This is needed for a meaningful evaluation of scalability and efficiency, especially for distributed computations, where it is an assumption of the measurement. We report an empirical evaluation with ADOPT revealing that the computation cost associated with a constraint check (commonly used, and assumed constant, in ENCCC evaluations) actually varies with the problem size, by orders of magnitude. This flaw makes such skewed graphs difficult to interpret. We searched for methods to fix this problem and report a solution. We started from the hypothesis that the variation of the cost associated with a constraint check is due to the fact that the innermost loops of some common constraint solvers like ADOPT do not consist of constraint checks, but of processing search contexts (i.e., other data structures). We therefore propose computation-units based on a basket of weighted constraint checks and context-processing operations. Experimental evaluation shows that we obtain a constant cost of the computation-unit, confirming our hypothesis and offering a better methodology for efficiency and scalability evaluation.
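The "basket" computation-unit this abstract proposes can be sketched as a counter that charges a weighted mix of constraint checks and context-processing operations instead of raw constraint checks alone. The class name and the weights below are illustrative assumptions; the article's calibrated weights are not reproduced here.

```python
# Hedged sketch of a "basket" computation-unit: count a weighted mix of
# constraint checks and context-processing operations, rather than raw
# constraint checks (whose real cost was found to vary with problem
# size). Weights here are illustrative assumptions only.

class BasketCounter:
    def __init__(self, w_check=1.0, w_context=3.0):
        self.w_check = w_check      # assumed weight of one constraint check
        self.w_context = w_context  # assumed weight of one context operation
        self.checks = 0
        self.context_ops = 0

    def count_check(self):
        self.checks += 1

    def count_context_op(self):
        self.context_ops += 1

    def units(self):
        # Total computation-units charged so far.
        return self.w_check * self.checks + self.w_context * self.context_ops

# A solver instrumented this way would call the counters from its inner
# loops; here we simulate 100 checks and 40 context operations.
counter = BasketCounter()
for _ in range(100):
    counter.count_check()
for _ in range(40):
    counter.count_context_op()
print(counter.units())  # 100*1.0 + 40*3.0 = 220.0
```

Because solvers like ADOPT spend much of their inner-loop time processing search contexts rather than checking constraints, weighting both operation kinds is what lets the resulting unit stay roughly constant across problem sizes.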
Scalability in Constraints: Are you comparing apples and oranges?
, 2008
Abstract
Here we show how to ensure a constant cost for the computation-unit in graphs depicting the number of (sequential) computation-units at different problem sizes. This is needed for a meaningful evaluation of scalability and efficiency, especially for distributed computations, where it is an assumption of the measurement. We report an empirical evaluation with ADOPT revealing that the computation cost associated with a constraint check (commonly used, and assumed constant, in ENCCC evaluations) actually varies with the problem size, by orders of magnitude. This flaw makes such skewed graphs difficult to interpret. We searched for methods to fix this problem and report a solution. We started from the hypothesis that the variation of the cost associated with a constraint check is due to the fact that the innermost loops of some common constraint solvers like ADOPT do not consist of constraint checks, but of processing search contexts (i.e., other data structures). We therefore propose computation-units based on a basket of weighted constraint checks and context-processing operations. Experimental evaluation shows that we obtain a constant cost of the computation-unit, confirming our hypothesis and offering a better methodology for efficiency and scalability evaluation.