Results 1–10 of 64
Methods for Task Allocation Via Agent Coalition Formation
, 1998
Abstract

Cited by 272 (21 self)
Task execution in multiagent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform it. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However, it may also be beneficial when groups perform more efficiently than single agents do. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximat...
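The abstract above describes coalition formation only at a high level. As a toy illustration in that spirit (not the paper's algorithms), the sketch below greedily assigns each task the smallest coalition of still-unassigned agents whose combined capabilities cover the task's requirements; all names, capability sets, and the brute-force coalition search are hypothetical.

```python
# Toy task allocation via coalition formation. Each task requires a set of
# capabilities; a coalition is any group of agents whose combined
# capabilities cover the task. Illustrative only, not the paper's method.
from itertools import combinations

def form_coalition(agents, required):
    """Return the smallest coalition (by member count) whose combined
    capabilities cover `required`, or None if no coalition suffices.
    Brute-force enumeration, exponential in the number of agents."""
    names = list(agents)
    for size in range(1, len(names) + 1):
        for group in combinations(names, size):
            combined = set().union(*(agents[a] for a in group))
            if required <= combined:
                return set(group)
    return None

def allocate(agents, tasks):
    """Greedily assign each task a coalition of still-free agents."""
    free = dict(agents)
    allocation = {}
    for task, required in tasks.items():
        coalition = form_coalition(free, required)
        if coalition is None:
            allocation[task] = None      # task cannot be covered
            continue
        allocation[task] = coalition
        for a in coalition:              # disjoint coalitions: remove members
            del free[a]
    return allocation
```

Because coalitions are enumerated in order of increasing size, a single agent holding all required capabilities is preferred over any multi-agent coalition.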
Static Scheduling Algorithms for Allocating Directed Task Graphs to Multiprocessors
, 1999
Abstract

Cited by 208 (4 self)
Devices]: Modes of Computation - Parallelism and concurrency. General Terms: Algorithms, Design, Performance, Theory. Additional Key Words and Phrases: Automatic parallelization, DAG, multiprocessors, parallel processing, software tools, static scheduling, task graphs. This research was supported by the Hong Kong Research Grants Council under contract numbers HKUST 734/96E, HKUST 6076/97E, and HKU 7124/99E. Authors' addresses: Y.K. Kwok, Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong; email: ykwok@eee.hku.hk; I. Ahmad, Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. © 2000 ACM 0360-0300/99/1200-0406 $5.00. ACM Computing Surveys, Vol. 31, No. 4, December 1999.
Instruction-Level Parallel Processing: History, Overview and Perspective
, 1992
Abstract

Cited by 171 (0 self)
Instruction-level parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by causing individual machine operations to execute in parallel. Although ILP has appeared in the highest-performance uniprocessors for the past 30 years, the 1980s saw it become a much more significant force in computer design. Several systems were built, and sold commercially, which pushed ILP far beyond where it had been before, both in terms of the amount of ILP offered and in the central role ILP played in the design of the system. By the end of the decade, advanced microprocessor design at all major CPU manufacturers had incorporated ILP, and new techniques for ILP have become a popular topic at academic conferences. This article provides an overview and historical perspective of the field of ILP and its development over the past three decades.
Implications of Classical Scheduling Results for Real-Time Systems
 IEEE COMPUTER
, 1995
Abstract

Cited by 121 (1 self)
Important classical scheduling theory results for real-time computing are identified. Implications of these results from the perspective of a real-time systems designer are discussed. Uniprocessor and multiprocessor results are addressed, as well as important issues such as future release times, precedence constraints, shared resources, task value, overloads, static versus dynamic scheduling, preemption versus non-preemption, multiprocessing anomalies, and metrics. Examples of the scheduling algorithms used in actual applications are given.
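One classical uniprocessor result in this area is that preemptive earliest-deadline-first (EDF) scheduling is optimal for meeting deadlines on a single processor. The sketch below simulates preemptive EDF in unit time steps; the job parameters are hypothetical, and the code illustrates the model rather than anything from the survey itself.

```python
# Preemptive earliest-deadline-first (EDF) on one processor, simulated in
# unit time steps. Job names and parameters are hypothetical.

def edf_preemptive(jobs):
    """jobs: name -> (release, deadline, work), all integers.
    Returns True iff EDF finishes every job by its deadline."""
    remaining = {name: work for name, (_, _, work) in jobs.items()}
    end = max(d for _, d, _ in jobs.values())
    for t in range(end):
        # a job that still has work at its deadline has already failed
        if any(remaining[n] > 0 and jobs[n][1] <= t for n in jobs):
            return False
        released = [(d, n) for n, (r, d, _) in jobs.items()
                    if r <= t and remaining[n] > 0]
        if released:
            _, n = min(released)   # preempt in favor of the earliest deadline
            remaining[n] -= 1
    return all(w == 0 for w in remaining.values())
```

Since preemptive EDF is optimal on one processor, a False result here means no uniprocessor schedule can meet all the deadlines for these jobs.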
Benchmarking and Comparison of the Task Graph Scheduling Algorithms
, 1999
Abstract

Cited by 80 (2 self)
The problem of scheduling a parallel program represented by a weighted directed acyclic graph (DAG) to a set of homogeneous processors for minimizing the completion time of the program has been extensively studied. The NP-completeness of the problem has stimulated researchers to propose a myriad of heuristic algorithms. While most of these algorithms are reported to be efficient, it is not clear how they compare against each other. A meaningful performance evaluation and comparison of these algorithms is a complex task and it must take into account a number of issues. First, most scheduling algorithms are based upon diverse assumptions, making the performance comparison rather purposeless. Second, there does not exist a standard set of benchmarks to examine these algorithms. Third, most algorithms are evaluated using small problem sizes, and, therefore, their scalability is unknown. In this paper, we first provide a taxonomy for classifying various algorithms into distinct categories a...
Two-processor scheduling with start-times and deadlines
 SIAM Journal on Computing
, 1977
Abstract

Cited by 65 (0 self)
Abstract. Given a set T = {T1, T2, ..., Tn} of tasks, each Ti having execution time 1, an integer start-time si > 0 and a deadline di > 0, along with precedence constraints among the tasks, we examine the problem of determining whether there exists a schedule on two identical processors that executes each task in the time interval between its start-time and deadline. We present an O(n^3) algorithm that constructs such a schedule whenever one exists. The algorithm may also be used in a binary search mode to find the shortest such schedule or to find a schedule that minimizes maximum "tardiness". A number of natural extensions of this problem are seen to be NP-complete and hence probably intractable. Key words: multiprocessing systems, scheduling algorithms, NP-complete problems. 1. Introduction. Since publication of the book Theory of Scheduling [4] by Conway, Maxwell, and Miller in 1967, considerable progress has been made in the mathematical analysis of abstract multiprocessing systems. One combinatorial model which is central to much of this work consists of a number m of identical, independent processors, a finite set {T1, T2, ..., Tn} of tasks to be executed,
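The paper's O(n^3) algorithm handles precedence constraints between the unit tasks. As a much simpler illustration of the same task model, the sketch below assumes independent unit tasks and, in each integer time slot, runs the two released tasks with the earliest deadlines, a standard greedy feasibility test on two processors. All task names and parameters are hypothetical.

```python
# Unit-time tasks with integer release times and deadlines on two
# identical processors. Independent tasks only; the paper's algorithm
# additionally handles precedence constraints.

def edf_two_processor(tasks):
    """tasks: dict name -> (release, deadline). At each integer time slot,
    run the (up to) two released, unfinished tasks with earliest deadlines.
    Returns a schedule {time: [names]} or None if some deadline is missed."""
    remaining = dict(tasks)
    schedule = {}
    t = 0
    horizon = max(d for _, d in tasks.values())
    while remaining and t < horizon:
        ready = sorted((d, name) for name, (r, d) in remaining.items() if r <= t)
        slot = []
        for d, name in ready[:2]:          # two processors
            if t + 1 > d:                  # unit task would finish past deadline
                return None
            slot.append(name)
            del remaining[name]
        if slot:
            schedule[t] = slot
        t += 1
    return schedule if not remaining else None
```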
List Scheduling with and without Communication Delays
 Parallel Computing
, 1993
Abstract

Cited by 35 (6 self)
Empirical results have shown that the classical critical path (CP) list scheduling heuristic for task graphs is a fast and practical heuristic when communication cost is zero. In the first part of this paper we study the theoretical properties of the CP heuristic that lead to near-optimum performance in practice. In the second part we extend the CP analysis to the problem of ordering the task execution when the processor assignment is given and communication cost is nonzero. We propose two new list scheduling heuristics, RCP and RCP3, that use critical-path information and ready-list priority scheduling. We show that the performance properties of RCP and RCP3, when communication is nonzero, are similar to those of CP when communication is zero. Finally, we present an extensive experimental study and optimality analysis of the heuristics which verifies our theoretical results.

1 Introduction

The processor scheduling problem is of considerable importance in parallel processing. Given a...
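A minimal sketch of critical-path-style list scheduling with zero communication cost, in the spirit of the CP heuristic discussed above (not the paper's RCP variants): each task's priority is its bottom level, the length of the longest path from the task to an exit task, and ready tasks are dispatched in decreasing priority onto the earliest-free processor. The DAG, weights, and two-processor setting are hypothetical.

```python
# Critical-path (CP) list scheduling sketch for a DAG with zero
# communication cost. Illustrative only.

def bottom_level(graph, weights):
    """Length of the longest (critical) path from each task to an exit task."""
    memo = {}
    def bl(task):
        if task not in memo:
            succ = graph.get(task, [])
            memo[task] = weights[task] + (max(bl(s) for s in succ) if succ else 0)
        return memo[task]
    for t in weights:
        bl(t)
    return memo

def cp_list_schedule(graph, weights, num_procs=2):
    """Greedy list scheduling: among ready tasks, pick the one with the
    largest bottom level and place it on the earliest-free processor."""
    bl = bottom_level(graph, weights)
    preds = {t: set() for t in weights}
    for t, succs in graph.items():
        for s in succs:
            preds[s].add(t)
    finish = {}                      # task -> finish time
    proc_free = [0] * num_procs      # next free time per processor
    schedule = []                    # (task, processor, start, end)
    unscheduled = set(weights)
    while unscheduled:
        ready = [t for t in unscheduled if preds[t] <= finish.keys()]
        task = max(ready, key=lambda t: bl[t])
        p = min(range(num_procs), key=lambda i: proc_free[i])
        start = max([proc_free[p]] + [finish[q] for q in preds[task]])
        finish[task] = start + weights[task]
        proc_free[p] = finish[task]
        schedule.append((task, p, start, finish[task]))
        unscheduled.remove(task)
    return schedule, max(finish.values())
```

On the diamond DAG A -> {B, C} -> D with weights 2, 3, 1, 2, the heuristic achieves a makespan equal to the critical-path length of A-B-D, which is optimal for that instance.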
Fairness Measures for Resource Allocation
 Proceedings of 41st IEEE Symposium on Foundations of Computer Science
, 2000
Abstract

Cited by 30 (1 self)
In many optimization problems, one seeks to allocate a limited set of resources to a set of individuals with demands. Thus, such allocations can naturally be viewed as vectors, with one coordinate representing each individual. Motivated by work in network routing and bandwidth assignment, we consider the problem of producing solutions that simultaneously approximate all feasible allocations in a coordinatewise sense. This is a very strong type of "global" approximation guarantee, and we explore its consequences in a range of discrete optimization problems, including facility location, scheduling, and bandwidth assignment in networks. A fundamental issue, one not encountered in the traditional design of approximation algorithms, is that good approximations in this global sense need not exist for every problem instance; there is no a priori reason why there should be an allocation that simultaneously approximates all others. As a result, the existential questions concerning such g...
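A classical and much simpler relative of the coordinatewise guarantees described above is max-min fairness, computable by water-filling; the sketch below illustrates only that neighboring notion, not the paper's approximation framework, and the capacity and demand values are hypothetical.

```python
# Max-min fair (water-filling) allocation of a shared capacity among
# demands. Values are illustrative.

def max_min_fair(capacity, demands):
    """Allocate `capacity` across demands so no individual's share can be
    raised without lowering an equal-or-smaller share."""
    alloc = {name: 0.0 for name in demands}
    unsatisfied = dict(demands)
    remaining = float(capacity)
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)   # equal split of what is left
        for name, demand in list(unsatisfied.items()):
            give = min(share, demand - alloc[name])  # cap at the demand
            alloc[name] += give
            remaining -= give
            if alloc[name] >= demand - 1e-12:
                del unsatisfied[name]
    return alloc
```

For capacity 10 and demands {2, 8, 8}, the small demand is fully satisfied and the leftover capacity is split evenly, giving allocations 2, 4, and 4.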
Formation of Overlapping Coalitions for PrecedenceOrdered TaskExecution Among Autonomous Agents
, 1996
Abstract

Cited by 30 (2 self)
Goal-satisfaction in multiagent environments via coalition formation may be beneficial in cases where agents cannot perform goals by themselves or they do so inefficiently. Agent coalition formation typically requires that each agent be a member of only one coalition, which may lead to a waste of resources and capabilities. Therefore, we present algorithms that lead agents to the formation of overlapping coalitions, where each coalition is assigned a goal. The algorithms we present are appropriate for agents working as a Distributed Problem Solving system in non-superadditive environments. They are anytime distributed algorithms with low computational complexity and a low ratio-bound. Content area: Cooperation and coordination. Kraus is also affiliated with the Institute for Advanced Computer Studies, University of Maryland. This material is based upon work supported in part by the NSF under Grant No. IRI-9423967 and Israeli Science Ministry grant No. 6288. 1 Introducti...
Analysis, Evaluation, and Comparison of Algorithms for Scheduling Task Graphs on Parallel Processors
 In Proceedings of the Second International Symposium on Parallel Architectures, Algorithms, and Networks
, 1996
Abstract

Cited by 28 (5 self)
In this paper, we survey algorithms that allocate a parallel program represented by an edge-weighted directed acyclic graph (DAG), also called a task graph or macro-dataflow graph, to a set of homogeneous processors, with the objective of minimizing the completion time. We analyze 21 such algorithms and classify them into four groups. The first group includes algorithms that schedule the DAG to a bounded number of processors directly. These algorithms are called the bounded number of processors (BNP) scheduling algorithms. The algorithms in the second group schedule the DAG to an unbounded number of clusters and are called the unbounded number of clusters (UNC) scheduling algorithms. The algorithms in the third group schedule the DAG using task duplication and are called the task duplication based (TDB) scheduling algorithms. The algorithms in the fourth group perform allocation and mapping on arbitrary processor network topologies. These algorithms are called the arbitrary processor network (APN) scheduling algorithms. The design philosophies and principles behind these algorithms are discussed, and the performance of all of the algorithms is evaluated and compared against each other on a unified basis by using various scheduling parameters.