Results 1–10 of 49
Metrics and benchmarking for parallel job scheduling
 In Job Scheduling Strategies for Parallel Processing
, 1998
Abstract

Cited by 72 (9 self)
Abstract. The evaluation of parallel job schedulers hinges on two things: the use of appropriate metrics, and the use of appropriate workloads on which the scheduler can operate. We argue that the focus should be on online open systems, and propose that a standard workload should be used as a benchmark for schedulers. This benchmark will specify distributions of parallelism and runtime, as found by analyzing accounting traces, and also internal structures that create different speedup and synchronization characteristics. As for metrics, we present some problems with slowdown and bounded slowdown that have been proposed recently.
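The slowdown and bounded-slowdown metrics this abstract critiques can be made concrete. A minimal sketch, assuming the usual definitions from the scheduling literature; the threshold value `tau` is an illustrative choice, not taken from the paper:

```python
def slowdown(wait, run):
    # slowdown = (wait + run) / run: how much longer the job took,
    # relative to running with no wait; blows up for very short jobs
    return (wait + run) / run

def bounded_slowdown(wait, run, tau=10.0):
    # bounded slowdown caps the denominator at a threshold tau (seconds),
    # so near-zero-runtime jobs do not dominate the metric; floored at 1
    return max((wait + run) / max(run, tau), 1.0)
```

The flooring and thresholding are exactly the kind of ad hoc choices whose pitfalls the paper discusses.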
Approximation Algorithms for Scheduling Malleable Tasks Under Precedence Constraints
Abstract

Cited by 35 (1 self)
This work presents approximation algorithms for scheduling the tasks of a parallel application that are subject to precedence constraints. The considered tasks are malleable which means that they may be executed on a varying number of processors in parallel.
Using Moldability to Improve the Performance of Supercomputer Jobs
, 2001
Abstract

Cited by 31 (7 self)
Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job.
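The decision the abstract describes — picking a request size before submission — can be sketched as minimizing estimated turnaround. Both cost models below (an Amdahl-style runtime and a wait time growing with request size) are hypothetical illustrations, not taken from the paper:

```python
def best_request(runtime, wait, candidates):
    # choose the processor count minimizing estimated turnaround,
    # i.e. estimated queue wait plus estimated execution time
    return min(candidates, key=lambda p: wait(p) + runtime(p))

# hypothetical models: 10 s serial fraction plus 90 s of parallelizable
# work, and a queue wait that grows with the size of the request
runtime = lambda p: 10 + 90 / p
wait = lambda p: 2 * p
```

With these models, requesting more processors shortens execution but lengthens the expected wait, so the best request is an interior point rather than the maximum.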
Scheduling Independent Multiprocessor Tasks
, 1997
Abstract

Cited by 28 (3 self)
We study the problem of scheduling a set of n independent multiprocessor tasks with prespecified processor allocations on a fixed number of processors. We propose a linear time algorithm that finds a schedule of minimum makespan in the preemptive model, and a linear time approximation algorithm that finds a schedule of length within a factor of (1 + ε) of optimal in the nonpreemptive model.

1 Introduction
A scheduling problem is usually given by a set T of n tasks, with an associated partial order which captures data dependencies between tasks, and a set Pm of m target processors. The goal is to assign tasks to processors and time steps so as to minimize an optimality criterion, for instance the makespan, i.e. the maximum completion time Cmax of any task. Depending on the model, tasks can be preempted or not. In the nonpreemptive model, a task once started has to be processed (until completion) without interruption. In the preemptive model, each task can be at no cost interrup...
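To make the makespan criterion Cmax defined in the introduction concrete — this illustrates the objective only, not the paper's algorithm for multiprocessor tasks — here is a greedy nonpreemptive list schedule for ordinary sequential tasks on m identical processors:

```python
import heapq

def makespan(durations, m):
    # greedy list scheduling: each task goes to the processor that
    # frees up earliest; the makespan Cmax is the latest finish time
    free = [0.0] * m          # next-free time of each processor
    heapq.heapify(free)
    for d in durations:
        heapq.heappush(free, heapq.heappop(free) + d)
    return max(free)
```

For example, three tasks of lengths 3, 3 and 2 on two processors finish at time 5: the third task starts on whichever processor frees first.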
Scheduling Malleable Parallel Tasks: An Asymptotic Fully Polynomial-Time Approximation Scheme
 Algorithmica
, 2004
Abstract

Cited by 27 (3 self)
A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of n independent malleable tasks on an arbitrary number m of parallel processors and propose an asymptotic fully polynomial time approximation scheme. For any fixed ε > 0, the algorithm computes a nonpreemptive schedule of length at most (1 + ε) times the optimum (plus an additive term) and has running time polynomial in n, m and 1/ε.
On Preemptive Resource Constrained Scheduling: Polynomial-time Approximation Schemes
, 2002
Abstract

Cited by 18 (9 self)
We study resource constrained scheduling problems where the objective is to compute feasible preemptive schedules minimizing the makespan and using no more resources than what are available.
Scheduling Parallel Tasks Approximation Algorithms
, 2003
Abstract

Cited by 13 (0 self)
Scheduling is a crucial problem in parallel and distributed processing. It consists of determining where and when the tasks of parallel programs will be executed. The design of parallel algorithms has to be reconsidered under the influence of new execution supports (namely, clusters of workstations, grid computing and global computing), which are characterized by a larger number of heterogeneous processors, often organized in hierarchical subsystems. The Parallel Tasks model (tasks that require more than one processor for their execution) was introduced about 15 years ago as a promising alternative for scheduling parallel applications, especially in the case of slow communication media. The basic idea is to consider the application at a coarse level of granularity (larger tasks, in order to decrease the relative weight of communications). As the main difficulty for scheduling in actual systems comes from handling the communications efficiently, this view of the problem allows them to be considered implicitly, thus leading to more tractable problems. We kindly invite the reader to look at the chapter by Maciej Drozdowski (in this book) for a detailed presentation of various kinds of Parallel Tasks in a general context, and the survey paper by Feitelson et al. [14] for a discussion in the field of parallel processing. Even if the basic problem of scheduling Parallel Tasks remains NP-hard, some approximation algorithms can be designed. Many results have been derived recently for scheduling the different types of Parallel Tasks, namely Rigid, Moldable or Malleable ones. We will distinguish Parallel Tasks inside the same application from those between applications in a multi-user context. Various optimization criteria will be discussed. This chapter aims to present several approximation algorithms for scheduling moldable and malleable tasks, with a special emphasis on new execution supports.
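The Rigid/Moldable/Malleable distinction mentioned above hinges on when the allotment may change. A minimal sketch of the malleable case, assuming a linear-speedup model (an illustrative simplification; real malleable tasks rarely scale linearly):

```python
def malleable_finish(work, phases):
    # a malleable task's allotment may change while it runs; under an
    # assumed linear-speedup model, the finish time follows from
    # consuming `work` processor-seconds over (duration, procs) phases
    t = 0.0
    for duration, procs in phases:
        capacity = procs * duration
        if capacity >= work:          # task completes within this phase
            return t + work / procs
        work -= capacity
        t += duration
    return None  # allotment schedule too short to finish the task
```

A rigid task, by contrast, would use a single fixed allotment throughout, and a moldable task a single allotment chosen before it starts.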
A Lagrangian Heuristic for Satellite Range Scheduling with Resource Constraints
Abstract

Cited by 9 (0 self)
The task of scheduling communications between satellites and ground control stations is becoming more and more critical, since an increasing number of satellites must be controlled by a small set of stations. In such a congested scenario, the current practice, in which experts build handmade schedules, often leaves a large number of communication requests unserved. We report on our experience in the design of an optimization-based support tool at the European Space Agency. We propose a tight time-indexed formulation of the problem able to include several complex technological constraints. A nonstandard Lagrangian heuristic is then devised which provides near-optimal solutions to a set of large-scale test problems arising in the forthcoming GALILEO constellation. The heuristic shows numerical stability and robustness adequate for practical implementation. The resulting tool is used by the Italian reference operator for GALILEO system management and is currently under testing at the European Space Agency.
Multiprocessor task scheduling in multistage hybrid flow shops: A genetic algorithm approach
 Journal of the Operational Research Society
Abstract

Cited by 8 (0 self)
This paper considers multiprocessor task scheduling in a multistage hybrid flow-shop environment. The objective is to minimize the makespan, i.e. the completion time of all the tasks in the last stage. This problem is of practical interest in the textile and process industries. A genetic algorithm (GA) is developed to solve the problem. The GA is tested against a lower bound from the literature, as well as against heuristic rules, on a test bed comprising 400 problems with up to 100 jobs, 10 stages, and up to 5 processors at each stage. For small problems, solutions found by the GA are compared to optimal solutions, which are obtained by total enumeration. For larger problems, optimum solutions are estimated by a statistical prediction technique. Computational results show that the GA is both effective and efficient for the current problem. Test problems are provided on a web site at www.benchmark.ibu.edu.tr/mpthfsp Key words: multiprocessor tasks, hybrid flowshops, makespan minimization, genetic algorithms.
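The hybrid flow-shop makespan objective above can be sketched with a simple FIFO dispatching rule — shown here for single-processor jobs only, as an illustration of the setting rather than the paper's GA or its multiprocessor tasks:

```python
import heapq

def hfs_makespan(jobs, machines_per_stage):
    # jobs[j][s] = processing time of job j at stage s; each stage has
    # identical parallel machines; jobs enter each stage in order of
    # the time they left the previous stage (a simple FIFO rule)
    ready = [0.0] * len(jobs)          # time each job exits the prior stage
    for s, m in enumerate(machines_per_stage):
        free = [0.0] * m               # next-free time of each machine
        heapq.heapify(free)
        for j in sorted(range(len(jobs)), key=lambda j: ready[j]):
            start = max(heapq.heappop(free), ready[j])
            ready[j] = start + jobs[j][s]
            heapq.heappush(free, ready[j])
    return max(ready)                  # completion time of the last stage
```

Even this toy rule shows how adding a machine at a bottleneck stage shortens the makespan; the paper's GA searches over much better job orderings.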
General Multiprocessor Task Scheduling: Approximate Solutions in Linear Time
 Proceedings 6th International Workshop on Algorithms and Data Structures, LNCS 1663
, 1999
Abstract

Cited by 7 (3 self)
We study the problem of scheduling a set of n independent tasks on a fixed number of parallel processors, where the execution time of a task is a function of the subset of processors assigned to the task. We propose a fully polynomial approximation scheme that for any fixed ε > 0 finds a preemptive schedule of length at most (1 + ε) times the optimum in O(n) time. We also discuss the nonpreemptive variant of the problem, and present a polynomial approximation scheme that computes an approximate solution of any fixed accuracy in linear time. In terms of the running time, this linear complexity bound gives a substantial improvement over the best previously known polynomial bound [7].