Results 1–10 of 20
Scheduling in the Dark
, 1999
"... We considered nonclairvoyant multiprocessor scheduling of jobs with arbitrary arrival times and changing execution characteristics. The problem has been studied extensively when either the jobs all arrive at time zero, or when all the jobs are fully parallelizable, or when the scheduler has conside ..."
Abstract

Cited by 71 (15 self)
 Add to MetaCart
We considered nonclairvoyant multiprocessor scheduling of jobs with arbitrary arrival times and changing execution characteristics. The problem has been studied extensively when either the jobs all arrive at time zero, or when all the jobs are fully parallelizable, or when the scheduler has considerable knowledge about the jobs. This paper considers for the first time this problem without any of these three restrictions and provides new upper and lower bound techniques applicable in this more difficult scenario. The results are of both theoretical and practical interest. In our model, a job can arrive at any arbitrary time and its execution characteristics can change through the life of the job from being anywhere from fully parallelizable to completely sequential. We assume that the scheduler has no knowledge about the jobs except for knowing when a job arrives and knowing when it completes. (This is why we say that the scheduler is completely in the dark.) Given all this, we prove t...
On-Line Scheduling: A Survey
, 1997
"... Scheduling has been studied extensively in many varieties and from many viewpoints. Inspired by applications in practical computer systems, it developed into a theoretical area with many interesting results, both positive and negative. The basic situation we study is the following. We have some sequ ..."
Abstract

Cited by 36 (0 self)
 Add to MetaCart
Scheduling has been studied extensively in many varieties and from many viewpoints. Inspired by applications in practical computer systems, it developed into a theoretical area with many interesting results, both positive and negative. The basic situation we study is the following. We have some sequence of jobs that have to be processed on the machines available to us. In the most basic problem, each job is characterized by its running time and has to be scheduled for that time on one of the machines. In other variants there may be additional restrictions or relaxations specifying which schedules are allowed. We want to schedule the jobs as efficiently as possible, which most often means that the total length of the schedule (the makespan) should be as small as possible, but other objective functions are also considered. The notion of an online algorithm is intended to formalize the realistic scenario, where the algorithm does not have access to the whole inp...
Preemptive scheduling of parallel jobs on multiprocessors
 In SODA
, 1996
"... Abstract. We study the problem of processor scheduling for n parallel jobs applying the method of competitive analysis. We prove that for jobs with a single phase of parallelism, a preemptive scheduling algorithm without information about job execution time can achieve a mean completion time within ..."
Abstract

Cited by 35 (3 self)
 Add to MetaCart
Abstract. We study the problem of processor scheduling for n parallel jobs applying the method of competitive analysis. We prove that for jobs with a single phase of parallelism, a preemptive scheduling algorithm without information about job execution time can achieve a mean completion time within 2 − 2/(n+1) times the optimum. In other words, we prove a competitive ratio of 2 − 2/(n+1). The result is extended to jobs with multiple phases of parallelism (which can be used to model jobs with sublinear speedup) and to interactive jobs (with phases during which the job has no CPU requirements) to derive solutions guaranteed to be within 4 − 4/(n+1) times the optimum. In comparison with previous work, our assumption that job execution times are unknown prior to their completion is more realistic, our multiphased job model is more general, and our approximation ratio (for jobs with a single phase of parallelism) is tighter and cannot be improved. While this work presents theoretical results obtained using competitive analysis, we believe that the results provide insight into the performance of practical multiprocessor scheduling algorithms that operate in the absence of complete information.
Online scheduling
 Online Algorithms, Lecture Notes in Computer Science 1442
, 1998
"... Scheduling has been studied extensively in many varieties and from many viewpoints. Inspired by applications in practical computer systems, it developed into a theoretical area with many interesting results, both positive and negative. The basic situation we study is the following. We have some sequ ..."
Abstract

Cited by 31 (2 self)
 Add to MetaCart
Scheduling has been studied extensively in many varieties and from many viewpoints. Inspired by applications in practical computer systems, it developed into a theoretical area with many interesting results, both positive and negative. The basic situation we study is the following. We have some sequence of jobs
Trade-offs between Speed and Processor in Hard-deadline Scheduling
, 1999
"... This paper revisits the problem of online scheduling of sequential jobs with hard deadlines in a preemptive, multiprocessor setting. An online scheduling algorithm is said to be optimal if it can schedule any set of jobs to meet their deadlines whenever it is feasible in the offline sense. It is ..."
Abstract

Cited by 15 (7 self)
 Add to MetaCart
This paper revisits the problem of online scheduling of sequential jobs with hard deadlines in a preemptive, multiprocessor setting. An online scheduling algorithm is said to be optimal if it can schedule any set of jobs to meet their deadlines whenever it is feasible in the offline sense. It is known that the earliest-deadline-first strategy (EDF) is optimal in a one-processor setting, and there is no optimal online algorithm in an m-processor setting where m ≥ 2. Recent work [Phillips et al., STOC 97] however reveals that if the online algorithm is given faster processors, EDF is actually optimal for all m (e.g., when m = 2, it suffices to use processors 1.5 times as fast). This paper initiates the study of the tradeoff between increasing the speed and using more processors in deriving optimal online scheduling algorithms. Several upper bound and lower bound results are presented. For example, the speed requirement of EDF can be reduced to 2 − (1+p)/(m+p) when it is given p ...
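The EDF strategy discussed above can be illustrated with a toy discrete-time simulation (a hedged sketch, not the paper's formal model: the job encoding, unit-length steps, and the `edf_completion_times` helper are all illustrative assumptions):

```python
# Toy discrete-time sketch of preemptive EDF on m processors of a given speed.
# Job tuples (release, work, deadline) and unit-length steps are illustrative
# assumptions, not the paper's model.

def edf_completion_times(jobs, m, speed=1.0, horizon=100):
    """jobs: list of (release, work, deadline) for sequential jobs.
    Each step, the at most m released, unfinished jobs with the earliest
    deadlines each receive `speed` units of processing (preemption is free).
    Returns each job's completion time, or None if unfinished at the horizon."""
    n = len(jobs)
    remaining = [work for (_, work, _) in jobs]
    done = [None] * n
    for t in range(horizon):
        ready = [i for i in range(n) if jobs[i][0] <= t and done[i] is None]
        ready.sort(key=lambda i: jobs[i][2])   # earliest deadline first
        for i in ready[:m]:
            remaining[i] -= speed
            if remaining[i] <= 1e-9:
                done[i] = t + 1                # finishes within this step
    return done

# One processor, two jobs released together: EDF runs the tight-deadline job
# first, so both meet their deadlines (completion times 3 and 1, deadlines 10 and 2).
print(edf_completion_times([(0, 2.0, 10), (0, 1.0, 2)], m=1))   # -> [3, 1]
```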
Transactional Contention Management as a Non-Clairvoyant Scheduling Problem
, 2007
"... The transactional approach to contention management guarantees atomicity by making sure that whenever two transactions have a conflict on a resource, only one of them proceeds. A major challenge in implementing this approach lies in guaranteeing progress, since transactions are often restarted. Insp ..."
Abstract

Cited by 15 (1 self)
 Add to MetaCart
The transactional approach to contention management guarantees atomicity by making sure that whenever two transactions have a conflict on a resource, only one of them proceeds. A major challenge in implementing this approach lies in guaranteeing progress, since transactions are often restarted. Inspired by the paradigm of nonclairvoyant job scheduling, we analyze the performance of a contention manager by comparison with an optimal, clairvoyant contention manager that knows the list of resource accesses that will be performed by each transaction, as well as its release time and duration. The realistic, nonclairvoyant contention manager is evaluated by the competitive ratio between the last completion time (makespan) it provides and the makespan provided by an optimal contention manager. Assuming that the amount of exclusive accesses to the resources is nonnegligible, we present a simple proof that every work conserving contention manager guaranteeing the pending commit property achieves an O(s) competitive ratio, where s is the number of resources. This bound holds for the Greedy contention manager studied by Guerraoui et
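The kind of work-conserving, timestamp-priority contention management analyzed here can be sketched in a unit-step toy model (the tuple encoding and the `greedy_makespan` helper are illustrative assumptions, not the paper's algorithm):

```python
# Unit-step sketch of a Greedy-style, work-conserving contention manager:
# each step, transactions are ranked by release time (earlier = higher
# priority) and greedily admitted when their resource set does not conflict
# with an already-admitted transaction. Encoding and names are illustrative.

def greedy_makespan(transactions):
    """transactions: list of (release, duration, resources), resources a
    frozenset. Admitted transactions advance one unit per step; returns the
    makespan, i.e. the time at which the last transaction commits."""
    n = len(transactions)
    progress = [0] * n
    t = 0
    while any(progress[i] < transactions[i][1] for i in range(n)):
        order = sorted(
            (i for i in range(n)
             if transactions[i][0] <= t and progress[i] < transactions[i][1]),
            key=lambda i: transactions[i][0],
        )
        locked = set()
        for i in order:
            if not (transactions[i][2] & locked):  # no conflict: admit
                locked |= transactions[i][2]
                progress[i] += 1
        t += 1
    return t

# Two transactions fighting over one resource serialize, while disjoint
# resource sets let them run concurrently.
r, a, b = frozenset({"r"}), frozenset({"a"}), frozenset({"b"})
print(greedy_makespan([(0, 2, r), (0, 2, r)]))   # -> 4
print(greedy_makespan([(0, 2, a), (0, 2, b)]))   # -> 2
```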
Adaptive Work Stealing with Parallelism Feedback
"... Abstract We present an adaptive workstealing thread scheduler, ASTEAL, for forkjoin multithreaded jobs, like those written using the Cilk multithreaded language or the Hood workstealinglibrary. The ASTEAL algorithm is appropriate for large parallel servers where many jobs share a common multipr ..."
Abstract

Cited by 14 (3 self)
 Add to MetaCart
Abstract. We present an adaptive work-stealing thread scheduler, A-STEAL, for fork-join multithreaded jobs, like those written using the Cilk multithreaded language or the Hood work-stealing library. The A-STEAL algorithm is appropriate for large parallel servers where many jobs share a common multiprocessor resource and in which the number of processors available to a particular job may vary during the job's execution. A-STEAL provides continual parallelism feedback to a job scheduler in the form of processor requests, and the job must adapt its execution to the processors allotted to it. Assuming that the job scheduler never allots any job more processors than requested by the job's thread scheduler, A-STEAL guarantees that the job completes in near-optimal time while utilizing at least a constant fraction of the allotted processors. Our analysis models the job scheduler as the thread scheduler's adversary, challenging the thread scheduler to be robust to the system environment and the job scheduler's administrative policies. We analyze the performance of A-STEAL using "trim analysis," which allows us to prove that our thread scheduler performs poorly on at most a small number of time steps, while exhibiting near-optimal behavior on the vast majority. To be precise, suppose that a job has work T1 and critical-path length T∞. On a machine with P processors, A-STEAL completes the job in expected O(T1/P̃ + T∞ + L lg P) time steps, where L is the length of a scheduling quantum and P̃ denotes the O(T∞ + L lg P)-trimmed availability. This quantity is the average of the processor availability over all but
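The parallelism-feedback idea can be illustrated with a small multiplicative-adjustment rule in the spirit of A-STEAL (the constants rho and delta, the efficiency measure, and the `next_desire` helper are illustrative assumptions, not the paper's exact algorithm):

```python
# Sketch of multiplicative parallelism feedback in the spirit of A-STEAL.
# The constants and the three-way rule are illustrative assumptions; the
# paper's precise algorithm and analysis differ in detail.

def next_desire(desire, allotted, efficiency, rho=2, delta=0.8):
    """Compute the next quantum's processor request.
    efficiency: fraction of allotted processor-time spent on useful work."""
    if efficiency < delta:
        return max(1, desire // rho)   # inefficient quantum: request less
    if allotted >= desire:
        return desire * rho            # efficient and satisfied: request more
    return desire                      # efficient but deprived: hold steady

print(next_desire(8, 8, 0.95))   # -> 16 (grow after an efficient, satisfied quantum)
print(next_desire(8, 8, 0.40))   # -> 4  (shrink after an inefficient quantum)
print(next_desire(8, 4, 0.95))   # -> 8  (hold when efficient but deprived)
```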
Multi-core real-time scheduling for generalized parallel task models
 In Proc. of the 32nd Real-Time Sys. Symp.
, 2011
"... Abstract—Multicore processors offer a significant performance increase over single core processors. Therefore, they have the potential to enable computationintensive realtime applications with stringent timing constraints that cannot be met on traditional singlecore processors. However, most res ..."
Abstract

Cited by 13 (8 self)
 Add to MetaCart
Abstract—Multi-core processors offer a significant performance increase over single-core processors. Therefore, they have the potential to enable computation-intensive real-time applications with stringent timing constraints that cannot be met on traditional single-core processors. However, most results in traditional multiprocessor real-time scheduling are limited to sequential programming models and ignore intra-task parallelism. In this paper, we address the problem of scheduling periodic parallel tasks with implicit deadlines on multi-core processors. We first consider a synchronous task model where each task consists of segments, each segment having an arbitrary number of parallel threads that synchronize at the end of the segment. We propose a new task decomposition method that decomposes each parallel task into a set of sequential tasks. We prove that our task decomposition achieves a resource augmentation bound of 4 and 5 when the decomposed tasks are scheduled using global EDF and partitioned deadline monotonic scheduling, respectively. Finally, we extend our analysis to a directed acyclic graph (DAG) task model where each node in the DAG has unit execution requirement. We show how these tasks can be converted into synchronous tasks such that the same transformation can be applied and the same augmentation bounds hold. Keywords: parallel task; multi-core processor; real-time scheduling; resource augmentation bound.
Improving Parallel Job Scheduling Using Runtime Measurements
"... We investigate the use of runtime measurements to improve job scheduling on a parallel machine. Emphasis is on gang scheduling based strategies. With the information gathered at runtime, we define a task classification scheme based on fuzzy logic and Bayesian estimators. The resulting local tas ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
We investigate the use of runtime measurements to improve job scheduling on a parallel machine. Emphasis is on gang-scheduling-based strategies. With the information gathered at runtime, we define a task classification scheme based on fuzzy logic and Bayesian estimators. The resulting local task classification is used to provide better service to I/O-bound and interactive jobs under gang scheduling. This is achieved through the use of idle times and also by controlling the spinning time of a task in the spin-block mechanism depending on the node's workload. Simulation results show considerable improvements, in particular for I/O-bound workloads, in both throughput and machine utilization for a gang scheduler using runtime information compared with gang schedulers for which this type of information is not available.
Provably efficient two-level adaptive scheduling
 In JSSPP, Saint-Malo
, 2006
"... Abstract. Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernellevel job scheduler allots processors to jobs and a userlevel thread scheduler maps the ready threads of a job onto the allotted processors. This paper presents twolevel sch ..."
Abstract

Cited by 6 (6 self)
 Add to MetaCart
Abstract. Multiprocessor scheduling in a shared multiprogramming environment can be structured in two levels, where a kernel-level job scheduler allots processors to jobs and a user-level thread scheduler maps the ready threads of a job onto the allotted processors. This paper presents two-level scheduling schemes for scheduling "adaptive" multithreaded jobs whose parallelism can change during execution. The AGDEQ algorithm uses dynamic equipartitioning (DEQ) as the job-scheduling policy and an adaptive greedy algorithm (A-Greedy) as the thread scheduler. The ASDEQ algorithm uses DEQ for job scheduling and an adaptive work-stealing algorithm (A-Steal) as the thread scheduler. AGDEQ is suitable for scheduling in centralized scheduling environments, and ASDEQ is suitable for more decentralized settings. Both two-level schedulers achieve O(1)-competitiveness with respect to makespan for any set of multithreaded jobs with arbitrary release times. They are also O(1)-competitive with respect to mean response time for any batched jobs. Moreover, because the length of the scheduling quantum can be adjusted to amortize the cost of context-switching during processor reallocation, our schedulers provide control over the scheduling overhead and ensure effective utilization of processors.
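The dynamic-equipartitioning policy used by both schedulers can be sketched as a simple integer allotment routine (a hedged sketch: the rounding, the `deq_allot` helper, and the dict encoding are illustrative assumptions, not the paper's definition):

```python
# Simplified sketch of dynamic equipartitioning (DEQ): split P processors
# equally among jobs, cap each job at its desire, and redistribute surplus.
# Integer rounding and the helper name are illustrative assumptions.

def deq_allot(P, desires):
    """desires: dict job -> requested processors. Returns dict job -> allotment."""
    alloc = {}
    pending = dict(desires)
    remaining = P
    while pending:
        share = remaining // len(pending)
        small = [j for j, d in pending.items() if d <= share]
        if small:
            for j in small:            # fully satisfy modest requests first
                alloc[j] = pending.pop(j)
                remaining -= alloc[j]
        else:                          # everyone wants more: equipartition
            for j in pending:
                alloc[j] = share
                remaining -= share
            pending = {}
    return alloc

# 12 processors, one modest job and two hungry ones: the modest job gets its
# desire and the remainder is split evenly between the others.
print(deq_allot(12, {"a": 2, "b": 10, "c": 10}))   # -> {'a': 2, 'b': 5, 'c': 5}
```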