Results 11–20 of 53
Backfilling with lookahead to optimize the packing of parallel jobs
 Journal of Parallel and Distributed Computing, Elsevier Science
Cited by 15 (7 self)

Abstract
The utilization of parallel computers depends on how jobs are packed together: if the jobs are not packed tightly, resources are lost due to fragmentation. The problem is that the goal of high utilization may conflict with goals of fairness or even progress for all jobs. The common solution is to use backfilling, which combines a reservation for the first job in the interest of progress with packing of later jobs to fill in holes and increase utilization. However, backfilling considers the queued jobs one at a time, and thus might miss better packing opportunities. We propose the use of dynamic programming to find the best packing possible given the current composition of the queue, thus maximizing the utilization on every scheduling step. Simulations of this algorithm, called LOS (Lookahead Optimizing Scheduler), using trace files from several IBM SP parallel systems, show that LOS indeed improves utilization, and thereby reduces the mean response time and mean slowdown of all jobs. Moreover, it is actually possible to limit the lookahead depth to about 50 jobs and still achieve essentially the same results. Finally, we experimented with selecting among alternative sets of jobs that achieve the same utilization. Surprising results indicate that choosing the set at the head of the queue does not necessarily guarantee best performance. Instead, repeatedly selecting the set with the maximal overall expected slowdown boosts performance when compared to all other alternatives checked.
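The lookahead packing step described above can be sketched as a subset-sum dynamic program over the queued jobs. This is a minimal illustration, not the paper's LOS implementation: the job sizes, the free-processor count, and the omission of the head-of-queue reservation are all simplifying assumptions.

```python
def best_packing(job_sizes, free_procs):
    """Return (procs_used, chosen_indices) maximizing processors used,
    where job_sizes[i] is the processor requirement of queued job i."""
    # reachable[p] = one set of job indices that fills exactly p processors
    reachable = {0: []}
    for i, size in enumerate(job_sizes):
        for used, chosen in list(reachable.items()):
            new = used + size
            if new <= free_procs and new not in reachable:
                reachable[new] = chosen + [i]
    best = max(reachable)  # highest achievable utilization
    return best, reachable[best]

# Hypothetical queue: jobs needing 4, 7, 3, and 5 processors; 10 are free.
used, chosen = best_packing([4, 7, 3, 5], free_procs=10)
print(used, chosen)  # 10 [1, 2] -- the 7- and 3-processor jobs fill all 10
```

A greedy first-fit pass over the same queue would start the 4-processor job and then be unable to fill the remaining 6 processors exactly, which is the packing loss the lookahead avoids.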
From fluid relaxations to practical algorithms for job shop scheduling: the makespan objective
 Mathematical Programming
, 2002
Cited by 14 (4 self)

Abstract
We design an algorithm for the high-multiplicity job-shop scheduling problem with the objective of minimizing the total holding cost by appropriately rounding an optimal solution to a fluid relaxation in which we replace discrete jobs with the flow of a continuous fluid. The algorithm solves the fluid relaxation optimally and then aims to keep the schedule in the discrete network close to the schedule given by the fluid relaxation. If the number of jobs from each type grows linearly with N, then the algorithm is within an additive factor O(N) from the optimal (which scales as O(N^2)); thus, it is asymptotically optimal. We report computational results on benchmark instances chosen from the OR library comparing the performance of the proposed algorithm and several commonly used heuristic methods. These results suggest that for problems of moderate to high multiplicity, the proposed algorithm outperforms these methods, and for very high multiplicity the overperformance is dramatic. For problems of low to moderate multiplicity, however, the relative errors of the heuristic methods are comparable to those of the proposed algorithm, and the best of these methods performs better overall than the proposed method. Received December 1999; revisions received July 2000, September 2001; accepted September 2002. Subject classifications: Production/scheduling, deterministic: approximation algorithms for deterministic job shops. Queues, optimization: asymptotically optimal solutions to queueing networks. Area of review: Manufacturing, Service, and Supply Chain Operations.
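The fluid relaxation underlying such algorithms admits a simple closed-form makespan: the congestion of the busiest machine, since fluid can be processed continuously with no idling. The sketch below computes that lower bound for a hypothetical two-machine instance; it illustrates the relaxation only, not the paper's rounding algorithm.

```python
def fluid_makespan(jobs, multiplicity):
    """Fluid-relaxation makespan lower bound for a job shop.
    jobs: list of routes, each a list of (machine, processing_time) operations.
    multiplicity[j]: number of identical copies of job type j."""
    congestion = {}
    for j, route in enumerate(jobs):
        for machine, p in route:
            congestion[machine] = congestion.get(machine, 0) + multiplicity[j] * p
    # In the fluid limit the makespan equals the busiest machine's total load.
    return max(congestion.values())

# Hypothetical instance: two job types on two machines, 10 copies of each.
jobs = [[("M1", 3), ("M2", 2)],   # type 0: M1 then M2
        [("M2", 4), ("M1", 1)]]   # type 1: M2 then M1
print(fluid_makespan(jobs, [10, 10]))  # M2 carries 10*2 + 10*4 = 60
```

The asymptotic-optimality claim above says the discrete schedule's makespan exceeds this bound by only O(N) when multiplicities grow like N.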
Online scheduling for sorting buffers
 In Proceedings of the 10th European Symposium on Algorithms (ESA)
, 2002
Cited by 13 (2 self)

Abstract
We introduce the online scheduling problem for sorting buffers. A service station and a sorting buffer are given. An input sequence of items which are only characterized by a specific attribute has to be processed by the service station which benefits from consecutive items with the same attribute value. The sorting buffer, which is a random access buffer with storage capacity for k items, can be used to rearrange the input sequence. The goal is to minimize the cost of the service station, i.e., the number of maximal subsequences in its sequence of items containing only items with the same attribute value. This problem is motivated by many applications in computer science and economics. The strategies are evaluated in a competitive analysis in which the cost of the online strategy is compared with the cost of an optimal offline strategy. Our main result is a deterministic strategy that achieves a competitive ratio of O(log² k). In addition, we show that several standard strategies are unsuitable for this problem, i.e., we prove a lower bound of Ω(√k) on the competitive ratio of the First In First Out (FIFO) and Least Recently Used (LRU) strategies and of Ω(k) on the competitive ratio of the Largest Color First (LCF) strategy.
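The sorting-buffer model can be simulated in a few lines. The eviction rule below (when the buffer overflows, extend the current output block if possible, else flush the most frequent color) is a naive hypothetical strategy for illustration, not one of the strategies analyzed in the paper.

```python
from collections import Counter

def serve(sequence, k):
    """Process an item sequence through a sorting buffer of capacity k;
    return the cost: the number of maximal same-color blocks emitted."""
    buffer, output = [], []

    def flush_color(c):
        nonlocal buffer
        output.extend(x for x in buffer if x == c)
        buffer = [x for x in buffer if x != c]

    for item in sequence:
        buffer.append(item)
        if len(buffer) > k:
            if output and output[-1] in buffer:
                flush_color(output[-1])  # extend the current block for free
            else:
                color, _ = Counter(buffer).most_common(1)[0]
                flush_color(color)
    while buffer:  # drain the remaining items color by color
        flush_color(buffer[0])
    return sum(1 for i, x in enumerate(output) if i == 0 or x != output[i - 1])

print(serve("abababab", k=4))  # 2 blocks (aaaa, bbbb); unbuffered cost is 8
```

Even this naive buffer reduces the alternating sequence from 8 color changes to 2, which is the kind of saving the competitive analysis quantifies against the offline optimum.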
The Pochoir Stencil Compiler
Cited by 10 (0 self)

Abstract
A stencil computation repeatedly updates each point of a d-dimensional grid as a function of itself and its near neighbors. Parallel cache-efficient stencil algorithms based on “trapezoidal decompositions” are known, but most programmers find them difficult to write. The Pochoir stencil compiler allows a programmer to write a simple specification of a stencil in a domain-specific stencil language embedded in C++, which the Pochoir compiler then translates into high-performing Cilk code that employs an efficient parallel cache-oblivious algorithm. Pochoir supports general d-dimensional stencils and handles both periodic and aperiodic boundary conditions in one unified algorithm. The Pochoir system provides a C++ template library that allows the user’s stencil specification to be executed directly in C++ without the Pochoir compiler (albeit more slowly), which simplifies user debugging and greatly simplified the implementation of the Pochoir compiler itself. A host of stencil benchmarks run on a modern multicore machine demonstrates that Pochoir outperforms standard parallel-loop implementations, typically running 2–10 times faster. The algorithm behind Pochoir improves on prior cache-efficient algorithms on multidimensional grids by making “hyperspace” cuts, which yield asymptotically more parallelism for the same cache efficiency.
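The kind of computation Pochoir targets can be illustrated with a one-dimensional three-point stencil (a discrete heat equation) with periodic boundaries, written here as the naive loop nest that trapezoidal decompositions improve upon. The grid values and averaging coefficients are hypothetical.

```python
def heat_1d(grid, steps):
    """Apply a three-point averaging stencil for `steps` time steps,
    with periodic (wrap-around) boundary conditions."""
    n = len(grid)
    for _ in range(steps):
        # Each point is updated as a function of itself and its two neighbors.
        grid = [0.25 * grid[(i - 1) % n] + 0.5 * grid[i] + 0.25 * grid[(i + 1) % n]
                for i in range(n)]
    return grid

print(heat_1d([0.0, 0.0, 4.0, 0.0], steps=1))  # [0.0, 1.0, 2.0, 1.0]
```

This version rescans the whole grid every time step; cache-oblivious decompositions instead cut space-time into trapezoids so each cache-sized tile is reused across many time steps, which is where the reported 2–10x speedups come from.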
The Lazy Bureaucrat Scheduling Problem
 Information and Computation
, 1999
Cited by 8 (1 self)

Abstract
We introduce a new class of scheduling problems in which the optimization is performed by the worker (single "machine") who performs the tasks. The worker's objective may be to minimize the amount of work he does (he is "lazy"). He is subject to a constraint that he must be busy when there is work that he can do; we make this notion precise, particularly in the case in which preemption is allowed. The resulting class of "perverse" scheduling problems, which we term "Lazy Bureaucrat Problems," gives rise to a rich set of new questions that explore the distinction between maximization and minimization in computing optimal schedules.

1 Introduction

Scheduling problems have been studied extensively from the point of view of the objectives of the enterprise that stands to gain from the completion of the set of jobs. We take a new look at the problem from the point of view of the workers who perform the tasks that earn the company its profits. In fact, it is natural to expect that som...
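The busy constraint can be made concrete with a tiny brute-force solver: whenever some job is available and can still meet its deadline, the worker must run one, and he chooses which, so as to minimize total work done. The instance and the no-preemption assumption below are illustrative, not from the paper.

```python
def min_work(jobs, t=0, done=frozenset()):
    """Minimum total work for a lazy bureaucrat on jobs of
    (arrival, length, deadline), without preemption."""
    # Jobs he is *forced* to consider: arrived, unfinished, still completable.
    runnable = [i for i, (a, p, d) in enumerate(jobs)
                if i not in done and a <= t and t + p <= d]
    if not runnable:
        future = [a for i, (a, p, d) in enumerate(jobs)
                  if i not in done and a > t]
        if not future:
            return 0                                  # nothing left: go home
        return min_work(jobs, min(future), done)      # idle until next arrival
    # He must work; pick the choice minimizing total work done.
    return min(jobs[i][1] + min_work(jobs, t + jobs[i][1], done | {i})
               for i in runnable)

# Hypothetical instance: running the long job first lets the two short
# jobs expire, so the lazy optimum is 5, not 5 + 2 + 2.
print(min_work([(0, 5, 10), (0, 2, 3), (0, 2, 3)]))  # 5
```

The example shows the inversion the paper studies: the work-minimizing choice is the one that makes the most remaining work infeasible.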
Traffic aided opportunistic scheduling for wireless networks: Algorithms and performance bounds
 In IEEE INFOCOM 2004, Hong Kong
, 2004
Cited by 8 (1 self)

Abstract
In multiuser wireless networks, opportunistic scheduling can improve the system throughput and thus reduce the total completion time. In this paper, we explore the possibility of reducing the completion time further by incorporating traffic information into opportunistic scheduling. More specifically, we first establish convexity properties for opportunistic scheduling with file size information. Then, we develop new traffic-aided opportunistic scheduling (TAOS) schemes by making use of file size information and channel variation in a unified manner. We also derive lower and upper bounds on the total completion time, which serve as benchmarks for examining the performance of the TAOS schemes. Our results show that the proposed TAOS schemes can yield a significant reduction in the total completion time. The impact of fading, file size distributions, and random arrivals and departures on system performance is also investigated. In particular, in the presence of user dynamics, the proposed TAOS schemes perform well when the arrival rate is reasonably high.
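The idea of folding traffic information into opportunistic scheduling can be sketched by comparing a rate-only rule with an SRPT-like rule that weights the channel rate by the remaining file size. The policies, rate trace, and file sizes below are hypothetical illustrations, not the paper's TAOS schemes or bounds.

```python
def greedy_rate(rates, remaining):
    """Opportunistic baseline: serve the active user with the best rate."""
    active = [u for u, r in enumerate(remaining) if r > 0]
    return max(active, key=lambda u: rates[u])

def traffic_aided(rates, remaining):
    """Traffic-aware variant: weight rate by remaining file size."""
    active = [u for u, r in enumerate(remaining) if r > 0]
    return max(active, key=lambda u: rates[u] / remaining[u])

def total_completion_time(rate_trace, files, policy):
    """Serve one user per slot; sum the slots in which each user finishes."""
    remaining, total = list(files), 0
    for slot, rates in enumerate(rate_trace, start=1):
        if all(r <= 0 for r in remaining):
            break
        u = policy(rates, remaining)
        remaining[u] -= rates[u]
        if remaining[u] <= 0:
            total += slot            # user u completes in this slot
    return total

trace = [(2, 3)] * 4                 # per-slot achievable rates, two users
print(total_completion_time(trace, [2, 6], greedy_rate))    # 5
print(total_completion_time(trace, [2, 6], traffic_aided))  # 4
```

Even on this static channel, finishing the small file first lowers total completion time, which is the effect file-size information exploits; the paper's schemes additionally ride channel variation.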
Online scheduling to minimize the maximum delay factor
 In SODA '09: Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms
, 2009
Cited by 8 (2 self)

Abstract
In this paper two scheduling models are addressed. First is the standard model (unicast) where requests (or jobs) are independent. The other is the broadcast model where broadcasting a page can satisfy multiple outstanding requests for that page. We consider online scheduling of requests when they have deadlines. Unlike previous models, which mainly consider the objective of maximizing throughput while respecting deadlines, here we focus on scheduling all the given requests with the goal of minimizing the maximum delay factor. The delay factor of a schedule is defined to be the minimum α ≥ 1 such that each request i is completed by time a_i + α(d_i − a_i), where a_i is the arrival time of request i and d_i is its deadline. The delay factor generalizes the previously defined measure of maximum stretch, which is based only on the processing times of requests [9, 11]. We prove strong lower bounds on the achievable competitive ratios for delay factor scheduling even with unit-time requests. Motivated by this, we consider resource augmentation analysis [24] and prove the following positive results. For the unicast model we give algorithms that are (1 + ε)-speed O(1/ε)-competitive in both the single machine and multiple machine settings. In the broadcast model we give an algorithm for same-sized pages that is (2 + ε)-speed O(1/ε²)-competitive. For arbitrary page sizes we give an algorithm that is (4 + ε)-speed O(1/ε²)-competitive.
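The delay factor defined above is straightforward to evaluate for a finished schedule: it is the maximum of (c_i − a_i)/(d_i − a_i) over all requests, clamped below at 1, where c_i is the completion time. A small sketch with hypothetical requests:

```python
def delay_factor(requests):
    """requests: list of (arrival, deadline, completion), deadline > arrival.
    Returns the minimum alpha >= 1 with completion <= a + alpha*(d - a)."""
    return max(1.0, max((c - a) / (d - a) for a, d, c in requests))

# First request meets its deadline; the second takes twice its slack.
print(delay_factor([(0, 10, 8), (2, 6, 10)]))  # (10 - 2) / (6 - 2) = 2.0
```

Note the normalization by the slack d_i − a_i rather than by processing time, which is exactly how the delay factor generalizes maximum stretch.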
Evaluation of Packet Scheduling Algorithms in Mobile Ad Hoc Networks
 ACM SIGMOBILE Mobile Computing and Communications Review
, 2002
Cited by 7 (0 self)

Abstract
In this paper, we analyze different packet scheduling algorithms to find those that most improve performance in congested networks.