Results 1–10 of 12
Scalably scheduling processes with arbitrary speedup curves
In ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, 2009
Abstract

Cited by 44 (16 self)
“With multicore it’s like we are throwing this Hail Mary pass down the field and now we have to run down there as fast as we can to see if we can catch it.” — David Patterson, UC Berkeley computer science professor. We give a scalable ((1+ε)-speed O(1)-competitive) nonclairvoyant algorithm for scheduling jobs with sublinear nondecreasing speedup curves on multiple processors with the objective of average response time.
Speed Scaling of Processes with Arbitrary Speedup Curves on a Multiprocessor
Abstract

Cited by 24 (8 self)
We consider the setting of a multiprocessor where the speeds of the m processors can be individually scaled. Jobs arrive over time and have varying degrees of parallelizability. A nonclairvoyant scheduler must assign the processes to processors and scale the speeds of the processors. We consider the objective of energy plus flow time. We assume that a processor running at speed s uses power s^α for some constant α > 1. For processes that may have side effects or that are not checkpointable, we show an Ω(m^((α−1)/α²)) bound on the competitive ratio of any randomized algorithm. For checkpointable processes without side effects, we give an O(log m)-competitive algorithm. Thus for processes that may have side effects or that are not checkpointable, the achievable competitive ratio grows quickly with the number of processors, but for checkpointable processes without side effects, the achievable competitive ratio grows slowly with the number of processors. We then show a lower bound of Ω(log^(1/α) m) on the competitive ratio of any randomized algorithm for checkpointable processes without side effects.
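The power model in the abstract above can be made concrete with a minimal sketch (job parameters are illustrative, not from the paper): a single job of work w run at constant speed s finishes in w/s time and spends energy (w/s)·s^α = w·s^(α−1), and setting the derivative of flow plus energy to zero gives the optimal speed s = (α−1)^(−1/α), independent of w.

```python
def flow_plus_energy(w, s, alpha):
    """Flow time plus energy for a single job of work w run at constant
    speed s, under the power model P(s) = s**alpha with alpha > 1."""
    flow = w / s                    # time to finish the job
    energy = flow * s ** alpha      # power * duration = w * s**(alpha - 1)
    return flow + energy

def optimal_speed(alpha):
    """Speed minimizing flow + energy: d/ds [w/s + w*s**(alpha-1)] = 0
    gives s = (alpha - 1)**(-1/alpha), independent of the work w."""
    return (alpha - 1) ** (-1.0 / alpha)

# Sanity check: the analytic optimum beats nearby speeds.
alpha, w = 3.0, 10.0
s_star = optimal_speed(alpha)   # (3-1)**(-1/3) ≈ 0.794
assert flow_plus_energy(w, s_star, alpha) <= flow_plus_energy(w, 1.1 * s_star, alpha)
assert flow_plus_energy(w, s_star, alpha) <= flow_plus_energy(w, 0.9 * s_star, alpha)
```

This single-job calculation only illustrates the energy/flow trade-off; the paper's multiprocessor, online, nonclairvoyant setting is far harder.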
Scheduling jobs with varying parallelizability to reduce variance
 In SPAA ’10: 22nd ACM Symposium on Parallelism in Algorithms and Architectures
, 2010
Abstract

Cited by 12 (10 self)
We give a (2+ε)-speed O(1)-competitive algorithm for scheduling jobs with arbitrary speedup curves for the ℓ_2 norm of flow. We give a similar result for the broadcast setting with varying page sizes.
Competitive Two-Level Adaptive Scheduling Using Resource Augmentation
Abstract

Cited by 5 (5 self)
Abstract. As multicore processors proliferate, it has become more important than ever to ensure efficient execution of parallel jobs on multiprocessor systems. In this paper, we study the problem of scheduling parallel jobs with arbitrary release times on multiprocessors while minimizing the jobs’ mean response time. We focus on nonclairvoyant scheduling schemes that adaptively reallocate processors based on periodic feedback from the individual jobs. Since it is known that no deterministic nonclairvoyant algorithm is competitive for this problem, we focus on resource augmentation analysis and show that two adaptive algorithms, Agdeq and Abgdeq, achieve competitive performance using O(1) times faster processors than the adversary. These results are obtained through a general framework for analyzing the mean response time of any two-level adaptive scheduler. Our simulation results verify the effectiveness of Agdeq and Abgdeq by evaluating their performance over a wide range of workloads consisting of synthetic parallel jobs with different parallelism characteristics.
Competitive algorithms from competitive equilibria: nonclairvoyant scheduling under polyhedral constraints
 In Symposium on Theory of Computing, STOC 2014
Abstract

Cited by 5 (4 self)
We introduce and study a general scheduling problem that we term the Packing Scheduling problem (PSP). In this problem, jobs can have different arrival times and sizes; a scheduler can process job j at rate x_j, subject to arbitrary packing constraints over the set of rates (x) of the outstanding jobs. The PSP framework captures a variety of scheduling problems, including the classical problems of unrelated machines scheduling, broadcast scheduling, and scheduling jobs of different parallelizability. It also captures scheduling constraints arising in diverse modern environments ranging from individual computer architectures to data centers. More concretely, PSP models multidimensional resource requirements and parallelizability, as well as network bandwidth requirements found in data center scheduling. In this paper, we design nonclairvoyant online algorithms for PSP and its special cases – in this setting, the scheduler is unaware of the sizes of jobs. Our results are summarized as follows. • For minimizing total weighted completion time, we show an O(1)-competitive algorithm. Surprisingly, we achieve this result by applying the well-known Proportional Fairness algorithm (PF) to perform allocations at each time instant. Though PF has been extensively studied in the context of maximizing fairness in resource allocation, we present the first analysis in adversarial and gen…
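The Proportional Fairness allocation mentioned in the abstract above maximizes Σ_j w_j log x_j over the feasible rates. A minimal sketch for the simplest special case, a single shared resource with Σ_j x_j ≤ capacity (function name and example weights are illustrative): the KKT conditions give the closed form x_j = capacity · w_j / Σ w.

```python
def proportional_fairness_single_resource(weights, capacity=1.0):
    """Proportional Fairness for the special case of one shared resource:
    maximize sum_j w_j * log(x_j) subject to sum_j x_j <= capacity.
    The KKT conditions yield x_j = capacity * w_j / sum(weights)."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

rates = proportional_fairness_single_resource([1.0, 2.0, 1.0])
assert rates == [0.25, 0.5, 0.25]       # the weight-2 job gets twice the rate
assert abs(sum(rates) - 1.0) < 1e-9     # capacity is fully used
```

For general packing constraints (the actual PSP setting), computing the PF allocation requires solving a convex program at each instant rather than this closed form.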
Online Scalable Scheduling for the ℓ_k-norms of Flow Time Without Conservation of Work
Abstract

Cited by 4 (4 self)
We address the scheduling model of arbitrary speedup curves and the broadcast scheduling model. The former occurs when jobs are scheduled in a multicore system or on a cloud of machines. Here jobs can be sped up when given more processors or machines. However, the parallelizability of the jobs may vary, and the algorithm is required to be oblivious of the parallelizability of a job. The latter model is natural in wireless and LAN networks where requests (or jobs) can be simultaneously satisfied together. Both settings are similar in that two schedules can do different amounts of work to satisfy all the jobs. We focus on optimizing the ℓ_k norms of flow time. Recently, Gupta et al. [24] gave a (k + ε)-speed O(1)-competitive algorithm for the ℓ_k norms of flow time in both scheduling settings for fixed k. Inspired by this work, we give the first analysis of a scalable algorithm, i.e. (1 + ε)-speed O(1)-competitive, for all ℓ_k-norms of flow time in both settings for fixed k and 0 < ε ≤ 1. Both problems have a strong lower bound without resource augmentation, so this is the best result that can be shown in the worst-case setting up to a constant factor in the competitive ratio.
ONLINE SCHEDULING ALGORITHMS FOR AVERAGE FLOW TIME AND ITS VARIANTS
, 2012
Abstract

Cited by 3 (2 self)
This dissertation focuses on scheduling problems that are found in a client-server setting where multiple clients and one server (or multiple servers) are the participating entities. Clients send their requests to the server(s) over time, and the server needs to satisfy the requests using its resources. This setting is prevalent in many applications including multi-user operating systems, web servers, database servers, and so on. A natural objective for each client is to minimize the flow time (or equivalently response time) of her request, which is defined as its completion time minus its release time. The server, with multiple requests to serve in its queue, has to prioritize the requests for scheduling. Inherently, the server needs a global scheduling objective to optimize. We mainly study the scheduling objective of minimizing ℓ_k-norms of flow time of all requests, where 1 ≤ k < ∞. These objectives can be used to balance average performance and fairness. A popular performance measure for online scheduling algorithms is competitive…
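The ℓ_k-norm objective in the abstract above has a direct computational reading: with F_j = completion − release for each job, the objective is (Σ_j F_j^k)^(1/k). A minimal sketch (job data is made up for illustration); note how k = 1 gives total flow time while larger k penalizes outlier flow times more, which is the average-performance-versus-fairness trade-off the abstract mentions.

```python
def lk_norm_of_flow(jobs, k):
    """jobs: list of (release_time, completion_time) pairs.
    Flow time of a job = completion - release; returns the l_k norm
    (sum of flow**k) ** (1/k). k = 1 gives total flow time."""
    flows = [c - r for (r, c) in jobs]
    return sum(f ** k for f in flows) ** (1.0 / k)

jobs = [(0, 2), (1, 4), (2, 3)]                      # flow times: 2, 3, 1
assert lk_norm_of_flow(jobs, 1) == 6.0               # total flow time
assert abs(lk_norm_of_flow(jobs, 2) - 14 ** 0.5) < 1e-9  # sqrt(4 + 9 + 1)
```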
Improved results for scheduling batched parallel jobs by using a generalized analysis framework
 Journal of Parallel and Distributed Computing
Abstract

Cited by 2 (2 self)
Abstract: We present two improved results for scheduling batched parallel jobs on multiprocessors with mean response time as the performance metric. These results are obtained by using a generalized analysis framework where the response time of the jobs is expressed in terms of two contributing factors that directly impact a scheduler’s competitive ratio. Specifically, we show that the scheduler IGDEQ is 3-competitive against the optimal while AGDEQ is 5.24-competitive. These results improve the known competitive ratios of 4 and 10, obtained by Deng et al. and by He et al., respectively. For the common case where no fractional allotments are allowed, we show that slightly larger competitive ratios can be obtained by augmenting the schedulers with the round-robin strategy. Keywords: Multiprocessor scheduling, batched parallel jobs, mean response time.
Every Deterministic Nonclairvoyant Scheduler has a Suboptimal Load Threshold
Abstract

Cited by 1 (0 self)
We prove a surprising lower bound for resource-augmented nonclairvoyant algorithms for scheduling jobs with sublinear nondecreasing speedup curves on multiple processors with the objective of average response time. Edmonds in STOC '99 shows that the algorithm Equipartition is a (2+ε)-speed Θ(1/ε)-competitive algorithm. We define its speed threshold to be 2 because it is constant-competitive when given speed 2+ε but not when given speed 2. (Its load threshold is the inverse of its speed threshold.) The optimal speed threshold is 1 because then the algorithm is constant-competitive no matter how little extra resource it is given. Edmonds and Pruhs in SODA '09 imply that they have found such an algorithm. (They use the term scalable.) We, however, rebut that their algorithm only accomplishes this nondeterministically. They prove that for every ε > 0, there is an algorithm Alg_ε that is (1+ε)-speed O(1/ε²)-competitive. A problem, however, is that this algorithm Alg_ε depends on ε. Hence, to have one algorithm it would have to run Alg_ε after nondeterministically guessing the correct ε. We prove that, like Equipartition,…
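The Equipartition algorithm referenced in the abstract above has a one-line allocation rule: at each instant, split the m processors evenly among the n jobs currently alive. A minimal sketch of that rule alone (function and job names are illustrative); per the abstract, this simple policy is constant-competitive with speed 2+ε but not with speed 2.

```python
def equipartition(m, active_jobs):
    """Equipartition: split the m processors evenly among the currently
    active jobs, giving each job m/n processors (fractional allotments
    allowed). Returns a job -> processor-share mapping."""
    n = len(active_jobs)
    if n == 0:
        return {}
    return {job: m / n for job in active_jobs}

alloc = equipartition(8, ["a", "b", "c", "d"])
assert alloc == {"a": 2.0, "b": 2.0, "c": 2.0, "d": 2.0}
```

The appeal of the rule is that it is nonclairvoyant: it needs neither job sizes nor speedup curves, only the set of jobs still alive.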
Open problem: Nonclairvoyant with precedence constraints: Towards
, 2010
Abstract
a measure of the worst-case degree of parallelism within a precedence-constraint DAG structure