Results 1–10 of 19
Online Scheduling
, 2003
Cited by 62 (5 self)
In this chapter, we summarize research efforts on several different problems that fall under the rubric of online scheduling. In online scheduling, the scheduler receives jobs that arrive over time, and generally must schedule the jobs without any knowledge of the future. This lack of knowledge generally precludes the scheduler from guaranteeing optimal schedules, so much research has focused on finding scheduling algorithms that guarantee schedules that are in some way not too far from optimal. We focus on problems that arise within the ubiquitous client-server setting. In a client-server system, there are many clients and one server (or perhaps a few servers). Clients submit requests for service to the server(s) over time. In the language of scheduling, a server is a processor, and a request is a job. Applications that motivate the research we survey include multi-user operating systems such as Unix and Windows, web servers, database servers, name servers, and load...
Speed Scaling Functions for Flow Time Scheduling based on Active Job Count
Cited by 47 (12 self)
We study online scheduling to minimize flow time plus energy usage in the dynamic speed scaling model. We devise new speed scaling functions that depend on the number of active jobs, replacing the existing speed scaling functions in the literature that depend on the remaining work of active jobs. The new speed functions are more stable and also more efficient. They can support better job selection strategies to improve the competitive ratios of existing algorithms [5,8] and, more importantly, to remove the requirement of extra speed. These functions further distinguish themselves from others in that they can readily be used in the non-clairvoyant model (where the size of a job is only known when the job finishes). As a first step, we study the scheduling of batched jobs (i.e., jobs with the same release time) in the non-clairvoyant model and present the first competitive algorithm for minimizing flow time plus energy (as well as weighted flow time plus energy); the performance is close to optimal.
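As a hedged illustration of the idea in this abstract (not the paper's exact rule), a speed scaling function that depends only on the active job count can be sketched as follows; the exponent 1/α and the name `count_based_speed` are assumptions for illustration:

```python
def count_based_speed(active_jobs: int, alpha: float = 3.0) -> float:
    """Set the processor speed from the number of active jobs only.

    With power consumed at rate s**alpha, running at n**(1/alpha)
    makes the power cost (n) grow at the same rate as the flow-time
    accumulation (n active jobs per unit time), the kind of balance
    count-based rules aim for. Illustrative sketch only.
    """
    return active_jobs ** (1.0 / alpha)
```

Note that the speed depends only on the job count and never on remaining work, which is why a rule of this shape is usable in the non-clairvoyant model.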
Scheduling for speed bounded processors
 In Proc. ICALP
, 2008
Cited by 39 (12 self)
We consider online scheduling algorithms in the dynamic speed scaling model, where a processor can scale its speed between 0 and some maximum speed T. The processor uses energy at rate s^α when run at speed s, where α > 1 is a constant. Most modern processors use dynamic speed scaling to manage their energy usage. This leads to the problem of designing execution strategies that are both energy efficient and yet have almost optimum performance. We consider two problems in this model and give essentially optimal algorithms for them. In the first problem, jobs with arbitrary sizes and deadlines arrive online and the goal is to maximize the throughput, i.e., the total size of jobs completed successfully. We give an algorithm that is 4-competitive for throughput and O(1)-competitive for the energy used. This improves upon the 14-competitive (for throughput) algorithm of Chan et al. [10]. Our throughput guarantee is optimal, as any online algorithm must be at least 4-competitive even if the energy concern is ignored [7]. In the second problem, we consider optimizing the tradeoff between the total flow time incurred and the energy consumed by the jobs. We give a 4-competitive algorithm to minimize total flow time plus energy for unweighted unit-size jobs, and a (2 + o(1))α/ln α-competitive algorithm to minimize fractional weighted flow time plus energy. Prior to our work, these guarantees were known only when the processor speed was unbounded (T = ∞) [4].
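A minimal sketch of the bounded-speed constraint described in this abstract; the cap-then-scale form and the count-based base rule are assumptions for illustration, not the paper's algorithm:

```python
def capped_speed(active_jobs: int, T: float, alpha: float = 3.0) -> float:
    # Speed scaling with a hard maximum speed T: run at the
    # count-based speed n**(1/alpha) whenever that is feasible,
    # and at the maximum speed T otherwise. Illustrative sketch only.
    return min(T, active_jobs ** (1.0 / alpha))
```

The interesting regime is exactly when the cap binds: with unbounded speed (T = ∞) the processor can always outrun the backlog, while a finite T forces the scheduler to decide which jobs to serve or drop.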
Server Scheduling in the L_p Norm: A Rising Tide Lifts All Boat (Extended Abstract)
, 2003
Cited by 24 (5 self)
Often server systems do not implement the best known algorithms for optimizing average Quality of Service (QoS) out of concern that these algorithms may be insufficiently fair to individual jobs. The standard method for balancing average QoS and fairness is to optimize the L_p metric, 1 < p < ∞. Thus we consider server scheduling strategies to optimize the L_p norms of the standard QoS measures, flow and stretch. We first show that there is no n^{o(1)}-competitive online algorithm for the L_p norms of either flow or stretch. We then show that the standard clairvoyant algorithms for optimizing average QoS, SJF and SRPT, are O(1+ε)-speed O(1/ε)-competitive for the L_p norms of flow and stretch, and that the standard non-clairvoyant algorithm for optimizing average QoS, SETF, is O(1+ε)-speed O(1/ε)-competitive for the L_p norms of flow. These results argue that these standard algorithms will not starve jobs until the system is near peak capacity. In contrast, we show that the Round Robin, or Processor Sharing, algorithm, which is sometimes adopted because of its seeming fairness properties, is not O(1+ε)-speed n^{o(1)}-competitive for sufficiently small ε.
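To make the objective concrete: the L_p norm of per-job flow times interpolates between total flow time (p = 1) and the worst-off job (p → ∞). A small sketch, with `lp_norm` a hypothetical helper name:

```python
def lp_norm(flow_times, p):
    # L_p norm of the per-job flow times: p = 1 recovers total flow
    # time (average QoS), while larger p weighs the worst-off jobs
    # more heavily -- the fairness knob discussed in the abstract.
    return sum(f ** p for f in flow_times) ** (1.0 / p)
```

For example, flow times of 3 and 4 give an L_1 norm of 7 but an L_2 norm of 5, so raising p shifts weight toward the job that waited longest.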
Greedy Multiprocessor Server Scheduling
Cited by 4 (1 self)
We show that the greedy Highest Density First (HDF) algorithm is (1+ε)-speed O(1)-competitive for the problem of minimizing the ℓ_p norms of weighted flow time on m identical machines. Similar results for minimizing unweighted flow provide insight into the power of migration.
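A minimal sketch of the HDF selection rule on a single machine; the job representation and function name are assumptions for illustration:

```python
def highest_density_first(jobs):
    # jobs: non-empty list of dicts with 'weight' and 'size'
    # (original processing time). HDF always runs the alive job with
    # the largest density weight/size; the greedy multiprocessor
    # variant applies the same rule machine by machine.
    return max(jobs, key=lambda j: j['weight'] / j['size'])
```

So a short, heavily weighted job (high density) preempts a long, lightly weighted one, which is what drives the weighted flow time guarantees above.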
Extra unit-speed machines are almost as powerful as speedy machines for competitive flow time scheduling
 In Proc. 17th Symp. on Discrete Algorithms
, 2006
Cited by 2 (0 self)
We study online scheduling of jobs to minimize the flow time and stretch on parallel machines. We consider algorithms that are given extra resources so as to compensate for the lack of future information. Recent results show that a modest increase in machine speed can provide very competitive performance; in particular, using O(1) times faster machines, the algorithm SRPT (shortest remaining processing time) is 1-competitive for both flow time [23] and stretch [12], and HDF (highest density first) is O(1)-competitive for weighted flow time [6]. Using extra unit-speed machines instead of faster machines is more challenging. This paper gives a nontrivial relationship between the extra-speed and extra-machine analysis. It shows that competitive results via faster machines can be transformed into similar results via extra machines, hence giving the first algorithms that, using O(1) times as many unit-speed machines, are 1-competitive for flow time and stretch and O(1)-competitive for weighted flow time, respectively.
Scheduling on a single machine to minimize total flow time with job rejections
 In Proc. 2nd Multidisciplinary Intern. Conf. on Scheduling: Theory and Applications
, 2005
Cited by 1 (1 self)
We consider the problem of minimizing flow time on a single machine supporting preemption that can reject jobs at a cost. Even if all jobs have the same rejection cost, we show that no online algorithm can have competitive ratio better than (2 + √2)/2 ≈ 1.707 in general, or e/(e − 1) ≈ 1.582 if all jobs are known to have the same processing time. We also give an optimal offline algorithm for unit-length jobs with arbitrary rejection costs. This leads to a pair of 2-competitive online algorithms for unit-length jobs, one when all rejection costs are equal and one when they are arbitrary. Finally, we show that the offline problem is NP-hard even when each job's rejection cost is proportional to its processing time.
How Unsplittable-Flow-Covering Helps Scheduling with Job-Dependent Cost Functions
Cited by 1 (0 self)
Generalizing many well-known and natural scheduling problems, scheduling with job-specific cost functions has gained a lot of attention recently. In this setting, each job incurs a cost depending on its completion time, given by a private cost function, and one seeks to schedule the jobs to minimize the total sum of these costs. The framework captures many important scheduling objectives such as weighted flow time and weighted tardiness. Still, the general case as well as the mentioned special cases are far from well understood, even on a single machine. Aiming for a better general understanding of this problem, in this paper we focus on the case of uniform job release dates on one machine, for which the state of the art is a 4-approximation algorithm. This is true even for a special case that is equivalent to the covering version of the well-studied and prominent unsplittable flow on a path problem, which is interesting in its own right. For that covering problem, we present a quasi-polynomial time (1 + ε)-approximation algorithm that yields an (e + ε)-approximation for the above scheduling problem. Moreover, for the latter we devise the best possible resource augmentation result regarding speed: a polynomial time algorithm which computes a solution with optimal cost at 1 + ε speedup. Finally, we present an elegant QPTAS for the special case where the cost functions of the jobs fall into at most log n many classes. This algorithm allows the jobs to have up to log n many distinct release dates. All proposed quasi-polynomial time algorithms require the input data to be quasi-polynomially bounded.
Shortest-Elapsed-Time-First on a Multiprocessor
Cited by 1 (1 self)
“I would like to call it a corollary of Moore’s Law that the number of cores will double every 18 months.” — Anant Agarwal, founder and chief technology officer of MIT startup Tilera

We show that SETF, the idealized version of the uniprocessor scheduling algorithm used by Unix, is scalable for the objective of fractional flow on a homogeneous multiprocessor. We also give a potential function analysis for the objective of weighted fractional flow on a uniprocessor.
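A hedged sketch of the SETF rule itself (job representation assumed for illustration): SETF is non-clairvoyant, so it selects by elapsed processing time rather than by remaining work:

```python
def setf_pick(jobs):
    # jobs: non-empty list of (job_id, elapsed_time) pairs. SETF runs
    # the job that has received the least processing so far; in the
    # idealized version ties are shared round-robin style, while this
    # sketch simply breaks them arbitrarily via min().
    return min(jobs, key=lambda j: j[1])[0]
```

Selecting by elapsed time favors newly arrived jobs, which is how SETF approximates short jobs first without knowing any job sizes in advance.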