## Improving Dynamic Voltage Scaling Algorithms with PACE (2001)

### Download Links

- [research.microsoft.com]
- [cse.unl.edu]
- [www-2.cs.cmu.edu]
- [www.eecs.harvard.edu]
- [ifrit.cs.berkeley.edu]
- DBLP

### Other Repositories/Bibliography

Citations: 155 (2 self)

### BibTeX

```bibtex
@MISC{Lorch01improvingdynamic,
  author = {Jacob R. Lorch and Alan Jay Smith},
  title = {Improving Dynamic Voltage Scaling Algorithms with PACE},
  year = {2001}
}
```

### Abstract

This paper addresses algorithms for dynamically varying (scaling) CPU speed and voltage in order to save energy. Such scaling is useful and effective when it is immaterial when a task completes, as long as it meets some deadline. We show how to modify any scaling algorithm to keep performance the same but minimize expected energy consumption. We refer to our approach as PACE (Processor Acceleration to Conserve Energy) since the resulting schedule increases speed as the task progresses. Since PACE depends on the probability distribution of the task's work requirement, we present methods for estimating this distribution and evaluate these methods on a variety of real workloads. We also show how to approximate the optimal schedule with one that changes speed a limited number of times. Using PACE causes very little additional overhead, and yields substantial reductions in CPU energy consumption. Simulations using real workloads show it reduces the CPU energy consumption of previously published algorithms by up to 49.5%, with an average of 20.6%, without any effect on performance.

### Citations

3228 | Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment
- Liu, Layland
- 1973
Citation Context: ...archers have studied CPU scheduling for decades. One important result is that if a set of tasks has feasible deadlines, scheduling them in increasing deadline order will always meet all the deadlines [11]. Another useful result, described by Błażewicz et al. [2, pp. 346-350], is that when the rate of consumption of some resource is a convex function of CPU speed, an ideal schedule will run each task...
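
The earliest-deadline-first (EDF) result quoted above lends itself to a short check. Below is a minimal sketch, not the paper's code; the task list and the unit-speed assumption are illustrative. Sorting by deadline and simulating completion times tells us whether any schedule can meet every deadline.

```python
# Sketch of the EDF feasibility result: if any schedule meets all the
# deadlines, running tasks in increasing deadline order does too.

def edf_feasible(tasks):
    """tasks: list of (work, deadline) pairs; the CPU runs at unit speed.
    Returns True iff EDF (and hence some schedule) meets every deadline."""
    t = 0.0
    for work, deadline in sorted(tasks, key=lambda task: task[1]):
        t += work                 # finish time of this task under EDF
        if t > deadline:
            return False
    return True

print(edf_feasible([(2, 5), (1, 2), (3, 10)]))  # True
print(edf_feasible([(4, 3), (1, 2)]))           # False
```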

2630 | Density Estimation for Statistics and Data Analysis
- Silverman
- 1986
Citation Context: ...used near or after the deadline, and most tasks will complete before then. Kernel density estimation. The nonparametric method we consider is kernel density estimation, a popular nonparametric method [22]. This method builds up a distribution by adding up several little distributions, each centered on one of the sample points. The kernel function, K, determines the shape of these little distributions...
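
The construction described in this excerpt can be sketched in a few lines. This is a generic Gaussian-kernel estimate, not the paper's implementation; the bandwidth rule is Silverman's rule of thumb from the cited book, and the sample data are illustrative.

```python
import math

# A minimal Gaussian kernel density estimate: the density is a sum of
# small bumps, one centered on each sample point.

def kde(samples, h=None):
    n = len(samples)
    if h is None:
        mean = sum(samples) / n
        sd = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
        h = 1.06 * sd * n ** -0.2          # Silverman's rule of thumb
    def density(x):
        return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                   for xi in samples) / (n * h * math.sqrt(2 * math.pi))
    return density

f = kde([1.0, 2.0, 2.5, 3.0])
# A crude Riemann sum over a wide interval confirms it integrates to ~1.
total = sum(f(-5 + 0.01 * i) * 0.01 for i in range(1500))
print(round(total, 2))  # ≈ 1.0
```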

2188 | Numerical Recipes in C: The Art of Scientific Computing
- Press, Flannery, et al.
- 1992
Citation Context: ...son and Kotz [9, p. 176]. It estimates a quantile using ŝα̂(U_q/(3√α̂) + 1 − 1/(9α̂))³, where U_q is the relevant quantile of the normal distribution. When needed, we can compute CDF values using methods in [18], but we avoid those methods when possible since they are computationally expensive. Normal. The second method we consider is the parametric method assuming a normal distribution. This assumption may ...
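
The formula in this excerpt did not survive text extraction; it appears to be the Wilson-Hilferty quantile approximation for the gamma distribution, sketched below under that assumption. The parameters and test values are illustrative.

```python
import math
from statistics import NormalDist

# Wilson-Hilferty quantile approximation for a gamma distribution with
# shape alpha and scale s:
#   x_q ≈ s * alpha * (U_q / (3*sqrt(alpha)) + 1 - 1/(9*alpha)) ** 3,
# where U_q is the q-quantile of the standard normal distribution.

def gamma_quantile(q, alpha, scale):
    u = NormalDist().inv_cdf(q)
    return scale * alpha * (u / (3 * math.sqrt(alpha)) + 1 - 1 / (9 * alpha)) ** 3

# Median of an exponential (gamma with shape 1, scale 1) is ln 2 ≈ 0.693;
# the approximation gives about 0.702, and improves as the shape grows.
print(round(gamma_quantile(0.5, 1.0, 1.0), 3))  # 0.702
```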

1616 | Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th edition)
- Shneiderman, Plaisant
Citation Context: ...ll a deadline soft when a task should, but does not have to, complete by this time.) For example, user interface studies have shown that response times under 50-100 ms do not affect user think time [21]; we can thus make 50 ms the deadline for handling a user interface event. Also, multimedia operations with limited buffering, e.g. on real-time streams, need to complete processing a frame in time eq...

715 | The Art of Computer Systems Performance Analysis: Techniques for Experimental Design
- Jain
- 1991
Citation Context: ...t has two parameters: the shape α and the scale s. The probability density function is p(x) = x^(α−1) e^(−x/s) / (s^α Γ(α)). Reasonable estimators for the model parameters are α̂ = μ̂²/σ̂² and ŝ = σ̂²/μ̂ [8]. Maximum likelihood estimators also exist, but we do not use them, since (a) we cannot compute them precisely or easily, and (b) we have found that they generally do not work as well for our purposes...
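
The estimators in this excerpt (with the garbled symbols restored) are the method-of-moments fit: a gamma model with shape α and scale s has mean αs and variance αs², so matching sample moments gives α̂ = μ̂²/σ̂² and ŝ = σ̂²/μ̂. A minimal sketch with illustrative data:

```python
# Method-of-moments fit for a gamma model:
#   alpha_hat = mean**2 / var,   s_hat = var / mean.

def fit_gamma_moments(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n   # population variance
    return mean ** 2 / var, var / mean                # (alpha_hat, s_hat)

# With mean 6 and variance 8/3, the fit gives alpha_hat = 13.5, s_hat = 4/9.
alpha_hat, s_hat = fit_gamma_moments([4.0, 6.0, 8.0])
print(round(alpha_hat, 3), round(s_hat, 3))  # 13.5 0.444
```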

504 | Scheduling for reduced CPU energy
- Weiser, Welch, et al.
- 1994
Citation Context: ...see, e.g., [20]. For this reason, most research on scheduling for DVS has focused on heuristics for estimating CPU requirements and attempting to keep CPU speed as constant as possible. Weiser et al. [23] recommended interval-based algorithms for DVS. These divide time into fixed-length intervals and set each interval's speed so that most work is completed by the interval's end. Chan et al. [4] refin...
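
The interval-based scheme described here is easy to sketch. The available speed levels and the predict-as-last-observed rule (in the spirit of Weiser et al.'s PAST predictor) are illustrative choices, not the paper's exact algorithm.

```python
# Interval-based DVS sketch: predict each interval's CPU demand from the
# previous interval's observed utilization, then pick the lowest speed
# that covers the predicted demand.

SPEEDS = [0.25, 0.5, 0.75, 1.0]   # available relative CPU speeds

def pick_speed(predicted_utilization):
    """Lowest available speed that covers the predicted demand."""
    for s in SPEEDS:
        if s >= predicted_utilization:
            return s
    return SPEEDS[-1]

def schedule(utilizations):
    """Predict each interval's demand as the previous interval's observed
    utilization (measured at full speed); start conservatively at full speed."""
    speeds, predicted = [], 1.0
    for observed in utilizations:
        speeds.append(pick_speed(predicted))
        predicted = observed
    return speeds

print(schedule([0.9, 0.3, 0.3, 0.7]))  # [1.0, 1.0, 0.5, 0.5]
```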

471 | Low-power CMOS digital design
- Chandrakasan, Sheng, et al.
- 1992
Citation Context: ...ble at a certain voltage is roughly proportional to that voltage (s ∝ V). A more accurate formula is s = k(V − V_th)²/V, where k is some constant of proportionality and V_th is the threshold voltage [5]. So, instead of E ∝ f², as we were assuming in our proof of optimality, we have E ∝ (V_th + f/(2k) + √(V_th f/k + f²/(4k²)))². This formula is complicated, but we can generally approximate it with ...
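
Inverting the quoted speed-voltage relation makes the energy claim concrete. Below is a sketch with illustrative constants (k = 1, V_th = 0.5), taking per-cycle energy as proportional to V²; the reconstructed formula for V follows from solving the quadratic s = k(V − V_th)²/V.

```python
import math

# Solve s = k*(V - v_th)**2 / V for the larger root V, then square it
# to get per-cycle energy up to a constant factor.

def voltage_for_speed(s, k=1.0, v_th=0.5):
    return v_th + s / (2 * k) + math.sqrt(v_th * s / k + (s / (2 * k)) ** 2)

def energy_per_cycle(s, k=1.0, v_th=0.5):
    return voltage_for_speed(s, k, v_th) ** 2   # E ∝ V**2

# Per-cycle energy grows faster than linearly in speed, which is why
# running slower (when deadlines permit) saves energy overall.
e1, e2 = energy_per_cycle(1.0), energy_per_cycle(2.0)
print(e2 / e1 > 2)  # True: doubling speed more than doubles per-cycle energy
```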

445 | A Scheduling Model for Reduced CPU Energy
- Yao, Shenker
- 1995
Citation Context: ...d by Błażewicz et al. [2, pp. 346-350], is that when the rate of consumption of some resource is a convex function of CPU speed, an ideal schedule will run each task at a constant speed. Yao et al. [24] observe that with DVS, power consumption is a convex function of CPU speed. They show how to compute an optimal speed-setting policy by constructing an earliest-deadline-first schedule, and then choo...

286 | Energy-aware adaptation for mobile applications
- Flinn, Satyanarayanan
- 1999
Citation Context: ...(Table 2: Animations used in the MPEG workloads) ...is interactive, so we use a 50 ms deadline for each task. 6.4 Video playback. Multimedia applications are becoming more common on portable computers [6]. Therefore, we include a movie player as one of our workloads. We use the MPEG player included with the Berkeley MPEG Tools developed by the Berkeley Multimedia Research Center (BMRC) [16]. Since the...

281 | Comparing algorithms for dynamic speed-setting of a low-power CPU
- Govil, Chan, et al.
- 1995
Citation Context: ...al. [23] recommended interval-based algorithms for DVS. These divide time into fixed-length intervals and set each interval's speed so that most work is completed by the interval's end. Chan et al. [4] refined these ideas by separating out an algorithm's two parts: prediction and speed-setting. When an interval begins, the prediction part predicts how busy the CPU will be during the interval (i.e.,...

281 | The Simulation and Evaluation of Dynamic Voltage Scaling Algorithms
- Pering, et al.
- 1998
Citation Context: ...tting part uses this information to set the speed. They measure how busy the CPU is via the utilization, the fraction of the interval the CPU spends non-idle. Several authors, including Pering et al. [17] and Grunwald et al. [7], have shown that Weiser et al. and Chan et al.'s algorithms are impractical because they require knowledge of the future. However, they have proposed practical versions of the...

154 | Policies for dynamic clock scheduling
- Grunwald, Levis, et al.
- 2000
Citation Context: ...rmation to set the speed. They measure how busy the CPU is via the utilization, the fraction of the interval the CPU spends non-idle. Several authors, including Pering et al. [17] and Grunwald et al. [7], have shown that Weiser et al. and Chan et al.'s algorithms are impractical because they require knowledge of the future. However, they have proposed practical versions of these algorithms. Predictio...

118 | Design Issues for Dynamic Voltage Scaling
- Burd, Brodersen
- 2000
Citation Context: ...head of changing speed and voltage. In this paper, we have assumed that CPU speed and voltage transitions consume no time or energy. However, in reality, this is not the case. According to Burd et al. [3], changing between two levels takes time roughly proportional to the voltage differential and energy roughly proportional to the difference between the squares of the voltages. So, if a schedule only ...

102 | Performance of a Software MPEG Video Decoder
- Patel, Smith, et al.
- 1993
Citation Context: ...e computers [6]. Therefore, we include a movie player as one of our workloads. We use the MPEG player included with the Berkeley MPEG Tools developed by the Berkeley Multimedia Research Center (BMRC) [16]. Since they provide full source code for their tool, we were easily able to instrument it to measure and output the CPU time taken for each frame. Thus, each task of the workload represents the proce...

80 | Soft timers: Efficient microsecond software timer support for network processing
- Aron, Druschel
- 2000
Citation Context: ...ntervals to change the CPU speed. A CPU cycle counter or clock timer could generate such interrupts. Alternately, software could use soft timers, an operating system facility suggested by Aron et al. [1] that lets one schedule events for the next time one can be performed cheaply, such as when a system call begins or a hardware interrupt occurs. This could only work if these events occur sufficiently...

76 | Scheduling Computer and Manufacturing Processes (3rd edition)
- Błażewicz, Ecker, et al.
- 1996

42 | Smooth Tests of Goodness of Fit
- Rayner, Best
- 1989
Citation Context: ...of data, one can use that model to estimate the CDF at each data point, and test whether the set of CDF values is distributed uniformly over the interval (0, 1). For this uniformity test, Rayner and Best [19] recommend Neyman's order-4 smooth test. Applying this test to any of our workloads, using any of our sampling methods, the test reveals an extremely small probability that the data fit either the normal or gamm...
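
The uniformity test described in this excerpt can be sketched directly, assuming the garbled test name is Neyman's order-4 smooth test: transform each data point through the model CDF and test the transformed values for uniformity on (0, 1). The statistic is asymptotically chi-squared with 4 degrees of freedom; 9.488 is its 5% critical value. The sample data are illustrative.

```python
import math

def legendre(j, x):
    """Legendre polynomial P_j(x) for j = 0..4."""
    return [1.0, x, (3 * x**2 - 1) / 2, (5 * x**3 - 3 * x) / 2,
            (35 * x**4 - 30 * x**2 + 3) / 8][j]

def neyman_statistic(u):
    """Order-4 smooth test statistic for u: model-CDF values in (0, 1),
    built from orthonormal shifted Legendre polynomials sqrt(2j+1)*P_j(2u-1)."""
    n = len(u)
    stat = 0.0
    for j in range(1, 5):
        v = sum(math.sqrt(2 * j + 1) * legendre(j, 2 * ui - 1) for ui in u)
        stat += (v / math.sqrt(n)) ** 2
    return stat

# Uniform-looking CDF values keep the statistic small; clustered values
# (a badly fitting model) push it far past the 9.488 threshold.
uniform_like = [(i + 0.5) / 100 for i in range(100)]
clustered = [0.9 + 0.001 * i for i in range(100)]
print(neyman_statistic(uniform_like) < 9.488)  # True
print(neyman_statistic(clustered) > 9.488)     # True
```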

25 | The Impact of Battery Capacity and Memory Bandwidth on CPU Speed-setting: A Case Study
- Martin, Siewiorek
- 1999
Citation Context: ...rve is better approximated by a linear curve. We believe that most of our results will still hold. Memory effects can also produce a nonlinear speed-voltage relationship, as observed by Martin et al. [15]. When memory speed does not scale precisely with CPU speed, the work completion rate may be nonlinearly related to voltage for some voltage ranges. In future work, we should explore the effect of thi...

21 | The technology behind Crusoe™ processors
- Klaiber
- 2000
Citation Context: ...any given task by a certain time; it is best to simply measure the average number of non-idle cycles per second and run the CPU at that speed. (Transmeta's LongRun™ system does something like this [10].) Pering et al., recognizing this, suggested considering deadlines when evaluating DVS algorithms [17]. To do so, they suggest considering a task that completes before its deadline to effectively com...

20 | The VTrace tool: Building a system tracer for Windows NT and Windows 2000
- Lorch, Smith
- 2000
Citation Context: ...e algorithms using six workloads. We derived most workloads from traces of users performing their normal business on desktop machines running Windows NT or Windows 2000. VTrace, a tracer described in [13], generated these traces. The traces contain timestamped records describing events related to processes, threads, messages, disk operations, network operations, the keyboard, and the mouse. We deduce ...

15 | Energy consumption of Apple Macintosh computers
- Lorch, Smith
- 1998
Citation Context: ...actors limit the utility of trading performance for energy savings. First, a user wants the performance for which he paid. Second, other components, such as the disk and backlight, also consume power [12]. If they stay on longer because the CPU runs more slowly, the overall effect can be worse performance and increased energy consumption. Thus, one should reduce the voltage only when it will not notic...

15 | PACE: A New Approach to Dynamic Voltage Scaling
- Lorch, Smith
- 2004
Citation Context: ...stant as possible. We achieve this by making s(w) be the valid speed closest to C·[F^c(w)]^(−1/3), where C is a constant chosen to satisfy the deadline constraint. For a full proof that this works, see [14]. Since F^c(w) decreases as w increases, this schedule speeds up the CPU as the task progresses, as noted earlier. Given any scheduling algorithm, it is worthwhile to replace its pre-deadline part wi...
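
The formula quoted here is the heart of PACE: run at a speed proportional to [F^c(w)]^(−1/3), where F^c(w) is the probability the task needs more than w units of work, with the constant chosen so the deadline is still met in the worst case. A discretized sketch; the unit-work granularity and the example distribution are illustrative simplifications, not the paper's continuous schedule.

```python
# PACE-style speed schedule: s(w) = C * Fc(w) ** (-1/3), with C chosen so
# that a worst-case task (all W units of work) finishes at the deadline.

def pace_schedule(tail, deadline):
    """tail[w]: P(task needs more than w units of work), for w = 0..W-1.
    Returns per-unit speeds such that W units finish exactly at the deadline."""
    # Time for unit w at speed C*tail[w]**(-1/3) is tail[w]**(1/3) / C;
    # summing over w and equating to the deadline fixes C.
    c = sum(f ** (1 / 3) for f in tail) / deadline
    return [c * f ** (-1 / 3) for f in tail]

# Example: a task needs 1-4 work units, each with probability 1/4.
tail = [1.0, 0.75, 0.5, 0.25]       # P(work > w) for w = 0, 1, 2, 3
speeds = pace_schedule(tail, deadline=4.0)
print(all(b > a for a, b in zip(speeds, speeds[1:])))  # True: speed increases
print(round(sum(1 / s for s in speeds), 6))            # 4.0: deadline met exactly
```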


10 | Trace-Driven Modeling and Analysis of CPU Scheduling in a Multiprogramming System
- Sherman, Baskett, et al.
- 1972
Citation Context: ...adlines. However, one can only compute such optimal schedules if the tasks' CPU requirements are known in advance, and task requirements in most systems are unpredictable random variables; see, e.g., [20]. For this reason, most research on scheduling for DVS has focused on heuristics for estimating CPU requirements and attempting to keep CPU speed as constant as possible. Weiser et al. [23] recommende...

6 | Continuous Univariate Distributions I: Distributions in Statistics
- Johnson, Kotz
- 1970