### Table 1: MapReduce jobs run in August 2004

"... In PAGE 10: ... At the end of each job, the MapReduce library logs statistics about the computational resources used by the job. In Table 1, we show some statistics for a subset of MapReduce jobs run at Google in August 2004.... ..."

### Table 1: Partial and final results of the MapReduce operation

2006

### Table I. MapReduce Statistics for Different Months.

### ... schedulable by any other priority assignment, it is also schedulable by the deadline driven algorithm. [?]

3.4 A Mixed Scheduling Algorithm. This mixed scheduling algorithm is a combination of the rate monotonic and deadline driven scheduling algorithms. Let there be m tasks. Let tasks 1, 2, ..., k, the k tasks with the shortest periods, be scheduled according to the rate monotonic scheduling algorithm, and let the remaining tasks k+1, k+2, ..., m be scheduled according to the deadline driven scheduling algorithm. Let f(t) be a nondecreasing function of t. f(t) is called sublinear if for all t and T, f(T) ≤ f(t + T) - f(t). The availability function of a processor for a set of tasks is defined as the accumulated processor time from 0 to t available to this set of tasks. Let a_k(t) denote the availability of the processor for tasks k+1, k+2, ..., m, which is sublinear. A necessary and sufficient condition for the feasibility of the deadline driven scheduling algorithm with respect to a processor with a sublinear availability function a_k(t) is
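The mixed scheme described in this snippet can be illustrated with a small discrete-time simulation. This is a hypothetical sketch, not code from the cited work: the function name, task representation, and task sets below are invented for illustration. Tasks with the k shortest periods get fixed rate monotonic priorities; the remaining tasks share the leftover processor time under a deadline driven (earliest-deadline-first) rule, with each task's deadline equal to its period.

```python
# Hypothetical sketch of the mixed scheduling idea: tasks 1..k (shortest
# periods) run under rate monotonic fixed priorities; the remaining tasks
# run deadline driven (EDF) in the processor time left over.
# Task model and names are illustrative, not from the cited paper.

def simulate_mixed(tasks, k, horizon):
    """tasks: list of (period, wcet) integers; deadlines equal periods.
    Returns True if no task instance misses its deadline over the horizon."""
    tasks = sorted(tasks)                      # shortest period first
    rem = [0] * len(tasks)                     # remaining execution time
    dl = [p for p, _ in tasks]                 # absolute deadline per instance
    for t in range(horizon):
        # release new instances; an unfinished previous instance is a miss
        for i, (p, c) in enumerate(tasks):
            if t % p == 0:
                if rem[i] > 0:
                    return False
                rem[i] = c
                dl[i] = t + p
        # rate monotonic tasks have priority over the deadline driven set
        run = next((i for i in range(k) if rem[i] > 0), None)
        if run is None:                        # leftover slot goes to EDF tasks
            pending = [i for i in range(k, len(tasks)) if rem[i] > 0]
            run = min(pending, key=lambda i: dl[i], default=None)
        if run is not None:
            rem[run] -= 1                      # execute one time unit
    return all(r == 0 for r in rem)
```

For example, a feasible set such as `[(4, 1), (5, 1), (10, 2)]` with `k=2` completes within its hyperperiod of 20, while an overloaded set (utilization above 1) is reported as missing a deadline.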

### Table 1 summarizes the parameters and their default values in the configuration of Mars. These parameters are similar to those in the existing CPU-based MapReduce framework [23]. All these parameter values can be specified by the developer.

"... In PAGE 5: ... This ensures that the output of the Reduce stage is sorted by the key. Table 1. The configuration parameters of Mars.... ..."

### Table 3. Taskset with tight deadlines.

"... In PAGE 5: ... Compared to the energy consumption of the taskset with all devices in the powered-up state at all times (1125 units), this results in energy savings of almost 50%. Table 3 is an example of a taskset that is more I/O-intensive. Figures 7 and 8 show the corresponding task and device schedules.... ..."

### Table 2. Complexity of the Scheduling Algorithm.

2005

"... In PAGE 3: ... The last one is the RLS filter [8], which is the only benchmark using DIV operations. The complexity of the scheduling algorithm is summarized in Table 2, where n is the number of tasks and m denotes the number of dedicated processors (arithmetic units). The column "size" denotes the number of ILP variables/constraints.... In PAGE 3: ... The time required to compute the optimum, given as the sum of the iterative calls of the ILP solver, is shown in the column "CPU time". As follows from Table 2, optimal solutions for all benchmarks were found by the GLPK solver in a reasonable... ..."

Cited by 1

### Table 1. Energy savings of Scheduling2D over baseline (columns: task graph, # of tasks, # of edges, 70 nm, 50 nm, 35 nm)

2007

"... In PAGE 11: ... Note that the baseline algorithm takes T as its deadline constraint because T < D. Results: Table 1 shows the energy savings for all experiments. (Footnote 3: In some of the experiments, the convex program solver (called ipopt) that we used could not produce a solution due to its iteration limit. When that happened, we simply used S-SPM [13] in its place.)... In PAGE 11: ... This is because high static power will force both algorithms to use fewer processor cores, and thus the optimization space is reduced. The average energy savings in Table 1 are conservative in the sense that their values are lowered by some cases that cannot be optimized by Scheduling2D. We analyzed the experimental results and observed that for a given task graph, as the period increases, the energy saving decreases.... ..."

Cited by 1