### Table 4: Priority Scheduling Structure.

"... In PAGE 32: ... events. CPU reservations are organized in rounds. A round (or dispatch table) contains a number of time slots which are assigned to hard and soft real-time tasks. The framework has been realized by a two-level scheduling model implemented in Windows CE, as shown in Table 4. ... ..."
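The round/dispatch-table idea above can be sketched as a top-level scheduler that walks a fixed sequence of reserved slots; a second-level scheduler would pick the actual thread within each reservation. This is a minimal illustration, not the paper's framework; all names and slot lengths are invented.

```python
# Hypothetical sketch of a round-based dispatch table: a round is a fixed
# sequence of time slots, each reserved for a (hard or soft) real-time task.
from dataclasses import dataclass

@dataclass
class Slot:
    task: str        # task reserved for this slot (illustrative name)
    length_ms: int   # slot length in milliseconds

def run_round(dispatch_table):
    """Top-level scheduler: walk the round slot by slot, recording when
    each reservation begins on the timeline."""
    timeline, t = [], 0
    for slot in dispatch_table:
        timeline.append((t, slot.task))
        t += slot.length_ms
    return timeline

round_table = [Slot("hard_A", 4), Slot("soft_B", 2), Slot("hard_A", 4), Slot("idle", 2)]
print(run_round(round_table))  # [(0, 'hard_A'), (4, 'soft_B'), (6, 'hard_A'), (10, 'idle')]
```

Repeating the round yields the periodic CPU reservations the excerpt describes; unreserved time can be left as idle slots for best-effort work.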

### Table 1. Analysis times, leak warnings statistics, and leak messages classification. Only cases in bold are reported as high priority errors.

"... In PAGE 8: ... the ssh daemon sshd-4.3p2-4. 6.1 Analysis times: The left portion of Table 1 presents the analysis times for each application. This includes the time for building the value-flow graph and performing the reachability analysis. ... In PAGE 8: ... the fixed bound; or when the cell is simply not local, i.e., its address is stored in a structure, array, or global variable. The right portion of Table 1 summarizes the results for each application. The leak messages section shows the total number of source allocations and the number of warnings (total, true, and false) issued by the tool in the standard mode of operation, where only the high-priority errors are reported. ... ..."
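The reachability analysis mentioned above can be illustrated with a plain graph search: an allocation from which no deallocation node is reachable in the value-flow graph is a leak candidate. This is only a sketch of the general idea; the graph, node names, and flagging policy here are invented, not the tool's actual representation.

```python
# Illustrative only: BFS reachability on a hypothetical value-flow graph.
# An allocation from which no 'free' node is reachable is flagged as a leak.
from collections import deque

def reaches_free(graph, alloc, frees):
    """Return True if any node in `frees` is reachable from `alloc`."""
    seen, queue = {alloc}, deque([alloc])
    while queue:
        n = queue.popleft()
        if n in frees:
            return True
        for m in graph.get(n, []):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return False

graph = {"malloc1": ["p"], "p": ["q"], "q": ["free1"], "malloc2": ["r"]}
print(reaches_free(graph, "malloc1", {"free1"}))  # True: freed on some path
print(reaches_free(graph, "malloc2", {"free1"}))  # False: leak candidate
```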

### Table 2: Simulation Performance

2003

"... In PAGE 5: ... As for the operation rate of the ARM, we calculated it using the number of iterations of Dhrystone. As for the operation rate of the GE/DCU, we calculated the simulation cycle counts in Table 2 which the GE and DCU required in order to process data of one frame size. In this way, we could find out the quantitative performance of thirteen different architectures for one of the candidates of hardware/software partitioning by changing the number of bus layers, the arbitration scheme, the memory location as a software strategy, or adding a memory for the GE and DCU. ... In PAGE 6: ... 4GHz, RDRAM 512MB, LINUX) for this measurement. We show the results in Table 2. In Table 2, Simulation Cycle Count is the number of cycles that the GE and DCU require to process data of one frame size, and Simulation Time is the CPU time which the simulation requires. ... ..."

Cited by 4

### Table 1 Timings of the Dynamic Analysis on Intel iPSC/860 for TIP3P BLOCK partitioning

1993

"... In PAGE 3: ... We used the configuration TIP3P (TIP4P is the 4-site water model of Jorgensen) which contains 216 water molecules (648 atoms) in a box. Table 1 shows the timings that were obtained for 4 to 32 processors. In this case the atoms were partitioned by simple blocking. ... ..."

Cited by 19
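The "simple blocking" partitioning mentioned in the excerpt amounts to splitting the atom indices into contiguous, near-equal blocks, one per processor. A minimal sketch of that idea (the function name and interface are invented, not the paper's code):

```python
# Sketch of simple block partitioning: split N atom indices into
# contiguous blocks, one per processor, differing in size by at most one.
def block_partition(n_atoms, n_procs):
    base, extra = divmod(n_atoms, n_procs)
    blocks, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)  # first `extra` blocks get one more
        blocks.append(range(start, start + size))
        start += size
    return blocks

parts = block_partition(648, 4)   # 648 atoms (216 water molecules) on 4 processors
print([len(b) for b in parts])    # [162, 162, 162, 162]
```

Blocking keeps spatially adjacent atoms (by index) on the same processor but, unlike dynamic schemes, does not balance the actual interaction work.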

### Table 2: The scheduler class priorities (columns: Priority number, Service class)

"... In PAGE 17: ... Determination of the exact time slot allocated for each request. For the first action, the algorithm combines priorities with a leaky bucket regulator. Each connection gets a priority based on its Service Class as indicated in Table 2. The greater the priority number, the higher the priority of a connection. ... ..."
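The leaky-bucket regulator mentioned above can be sketched as a per-connection bucket that drains at a fixed rate and admits a request only if it still fits; the class, parameter names, and rates here are invented for illustration, not the paper's scheduler.

```python
# Hedged sketch of a per-connection leaky-bucket regulator.
class LeakyBucket:
    def __init__(self, rate, depth):
        self.rate = rate      # leak rate (units per second)
        self.depth = depth    # bucket capacity
        self.level = 0.0      # current fill level
        self.last = 0.0       # time of last admission check

    def admit(self, now, size=1.0):
        # Drain since the last check, then try to add this request.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + size <= self.depth:
            self.level += size
            return True
        return False

b = LeakyBucket(rate=1.0, depth=2.0)
print([b.admit(t) for t in (0.0, 0.1, 0.2, 3.0)])  # [True, True, False, True]
```

Combined with the priorities of Table 2, such a regulator lets a high-priority connection be served first only while it stays within its declared traffic envelope.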

### Table 1 describes a simple 4 task system, together with the worst-case response times that are calculated by equation (2). Priorities are ordered from 1, with 4 being the lowest value, and blocking times have been set to zero for simplicity. Scheduling analysis is independent of time units and hence simple integer values are used (they can be interpreted as milliseconds).

2003

"... In PAGE 3: ... Table 1. Example Task Set. All tasks are released at time 0. ... ..."

Cited by 6
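The worst-case response-time calculation described in the caption can be sketched with the classic fixed-point iteration R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j (zero blocking, as assumed above). Equation (2) of the paper is not reproduced here, and the 4-task set below is hypothetical, not the paper's Table 1.

```python
# Standard response-time analysis for fixed-priority preemptive scheduling,
# with blocking times set to zero; integer time units (e.g., milliseconds).
from math import ceil

def response_times(tasks):
    """tasks: list of (C, T) pairs ordered from highest priority to lowest."""
    results = []
    for i, (c, _) in enumerate(tasks):
        r, prev = c, 0
        while r != prev:               # iterate to the fixed point
            prev = r
            r = c + sum(ceil(prev / tj) * cj for cj, tj in tasks[:i])
        results.append(r)
    return results

# Hypothetical 4-task set: (C, T), priority 1 (highest) to 4 (lowest).
print(response_times([(1, 5), (2, 10), (3, 20), (4, 50)]))  # [1, 3, 7, 14]
```

In practice the iteration is stopped once R_i exceeds the task's deadline, since the fixed point may not converge for unschedulable sets.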

### Table 1: Experimental results comparing the three voltage scaling approaches for tasks with fixed priorities (columns: Worst. Exec, Norm. Exec)

"... In PAGE 5: ...7 into intervals of length 0.1. To reduce statistical errors, the number of task sets with utilization values within each interval is no less than 20, and the average results are collected in Table 1. Furthermore, we apply our approach to two real-world applications: the CNC (Computerized Numerical Control) machine controller [10] and the INS (Inertial Navigation System) [1], and the results are shown in Table 2. ... In PAGE 5: ... Similar to that in [12], the execution time for a task is assumed to be normally distributed between its best- and worst-case execution times, and we assume that the best-case execution time (BCET) for each task is half of its WCET. In Table 1 and Table 2, three columns give the power consumption of algorithms VSLP, LPFS, and LPPS, respectively. To better present our results, in Table 1, we normalize one of them to 100 and scale the other two correspondingly. ... In PAGE 5: ... Column Worst. ... Exec represents the cases when the task execution times are normally distributed. From the statistical data shown in Table 1, one can readily conclude that our voltage scheduling strategy leads to more energy saving than the other two approaches, regardless of whether the job execution times are equal to or less than their WCETs. ... In PAGE 5: ...
The reason for this is that when the ready queue is not empty, LPFS always uses the full speed to execute the jobs; LPPS is more efficient and uses the lowest maximum constant speed; in our approach, an even lower speed is possible according to the voltage schedule obtained by Algorithm 2. Moreover, note that in Table 1, our algorithm saves more energy when the processor utilization is lower. This conforms with the following intuition: when the processor utilization is low, our algorithm tends to find a constant speed which can be applied to relatively long intervals while still meeting the deadline requirements of the jobs. ... ..."
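The "lowest maximum constant speed" idea can be illustrated in its simplest form: for implicit-deadline periodic tasks under EDF, slowing the processor to the total utilization still meets all deadlines. This is a hedged illustration only; it is not the paper's Algorithm 2, and the fixed-priority case studied above requires response-time analysis rather than a plain utilization bound.

```python
# Illustrative lowest constant speed for EDF with implicit deadlines:
# normalized speed = total utilization, clamped to full speed.
def lowest_constant_speed(tasks):
    """tasks: list of (wcet, period); returns normalized speed in (0, 1]."""
    return min(1.0, sum(c / t for c, t in tasks))

tasks = [(1, 5), (2, 10), (3, 20)]     # hypothetical task set
speed = lowest_constant_speed(tasks)
print(round(speed, 2))                 # 0.55
# Dynamic CMOS power scales roughly with speed^3, so even a modest
# speed reduction yields a large energy saving over a full-speed schedule.
```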

### Table 4 Analysis of schedules from 10 runs

2003

"... In PAGE 11: ... The resulting distances from the 10 runs are analyzed in Table 4, where the gap percentage is computed as (Distance / Lower bound - 1) for each instance. The quality of the schedules is comparable to the results given in [15] at the time of writing (June 28, 2004). ... ..."
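The gap formula in the caption is straightforward to compute; a one-liner with illustrative numbers (not values from the paper's Table 4):

```python
# Gap percentage from the caption: (Distance / LowerBound - 1), as a percent.
def gap_percent(distance, lower_bound):
    return (distance / lower_bound - 1) * 100

print(round(gap_percent(110.0, 100.0), 2))  # 10.0
```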