### Table 1: Multi-level process of emotions vs. Hybrid reactive/deliberative (From Murphy, Lisetti, et al., 2002).

"... In PAGE 3: ..., 2002) it furthermore closely matches hybrid reactive/deliberative architectures for robotic agents. Table 1 shows that relationship. Table 1: Multi-level process of emotions vs.... ..."

### Table 3: Time in seconds when searching for 128 solutions; varying machines.

"... The second example in Table 3 is searching for multiple solutions in a dense solution space. (That is, looking for 128 isomorphisms with solution space density 10⁻⁵.) In this case, there are ..."

Footnote 4: The KSR 1 has a multi-level ring architecture, but all our tests were done on a single ring.

Footnote 5: We consider differences in running time significant if the slower version takes at least 25% more time than the faster version.

1994

"... In PAGE 14: ... This shows that the differences we are noting are significant; there are multiple machines in each category. The first example in Table 3 is searching for multiple solutions in a sparse solution space. (Specifically, looking for 128 isomorphisms in a solution space with density 10⁻²¹.)... In PAGE 15: ... The two parallelizations are a toss-up on the remaining two machines. The first two lines in Table 3 show that in a sparse solution space, loop parallelism outperforms tree parallelism on the Iris and 8CE, while tree parallelism outperforms loop parallelism on the KSR1, TC2000, Symmetry, and Butterfly; the two parallelizations are comparable (within 6%) on the Balance. This result is somewhat surprising, given that the Balance, Symmetry, and Iris have such similar architectures (all are coherent, bus-based machines).... In PAGE 16: ... The figure shows that increasing processors for this problem continues to yield significant benefits beyond 8 processors; as a result, the KSR1 is able to exploit its larger number of processors to advantage and outperform the Iris. We can extend these observations to all of the machines in Table 3. On machines with large numbers of processors (P ≥ 32: the KSR1 and Butterfly), tree parallelism does much better than loop parallelism.... In PAGE 16: ... becomes less effective relative to tree parallelism. As discussed in Section 2.5, this occurs because only loop parallelism incurs any significant communication. The second set of data in Table 3 confirms our expectation that the tradeoff between loop and tree parallelism shifts in favor of loop parallelism when the solution space is very dense. When the solution space density is 10⁻⁵, the variance in search time among different subtrees is so small that tree parallelism is preferable only on machines with large numbers of processors (P ≥ 32).... ..."
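The loop-vs-tree distinction in this excerpt can be illustrated with a toy backtracking search. This is a hedged sketch, not code from the paper: the search problem (bit strings of length n), the helper names, and the worker counts are all our own illustration of where each decomposition places the parallelism.

```python
# Toy backtracking search over {0,1}^n, decomposed two ways.
from concurrent.futures import ThreadPoolExecutor

def search(prefix, n, is_solution):
    """Sequential backtracking over all extensions of `prefix`."""
    if len(prefix) == n:
        return [prefix] if is_solution(prefix) else []
    out = []
    for bit in (0, 1):
        out.extend(search(prefix + (bit,), n, is_solution))
    return out

def loop_parallel(n, is_solution, workers=4):
    """Loop parallelism: parallelize only the outermost branching
    loop; each worker gets one top-level branch."""
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(lambda b: search((b,), n, is_solution), (0, 1))
    return [s for part in parts for s in part]

def tree_parallel(n, is_solution, depth=2, workers=4):
    """Tree parallelism: expand the tree to `depth`, then hand whole
    subtrees to workers. When solutions are sparse, subtree costs
    vary widely, which is the variance the excerpt refers to."""
    prefixes = [()]
    for _ in range(depth):
        prefixes = [p + (b,) for p in prefixes for b in (0, 1)]
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(lambda p: search(p, n, is_solution), prefixes)
    return [s for part in parts for s in part]
```

Both decompositions enumerate the same solutions; they differ only in how the work is partitioned, which is where the machine-dependent tradeoffs in the excerpt arise.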

Cited by 6

### Table 2. Statistics of the multi-level preconditioner

"... In PAGE 5: ... In the table, the 5th and 6th columns indicate the total number of Newton iterations and Krylov iterations used in the Newton loop and by the GMRES solver, respectively, before simulation convergence is reached. The performance of the proposed multi-level preconditioner on the same set of designs is summarized in Table 2, where the total number of Krylov iterations corresponds to that used by the top-level FGMRES solver. Different from the previous experiments, we have adopted a multi-level structure where the largest sub-problem size on the next level is approximately one fourth of that on the current level.... ..."
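The coarsening rule quoted here (each level's largest sub-problem is about one quarter of the level above) implies a simple geometric level-size schedule. A minimal sketch, assuming a made-up top-level size and stopping threshold; nothing here is taken from the paper beyond the 1/4 ratio:

```python
def level_sizes(top_size, ratio=4, min_size=1000):
    """Largest sub-problem size at each level of a multi-level
    preconditioner hierarchy: coarsen by `ratio` per level until
    the sub-problem is small enough to solve directly."""
    sizes = [top_size]
    while sizes[-1] // ratio >= min_size:
        sizes.append(sizes[-1] // ratio)
    return sizes
```

With `top_size=64000` this yields levels of 64000, 16000, 4000, and 1000 unknowns, i.e. a hierarchy of depth four under the stated ratio.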

### Table 4. Multi-Level Threshold Results

"... In PAGE 7: ... Multi-Level Threshold Results. Table 5 (Multi-Level Threshold Values): threshold levels 4, 3, 2, and 1 correspond to values (in meters) of 30, 18, 8, and 2, respectively. Table 4 shows the number of PDUs generated and the average error in AOI and SR when our multi-level threshold dead reckoning algorithm is used. The threshold values used in different levels are listed in Table 5.... In PAGE 7: ... The threshold values used in different levels are listed in Table 5. It can be seen from Table 4 that there is a great reduction in the average error in SR, compared to the average error in AOI. In our algorithm, if entity A is in entity B's SR, a minimum threshold will be used in the dead reckoning so that B will receive A's update packets most frequently.... ..."
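The rule in this excerpt (entities inside the sensing range get the minimum threshold, so updates flow most frequently) can be sketched as a distance-to-threshold lookup. The level/value pairs come from the quoted Table 5; the distance cutoffs, radii, and function names are our own assumptions, since the paper's exact level-selection rule is not in the excerpt:

```python
# Level -> threshold in meters, as quoted from Table 5.
THRESHOLDS = {4: 30.0, 3: 18.0, 2: 8.0, 1: 2.0}

def threshold_for(distance, sr_radius=10.0, aoi_radius=100.0):
    """Pick a dead-reckoning error threshold from the distance between
    observed and observing entity: inside the sensing range (SR) use
    the minimum threshold, further out relax it level by level.
    The cutoffs here are illustrative, not the paper's."""
    if distance <= sr_radius:
        return THRESHOLDS[1]          # in SR: tightest threshold
    if distance <= aoi_radius / 3:
        return THRESHOLDS[2]
    if distance <= aoi_radius:
        return THRESHOLDS[3]
    return THRESHOLDS[4]              # far away: coarsest threshold

def needs_update(true_pos, predicted_pos, distance_to_observer):
    """Emit a PDU only when the dead-reckoning prediction error
    exceeds the distance-dependent threshold."""
    error = abs(true_pos - predicted_pos)
    return error > threshold_for(distance_to_observer)
```

A 3 m prediction error triggers an update for a nearby observer (threshold 2 m) but not for a distant one (threshold 30 m), which is why SR error drops far more than AOI error in the quoted results.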

### Table 5: MULTI-LEVEL MODELS

1998

"... In PAGE 28: ... exist with more time periods. To date, they are far from being solved. Computation on Multi-Level Instances. Results for the ML-G instances are presented in Table 5. The results in Table 5 show that, at least on these simple academic models, bc-prod typically dominates bc-opt and mp-opt. This is due to the automatic conversion to an echelon stock formulation in combination with the path inequalities.... ..."

Cited by 4

### Table 2. Statistics of multi-level preconditioner

"... In PAGE 5: ... In the table, the 5th and 6th columns indicate the total number of Krylov iterations used in the Newton loop and by the GMRES solver, respectively, before convergence is reached. The performance of the proposed multi-level preconditioner on the same set of designs is summarized in Table 2, where the total number of Krylov iterations corresponds to that used by the top-level FGMRES solver.... ..."

### Table 2. Multi-level approach to workplace studies.

2002

"... In PAGE 7: ... This addresses a widely recognized limitation of ethnographic approaches: while they can provide an understanding of current work practices, they are not intended to explore the consequences of socio-technical change. Table 2 shows a multi-level structure for workplace studies, with level 1 consisting of a survey of the existing organizational structures and schedules, levels 2 and 3 providing an analysis of the situated practices and interactions of those for whom the technology is intended, and level 4 offering a synthesis of the findings in terms of designs for new socio-technical systems. The four levels of the approach give an overview of workplace activity leading to more detailed investigation of particular problem areas, with each level illuminating the situated practices and also providing a set of issues to be addressed at the next level.... ..."

Cited by 9

### Table 1: Results from ISCAS Multi-level Examples

1992

Cited by 4

### Table 4: Power consumption for multi-level logic.

1994

"... In PAGE 21: ... The encoded machines were synthesized and mapped using the sis package and a gate library from industry. Table 4 summarizes the results. Columns 3 and 5 give the % power reduction of MWHD and LPSA over JEDI, respectively.... In PAGE 21: ... package and a gate library from industry. Table 4 summarizes the results. Columns 3 and 5 give the % power reduction of MWHD and LPSA over JEDI, respectively. Table 4 shows that, in general, the low-power state assignment produces better results than JEDI and the MWHD encoding. An average of 14.... ..."

Cited by 2