### Table 2.1: /inet Special File Components In general, TCP is the preferred mechanism to use. It is the simplest protocol to understand and to use. Use the others only if circumstances demand low overhead.

2004

### Table 4.1: Parameters used in 2-D wave equation problem The results of two runs are presented here. The solutions and grids of the 2-level case are shown in Figures 4.5 to 4.10 for every 60 coarse time steps. The top halves of these figures are the solutions plotted on a square domain [-1, 1] × [-1, 1]. The 2-level grids are plotted on the bottom halves of the figures. The dark areas are the fine grids. We see that there are two more wave fronts generated by the cape after the initial wave reflects from the boundary. There are usually two ways to measure the efficiency of CAG methods. One is the speedup, which is the ratio between the time using a uniform fine mesh and the time using CAG methods. The other is the overhead of CAG, which is the percentage ratio between the time spent on things other than integrating the solution and the total running time. However, we have to be very careful when using these two criteria, since they are highly problem dependent. For example, our CAG methods are very attractive, i.e., large speedup and low overhead, for problems which have very small refined regions with rectangular shape. For the purpose of testing efficiency, ...
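The two efficiency metrics described above reduce to simple ratios of timing measurements. A minimal sketch of both (the function names and the timing values are illustrative, not taken from the paper):

```python
def cag_speedup(t_uniform_fine, t_cag):
    """Speedup: time on a uniform fine mesh divided by time with CAG."""
    return t_uniform_fine / t_cag

def cag_overhead(t_total, t_integration):
    """Overhead: percentage of total running time spent on everything
    other than integrating the solution (regridding, interpolation, etc.)."""
    return 100.0 * (t_total - t_integration) / t_total

# Hypothetical timings, in seconds:
print(cag_speedup(120.0, 30.0))   # speedup of 4.0
print(cag_overhead(30.0, 24.0))   # 20.0% overhead
```

As the excerpt cautions, both numbers are highly problem dependent: a small rectangular refined region inflates the speedup and shrinks the overhead, so neither ratio is meaningful without stating the test problem.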

### Table 4: Uniform Incremental Timings (seconds) for some of the larger input sizes, the jrat analysis generates trace files of several gigabytes, whereas the adaptive analysis, as an online checker, simply delivers the boolean analysis verdict. (2) Adaptive analysis appears to have a similar rate of growth as the un-instrumented program. Clearly there is some initial startup overhead incurred by adaptive analysis, but the gap in performance does not widen as the program input size increases. This bodes well for considering adaptive analyses as candidates to be deployed in fielded systems, since their overhead appears negligible once the system is initialized. These results can only be considered preliminary evidence on the cost-effectiveness of adaptive online program analysis, but we believe they are a strong indicator that low-overhead dynamic analysis of stateful properties can be achieved.

2006

"... In PAGE 14: ... The performance of these applications is dominated by the time to perform the XML parsing, which causes the overhead of checking NanoXML APIs to appear larger than it would for applications that performed significant additional computation. Table 4 reports the time cost, at the 6th data point, of different analysis techniques for pairs of application and property. In addition to the pb and sbp properties described above, we check a precedence property for IXMLBuilder instances, called SetBuilder Before StartElement AddAttribute (sbbsa), and a constrained-response property relating IXMLReader and IXMLParser, called Parser Reader (pr).... ..."


### Table VIII: Principal characteristics of the presented models. References: [1] F. Afrati, C.H. Papadimitriou, G. Papageorgiou, Scheduling DAGs to minimize time and computation, Proc. of the Aegean Workshop on Computing (AWOC) (1988) 134-138. [2] R.J. Anderson, P. Beame, W. Ruzzo, Low overhead parallel schedules for task graphs, Proc. SPAA (1990) 66-75.

1997

Cited by 6

### Table 3: Comparison of LIFO, CLIP, uncorked LIFO (LIFOU), uncorked CLIP (CLIPU), two-stage LIFO (LIFO2) and two-stage CLIP (CLIP2) partitioning algorithms on IBM test cases. Nodes were assigned varying (actual) weights. Solutions are constrained to be within 2% of bisection (partitions must contain between 49% and 51% of total). Data expressed as (average cut / average CPU time), with CPU seconds measured on a 140 MHz Sun Ultra-1.
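The balance constraint in the caption (a 2% window around exact bisection, i.e. each side holds 49% to 51% of total weight) can be sketched as a simple feasibility check; the function name is illustrative, not from the paper:

```python
def balanced(part_weight, total_weight, tol=0.02):
    """Balance constraint from Table 3: with a tolerance window of `tol`
    (2% by default) centered on exact bisection, a partition is feasible
    when it holds between 49% and 51% of the total node weight."""
    frac = part_weight / total_weight
    return 0.5 - tol / 2 <= frac <= 0.5 + tol / 2

# e.g. on a 100-unit instance: a 49/51 split is feasible, a 48/52 split is not.
print(balanced(49, 100))  # True
print(balanced(48, 100))  # False
```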

2000

"... In PAGE 9: ... s and is also representative of important application contexts, e.g., VLSI placement. The results are presented in Table 3 and suggest that two-stage tolerance relaxation indeed improves solution costs without considerably increasing runtime. As can be seen in Table 3, the LIFO2 and CLIP2 algorithms provide substantial improvements over even the "uncorked" LIFO and CLIP partitioners.... In PAGE 10: ... We propose easy-to-implement, low-overhead techniques to counteract the latter problem, and demonstrate notable improvements in solution quality. We speculate that the CLIP corking effect was not diagnosed earlier because of the tendency to compare partitioners according to unit-area ... (footnote 6: The results of PROP-REXest may be found in Table 3 of [14].)... ..."

Cited by 3

### Table 1. Datasets used in our experiments.

"... In PAGE 9: ... As we will see in the following section, this data structure can be built with a relatively low overhead with respect to the cost of discovering just the frequent itemsets (which is what we need to discover standard association rules). (Section 5: Experimental results) Table 1 presents three of the datasets from the UCI Machine Learning Repository we used to test ATBAR, as well as some information on the number of frequent patterns obtained for different minimum support thresholds. Since we were interested in detecting anomalies, we removed binary attributes from the original ADULT and CENSUS databases.... ..."

### Table 2: Performance results with different test flows.

"... In PAGE 3: ... This reading slightly deviates (6%) from the expected sum of 71 mA when the processes run separately (lines 2-4).

Figure 3 (a)-(c): Running Different Test Flows

Table 1: Computing the Current Consumption of the Individual Processes

| Process mixture setting | Overall current consumption | Net process current |
|---|---|---|
| SoC base-line | 35 mA | -- |
| LED only | 57 mA | 22 mA |
| DRAM only | 68 mA | 33 mA |
| SRAM only | 51 mA | 16 mA |
| DRAM + SRAM + LED | 110 mA | 75 mA |

Table 2 summarizes the performance of our development system using the test flows of Fig. 3.... In PAGE 3: ... Fig. 3 and Table 2 illustrate the viability of our approach to correlate power consumption data to the actual processes at a relatively high resolution. Even the low overhead of 30 ms polling allows a detailed analysis for all the three different flows in Fig.... ..."
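The arithmetic behind the quoted Table 1 (subtract the SoC baseline to get each process's net current, then compare the combined run against the sum of the individual runs) can be sketched as follows, using only values from the quoted table:

```python
baseline = 35  # mA drawn by the SoC with no test process running
overall = {"LED": 57, "DRAM": 68, "SRAM": 51, "DRAM+SRAM+LED": 110}

# Net current attributable to each process = overall draw minus baseline.
net = {proc: mA - baseline for proc, mA in overall.items()}

expected_sum = net["LED"] + net["DRAM"] + net["SRAM"]   # 71 mA
measured = net["DRAM+SRAM+LED"]                          # 75 mA
deviation = 100 * (measured - expected_sum) / expected_sum
print(f"expected {expected_sum} mA, measured {measured} mA, "
      f"deviation {deviation:.1f}%")
```

The computed deviation is about 5.6%, matching the roughly 6% figure quoted in the excerpt.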

### Table 1. Symbolic Interpretation of Reachability Logic

1995

"... In PAGE 11: ...Table 1. Symbolic Interpretation of Reachability Logic To read the rules of Table 1 some notation needs to be explained. For D a constraint system and r a set of variables (to be reset), r(D) denotes the set of variable assignments {r(v) | v ∈ D}.... In PAGE 12: ...directed graphs (with clock and data variables as nodes), these operations as well as testing for inclusion between constraint systems may be effectively implemented in O(n²) and O(n³) using shortest path algorithms [11, 12, 6]. Now, by applying the proof rules of Table 1 in a goal-directed manner we obtain an algorithm (see also [13]) for deciding whether a given symbolic network configuration [l, D] satisfies a property ∃◇φ. To ensure termination (and efficiency), we maintain a (past-) list L of the symbolic network configurations encountered.... ..."
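The O(n³) shortest-path implementation the excerpt alludes to is the standard Floyd-Warshall tightening of a difference-bound representation of a constraint system, after which inclusion testing is an O(n²) entrywise comparison. A minimal sketch under that reading (not the paper's code; function names are illustrative):

```python
import math

def canonicalize(dbm):
    """Tighten a difference-bound matrix with Floyd-Warshall.

    dbm[i][j] is an upper bound on x_i - x_j (math.inf if unconstrained).
    After tightening, every bound is the strongest implied one; returns
    False if the constraints are inconsistent (negative diagonal entry).
    """
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = dbm[i][k] + dbm[k][j]
                if via_k < dbm[i][j]:
                    dbm[i][j] = via_k
    return all(dbm[i][i] >= 0 for i in range(n))

def included(d1, d2):
    """Inclusion between canonical constraint systems: d1 implies d2
    iff every bound in d1 is at least as tight (O(n^2) comparison)."""
    return all(b1 <= b2 for r1, r2 in zip(d1, d2) for b1, b2 in zip(r1, r2))
```

For example, from x1 - x0 ≤ 5 and x2 - x1 ≤ 3, canonicalization derives the implied bound x2 - x0 ≤ 8; the termination check in the excerpt's (past-) list L would then use `included` to discard configurations already covered.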

Cited by 117

### Table 1. Symbolic Interpretation of Reachability Logic

1995

"... In PAGE 9: ...Table 1. Symbolic Interpretation of Reachability Logic To read the rules of Table 1 some notation needs to be explained. For D a constraint system and r a set of variables (to be reset), r(D) denotes the set of variable assignments {r(v) | v ∈ D}.... In PAGE 10: ...directed graphs (with clock and data variables as nodes), these operations as well as testing for inclusion between constraint systems may be effectively implemented in O(n²) and O(n³) using shortest path algorithms [11, 12, 6]. Now, by applying the proof rules of Table 1 in a goal-directed manner we obtain an algorithm (see also [13]) for deciding whether a given symbolic network configuration [l, D] satisfies a property ∃◇φ. To ensure termination (and efficiency), we maintain a (past-) list L of the symbolic network configurations encountered.... ..."

Cited by 9