### Table 2: Characterization, by program version, of the typical solution effort for the tasks (the tasks used in PRIOREXP-1b and -1r and in the Cartwright replication are similar to that of PRIOREXP-1a). Investigated methods: the number of methods that must be analyzed and understood to solve the problem. Hierarchy changes: the number of times one must switch to a subclass or superclass during that method-understanding process if it is performed by dynamic tracing (i.e., following the execution call sequence). Solution methods: the number of methods that are copied from existing classes (verbatim or with changes) or modified in order to create the solution. No methods needed to be written from scratch.

1999

"... In PAGE 4: ... In contrast, our tasks require understanding of classes with different internal structure and imply less straightforward extensions. The size of the tasks was also larger; see Table 2 and Section 3.5 for details.... In PAGE 14: ... They depend not only on the program but also on the task and on the behavior of the programmer attempting the program understanding. But if the task is known and if one is willing to assume a certain reasonable default strategy is used to gain understanding, we can compute values for these properties, as shown in Table 2. We consider as input variables the number of investigated methods and the number of required hierarchy changes.... ..."

Cited by 7
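The two input variables above can be computed mechanically from a dynamic trace. The sketch below is a hypothetical illustration (the class names, trace format, and `count_effort` helper are assumptions, not the cited study's tooling): it counts distinct investigated methods and the number of hierarchy changes, where a change is a step in the call sequence that moves to a subclass or superclass of the current class.

```python
# Hypothetical sketch: derive "investigated methods" and "hierarchy changes"
# from a dynamic trace, following the default strategy described in the text
# (walk the execution call sequence; a hierarchy change occurs whenever the
# next method lives in a subclass or superclass of the current one).

def count_effort(trace, parent):
    """trace: list of (class_name, method_name) in execution order.
    parent: dict mapping each class to its direct superclass (or None)."""
    def ancestors(cls):
        seen = set()
        while cls is not None:
            seen.add(cls)
            cls = parent.get(cls)
        return seen

    investigated = set(trace)  # each distinct (class, method) pair counts once
    changes = 0
    for (c1, _), (c2, _) in zip(trace, trace[1:]):
        if c1 != c2 and (c1 in ancestors(c2) or c2 in ancestors(c1)):
            changes += 1  # stepped to a sub- or superclass
    return len(investigated), changes

# Tiny made-up hierarchy and trace:
parent = {"Derived": "Base", "Base": None}
trace = [("Base", "draw"), ("Derived", "paint"), ("Base", "draw")]
```

Running `count_effort(trace, parent)` on this toy trace yields 2 investigated methods and 2 hierarchy changes (Base to Derived and back).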

### Table 1: Comparison of performance of DLM-99-SAT for solving some hard SAT problems and the g-class problems that (Shang & Wah 1998) did not solve well before. (All our experiments were run on a Pentium Pro 200 computer with Linux. WalkSAT/GSAT experiments were run on an SGI Challenge with a MIPS processor, model unknown. "NA" in the last two columns stands for "not available.")

"... In PAGE 5: ... Due to space limitations, we will not present the details of these experiments here. Table 1 lists the experimental results on all the hard problems solved by DLM-99-SAT and the experimental results from WalkSAT and GSAT. It lists the CPU times of our current implementation on a Pentium Pro 200 MHz Linux computer and the number of (machine in- .... In PAGE 5: ... Note that hanoi4, hanoi4-simple and par32- problems are much harder than problems in the par16 and f classes because the number of flips is much larger. Table 1 also lists the results of applying DLM-99-SAT to solve the g-class problems that were not solved well by (Shang & Wah 1998). The number of flips used for solving these problems indicates that they are much easier than problems in the par16 class.... ..."

### Table 5. Hence, we can compute the similarity function for mst-lists using a function analogous to TrajSim.

"... In PAGE 20: ... Table 5: Distances of Topological Relations (Table 1 in [EAT92]) 4 m+7 m, since the maximum distance value of topological relations is 7.

|    | DJ | TC | EQ | IN | CB | CT | CV | OL |
|----|----|----|----|----|----|----|----|----|
| DJ | 0  | 1  | 6  | 4  | 5  | 4  | 5  | 4  |
| TC | 1  | 0  | 5  | 5  | 4  | 5  | 4  | 3  |
| EQ | 6  | 5  | 0  | 4  | 3  | 4  | 3  | 6  |
| IN | 4  | 5  | 4  | 0  | 1  | 6  | 7  | 4  |
| CB | 5  | 4  | 3  | 1  | 0  | 7  | 6  | 3  |
| CT | 4  | 5  | 4  | 6  | 7  | 0  | 1  | 4  |
| CV | 5  | 4  | 3  | 7  | 6  | 1  | 0  | 3  |
| OL | 4  | 3  | 6  | 4  | 3  | 4  | 3  | 0  |

An mst-list similarity function is defined as MstSim(A, B) = (maxDiff(A, B) − minDiff(A, B)) / maxDiff(A, B). We do not consider the time intervals during the match because they are less important than the other components.... ..."
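The topological-relation distances quoted above can be put to work in a small similarity lookup. This is a hedged sketch, not the cited paper's implementation: the excerpt does not define maxDiff and minDiff, so here they are assumed, for illustration only, to be the largest and smallest element-wise relation distances between two aligned mst-lists, and the formula is read as (maxDiff − minDiff) / maxDiff.

```python
# Sketch of an mst-list similarity based on the topological-relation
# distance matrix (Table 1 in [EAT92]). maxDiff/minDiff are ASSUMED to be
# element-wise max/min distances between aligned mst-lists; the excerpt
# does not spell out their definitions.

RELS = ["DJ", "TC", "EQ", "IN", "CB", "CT", "CV", "OL"]
DIST = [
    [0, 1, 6, 4, 5, 4, 5, 4],
    [1, 0, 5, 5, 4, 5, 4, 3],
    [6, 5, 0, 4, 3, 4, 3, 6],
    [4, 5, 4, 0, 1, 6, 7, 4],
    [5, 4, 3, 1, 0, 7, 6, 3],
    [4, 5, 4, 6, 7, 0, 1, 4],
    [5, 4, 3, 7, 6, 1, 0, 3],
    [4, 3, 6, 4, 3, 4, 3, 0],
]

def rel_dist(r1, r2):
    """Distance between two topological relations, via table lookup."""
    return DIST[RELS.index(r1)][RELS.index(r2)]

def mst_sim(a, b):
    """Similarity of two aligned mst-lists of topological relations."""
    diffs = [rel_dist(r1, r2) for r1, r2 in zip(a, b)]
    max_diff, min_diff = max(diffs), min(diffs)
    if max_diff == 0:
        return 1.0  # relation sequences are identical
    return (max_diff - min_diff) / max_diff
```

Note the matrix is symmetric with a zero diagonal, so `rel_dist` behaves as a metric-style lookup regardless of argument order.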

### Table 2: The Ecological Inference Problem at the National Level: July, 1932

2007

"... In PAGE 20: ... Within each of the six groups of precincts defined by unemployment and religion, the key substantive issue is filling in a cross-classification like that in Table 2. The rightmost column of the table gives the proportion of people in each occupational group, whereas the last row indicates the proportions of individuals who cast their ballots for each of our political party groupings.... In PAGE 20: ... The rightmost column of the table gives the proportion of people in each occupational group, whereas the last row indicates the proportions of individuals who cast their ballots for each of our political party groupings. [Table 2 about here.] While the margins of this table are observed, and the margins of analogous tables like it are observed for each precinct, the cells in the table (denoted by question marks) are not known and must be estimated.... In PAGE 22: ... [68] The same venerable methods also dominated other fields that made ecological inferences until King [69] showed how to combine both sources of information in the same model. [70] A variety of other methods have subsequently been proposed that also combine both sources of information, [71] but few apply to tables as large as in Table 2 and none are used much in applications. As such, we developed new techniques to study voting in Weimar Germany that extend this approach to combining deterministic and statistical information in the same model in a way that works for arbitrarily large tables.... ..."
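The structure described above (observed row and column margins, unknown interior cells) can be made concrete with a minimal baseline. The sketch below uses iterative proportional fitting, a standard deterministic technique and NOT the model developed in the cited work, purely to show what "filling in the cells consistently with the margins" means; the seed table and margins are made up.

```python
# Illustrative baseline only: iterative proportional fitting (IPF) rescales
# a seed table until its row and column sums match the observed margins.
# This is NOT the ecological-inference model of the cited paper, which
# combines deterministic and statistical information.

def ipf(seed, row_margins, col_margins, iters=100):
    """seed: initial guess for the cells; margins: observed row/col sums."""
    table = [row[:] for row in seed]
    for _ in range(iters):
        for i, rm in enumerate(row_margins):          # match row sums
            s = sum(table[i])
            if s > 0:
                table[i] = [x * rm / s for x in table[i]]
        for j, cm in enumerate(col_margins):          # match column sums
            s = sum(row[j] for row in table)
            if s > 0:
                for row in table:
                    row[j] *= cm / s
    return table

# Hypothetical 2x2 precinct table: occupation-group shares as row margins,
# party-vote shares as column margins, uniform seed for the unknown cells.
cells = ipf([[1.0, 1.0], [1.0, 1.0]], [0.6, 0.4], [0.5, 0.5])
```

From a uniform seed, IPF simply returns the independence table (each cell is the product of its margins); informative seeds or bounds are what the real ecological-inference methods contribute.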

### Table 2: Comparing Posterior Inference in the Random Effects Models

"... In PAGE 18: ... (This notion of a predictive distribution is discussed at length below.) We have found that inference for the key parameters, and predictive distributions, are virtually identical to results using the normal random effects prior (see for example Table 2, where the "Normal" and "NP Random Effects" estimates are nearly indistinguishable), although the estimated distribution for i in the more general model is nonnormal. We therefore consider an alternative, and potentially more important, generalization of the parametric random effects model.... In PAGE 19: ... Thus the nonparametric deconvolution problem is solved implicitly via latent variable augmentation and conditioning. Results for AR parameters are summarized in Table 2, in rows indicated by "NP Disturbances." Estimates are similar to the previous two models for the high school dropouts subsample, but quite different for the high school and college graduates subsamples.... In PAGE 26: ... Table 3 summarizes inference for some of the key parameters in the correlated random effects models. In general the posterior means for the autoregressive coefficients are lower than under the more restrictive independent random effects models; compare Table 2 with Table 3. There is still a marked contrast between the parametric and semiparametric estimates, especially for the higher education groups.... ..."

### Table 1. Area overhead costs for all combinations of wrapper sharing. ... the test schedule. Thus, we constrain the TAM optimization procedure such that the tests for cores sharing the same wrapper are scheduled serially in time. In this way, the total test time usage of the test wrapper is the sum of the test times of the analog cores that share the wrapper. A lower bound on the overall test time of all the analog cores can now be calculated as the maximum of the usage of every analog test wrapper, i.e., if three analog test wrappers are used to test all the analog cores, then a lower bound a72 a73 a75

2005

"... In PAGE 3: ....e., if three analog test wrappers are used to test all the analog cores, then a lower bound a72 a73 a75 on the test time is the maximum of the test time usage of the three analog test wrappers. Table 1 shows the a58 a13 values for all the combinations of sharing between the five analog cores considered in the experimental setup. The normalized lower bound for each case is also presented; these values have been normalized to the maximum lower bound.... In PAGE 3: ... Similarly, if a79 a13 a100 a79 a81 , the degree of sharing is chosen such that area overhead minimization has priority over test time minimization. One approach to solving problem a91 a101 a102 a103 a105 is to evaluate the overall cost a12 a81 for every possible configuration of shared analog wrappers (as presented in Table 1) for a given TAM width a78 and weights a79 a81 and a79 a13 . This exhaustive approach requires the TAM optimization procedure to be run for every combination of analog cores to obtain the a12 a81 a24 a78 a32 values.... In PAGE 6: ... (Note that while exhaustive enumeration is possible for these test cases, the high CPU time notwithstanding, it is unlikely to be feasible for larger SOCs.) The a58 a13 values used are the same as those presented in Table 1. And the elimination criterion a178 for the a58 a1 a2 a72 a4 a53 a72 a48 a10 a48 a5 a177 a7 approach is chosen to be zero.... ..."

Cited by 1

### Table 2: Linear model estimation

2006

"... In PAGE 9: ... Table 2 compares the size of the measurement tree obtained by the linear model with the actual number of nodes in T. The last column shows the ratio between the two.... ..."

Cited by 2