### Table 2: System Parameters of MQAM with space-time transmit diversity

"... In PAGE 3: ... Table 1: System Parameters of BPSK with space-time transmit diversity ... Table 2: System Parameters of MQAM with space-time transmit diversity.... In PAGE 40: ... BER performance of MQAM with or without STTD under a Rayleigh fading channel is presented in [46]. Table 2 shows the system parameters that are used here ... ..."
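The snippet above concerns BER of modulation with space-time transmit diversity over Rayleigh fading. As an illustrative sketch only (the paper's actual MQAM parameters and frame structure are not reproduced here; the function name, sample counts, and SNR points are my own assumptions), here is a minimal Monte-Carlo simulation of the 2×1 Alamouti transmit-diversity scheme for BPSK:

```python
import numpy as np

rng = np.random.default_rng(0)

def alamouti_bpsk_ber(snr_db, n_pairs=50_000):
    """Simulate BPSK with 2x1 Alamouti space-time transmit diversity
    over flat Rayleigh fading; return the bit error rate.
    (Illustrative sketch; parameters are assumptions, not the paper's.)"""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, size=(n_pairs, 2))
    s = 1 - 2 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
    # One complex Rayleigh coefficient per transmit antenna, per symbol pair
    h = (rng.standard_normal((n_pairs, 2)) +
         1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    noise_std = np.sqrt(1 / (2 * snr))    # total Tx power split over antennas
    n = noise_std * (rng.standard_normal((n_pairs, 2)) +
                     1j * rng.standard_normal((n_pairs, 2)))
    # Alamouti: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*)
    r1 = (h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1]) / np.sqrt(2) + n[:, 0]
    r2 = (-h[:, 0] * np.conj(s[:, 1]) +
          h[:, 1] * np.conj(s[:, 0])) / np.sqrt(2) + n[:, 1]
    # Linear combining recovers each symbol with diversity order 2
    s1_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2)
    s2_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)
    dec = np.stack([s1_hat.real, s2_hat.real], axis=1) < 0
    return float(np.mean(dec != bits))

for snr_db in (0, 5, 10):
    print(snr_db, alamouti_bpsk_ber(snr_db))
```

The steep drop of BER with SNR, compared to the single-antenna Rayleigh case, is the diversity effect the table's system parameters are set up to demonstrate.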

### Table 1: Optimum q-state 2 b/s/Hz 4-PSK space-time codes (ePmin from [6]).

2002

"... In PAGE 8: ... As previously discussed, q = 2^((Q-1)R) with Q = 2 or 3 if the number of states is 4 or 16, while [9] q = 2^((Q-2)R+1) with Q = 2 or 3 if the number of states is 8 or 32. Our results for this case are summarized in Table 1. Besides providing the coding gain, AP(L) and CP(L), for convenience we provide the "effective product distance", ePmin, from [6] in Table 1.... In PAGE 8: ... To help the reader understand the process taken to arrive at Table 1, we describe some of the details. In particular, we found that there are 1840 different 4-state codes (not counting permutations of the columns of G) which satisfied the sufficient conditions for maximum diversity gain while simultaneously providing the highest coding gain, 2, of any codes satisfying the sufficient conditions for maximum diversity gain.... In PAGE 8: ... The 4-state case was atypical in this respect, it being the only case where we could not find optimum codes, with the largest possible diversity and coding gain, which satisfied the sufficient conditions for maximum diversity. From these 288 codes, we selected one of the 24 codes with largest AP(2), CP(2) and AP(3) to put in Table 1. By examining the slope of the FER performance... [Footnote 4: In the count of 1840 given, several of the codes counted could be considered to be equivalent.]... In PAGE 9: ... All of the 96 codes provided the same AP(3) of 5.76, and 24 of them gave the best CP(3) of 6.24. Thus we selected the first one we encountered with CP(3) = 6.24 to put in Table 1.
We were also able to show that any codes satisfying the necessary conditions for maximum diversity could not provide a coding gain of √32 without yielding AP(3) < 5.76.... In PAGE 9: ... For the 32-state case we found that the maximum coding gain is 6, and one such code with best CP(3) = 6.33 and best CP(4) = 8.72 is put in Table 1. Further, this code satisfies the sufficient conditions for maximum diversity gain.... In PAGE 9: ... We note that a few of our calculations concerning the cases in Table 1 differ from those in [1] and [4]. The coding gain for the 8-state case from [1] is √12 based on our calculations, as opposed to √20 as stated in [1].... In PAGE 11: ... We conjecture it achieves near optimum (if not optimum) coding gain. [Section 5: Probability of Frame Error Performance.] Figure 1 shows the frame error rate of the space-time codes listed in Table 1 for cases with 2 transmit and 2 receive antennas. Figure 1 illustrates the gain achieved by increasing the constraint length of the codes.... In PAGE 11: ... Figures 2 through 4 compare the performance of our codes and those from [1, 4, 6]. Clearly, the codes in Table 1 are better than the codes from [1, 4, 6] when judged in terms of frame error rate. In all our simulations, each frame consists of 130 transmissions from each transmit antenna (l = 130).... In PAGE 14: ... We discuss the significance of augmenting the coding gain with AP(L) and CP(L), using the cases in Figure 2 as an example. Consider the 4-state code G^T = [2 0; 1 2; 2 2; 2 1] (9) given in Table 1. Recall this code provides a coding gain of √8.... ..."
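The diversity and coding gains discussed in the snippet come from the standard rank/determinant design criteria for space-time codes: diversity gain is the minimum rank of a codeword-difference matrix B, and coding gain is governed by the product of the nonzero eigenvalues of B·B^H (the product distance). A small sketch of that computation, checked on the four Alamouti/BPSK codewords as a toy example (not one of the paper's trellis codes, and the function name is my own):

```python
import itertools
import numpy as np

def diversity_and_coding_gain(codewords):
    """Rank/determinant criterion: diversity gain is the minimum rank of a
    codeword-difference matrix B; coding gain is the minimum, over codeword
    pairs, of the product of the nonzero eigenvalues of B B^H, raised to
    the power 1/n_tx."""
    min_rank, min_prod = None, None
    for c1, c2 in itertools.combinations(codewords, 2):
        B = np.asarray(c1, dtype=complex) - np.asarray(c2, dtype=complex)
        r = np.linalg.matrix_rank(B)
        min_rank = r if min_rank is None else min(min_rank, r)
        eig = np.linalg.eigvalsh(B @ B.conj().T)   # Hermitian: real eigenvalues
        prod = float(np.prod(eig[eig > 1e-9]))     # product distance of this pair
        min_prod = prod if min_prod is None else min(min_prod, prod)
    n_tx = np.asarray(codewords[0]).shape[0]
    return min_rank, min_prod ** (1.0 / n_tx)

# Toy check: the four Alamouti codewords for BPSK symbols
# (rows = transmit antennas, columns = time slots).
codewords = [np.array([[s1, -s2], [s2, s1]]) for s1 in (1, -1) for s2 in (1, -1)]
print(diversity_and_coding_gain(codewords))
```

Exhaustive searches like the 1840-code count in the text amount to running this check over all generator matrices and keeping those with maximal rank and largest gain.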

### Table 5: Cost: Space, time, and iterations (averages).

"... In PAGE 9: ... Therefore we have designed our algorithm to be on the conservative side: more matches are found than truly exist. Table 5 gives the total space (in MegaBytes) and time (in seconds) cost of matching. The space cost mainly arises due to the dynamic history used, while the time represents the effort it takes to match all of the execution functions.... In PAGE 9: ... We also studied how quickly our iterative matching algorithm stabilizes. In Table 5 the average depth of the dDDGs across all functions is given (dDDG Depth). The average number of iterations (Num.... ..."
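The snippet describes an iterative matching algorithm that is run until it stabilizes, with the iteration count reported in the table. The paper's actual dDDG matcher is not reproduced here; as a heavily simplified sketch in the same spirit (the graph encoding and signature scheme below are my own assumptions), nodes of two dependence graphs can be re-matched each round on a signature built from predecessor labels, until no match changes:

```python
def iterative_match(graph_a, graph_b, max_iter=50):
    """Fixpoint matching sketch: graphs are dicts node -> (label, [preds]).
    Each round rebuilds node signatures from predecessor signatures and
    re-matches; stops when the match set stabilizes. Conservative: when
    several candidates share a signature, one is matched anyway."""
    sig_a = {n: (lab,) for n, (lab, _) in graph_a.items()}
    sig_b = {n: (lab,) for n, (lab, _) in graph_b.items()}
    match, iterations = {}, 0
    for iterations in range(1, max_iter + 1):
        by_sig = {}
        for n, s in sig_b.items():
            by_sig.setdefault(s, []).append(n)
        new_match = {n: by_sig[s][0] for n, s in sig_a.items() if s in by_sig}
        # refine signatures with predecessor signatures for the next round
        sig_a = {n: (graph_a[n][0],) + tuple(sorted(sig_a[p] for p in graph_a[n][1]))
                 for n in graph_a}
        sig_b = {n: (graph_b[n][0],) + tuple(sorted(sig_b[p] for p in graph_b[n][1]))
                 for n in graph_b}
        if new_match == match:
            break
        match = new_match
    return match, iterations

g = {'a': ('load', []), 'b': ('add', ['a']), 'c': ('store', ['b'])}
print(iterative_match(g, g))
```

The iteration count returned is the analogue of the "Num. iterations" column the snippet mentions; shallow dDDGs stabilize in a handful of rounds.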

### Table 2. Space and time measurements for implementations of 7 day crawl dataset.

2001

"... In PAGE 12: ... The Link Database for it contains 351,546,665 URLs and 6,078,085,908 links. A recent paper on Mercator [NW01] suggests that taking the first N days of a crawl is a good way to limit the amount of data to consider. Table 2 presents the results for the first 7 days of this crawl. This... In PAGE 12: ... Table 2 presents the results. Each row presents data for a different implementation of the Link Database.... In PAGE 12: ... (Link3 always uses Huffman codes to encode deletes.) The first two data columns of Table 2 contain the sizes of the databases, reported as the total number of bits used by the link data, including the starts array and any Huffman tables, divided by the total number of links. The third data column contains an approximation of the maximum database size (in millions of Web pages) that each technique can support on a machine with 16 GB of RAM.... In PAGE 13: ... Table 2 makes very clear the space-time tradeoff we face: each step from Link1 to Link2 to Link3 approximately doubles the number of pages we can handle on our 16 GB machine, but each step also costs us in access time. The timing results tell an interesting story.... In PAGE 13: ... The relative performance gap between Link2 and Link3 also closes as we add more overhead, but not nearly as much. Table 2 also illustrates why we do not use a Huffman code in practice. It saved 3-11% of space, but cost up to a factor of 2.5 in access time. We chose the faster option. Further, Table 2 also shows that using 3 URL partitions saves space, primarily because the starts array can be compressed. Table 3 contains measurements for the full 58 day crawl for Link2 and Link3.... In PAGE 13: ... (The Link1 implementation cannot support 6 billion links.) Although the details of the numbers in Table 3 differ from Table 2, the overall and relative trends remain the same.
The timing measurements are over the inlink database.... ..."
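The "bits per link" metric in the snippet comes from compressing sorted destination lists. The paper's Link2/Link3 formats and their Huffman variants are more sophisticated than this; as a hedged sketch of the general idea (the varint scheme and function names here are my own), each list can be stored as a first ID plus successive gaps in variable-length integers:

```python
import math

def varint_bits(n):
    """Bits to store n in a 7-bit-payload varint (LEB128-style); a stand-in
    for the paper's actual codes, chosen only for illustration."""
    return 8 * max(1, math.ceil(n.bit_length() / 7))

def encode_adjacency(dest_lists):
    """Gap-compress each page's sorted destination list: store the first ID
    absolutely, then successive gaps, each as a varint. Returns total bits
    and the bits-per-link figure analogous to Table 2's first columns."""
    total_bits, total_links = 0, 0
    for dests in dest_lists:
        prev = 0
        for d in sorted(dests):
            total_bits += varint_bits(d - prev)
            prev = d
        total_links += len(dests)
    return total_bits, total_bits / max(1, total_links)

print(encode_adjacency([[10, 12, 15], [1000]]))
```

Because consecutive destination IDs in a crawl tend to be close (links are often intra-host), the gaps are small and the bits-per-link average drops well below the raw 32 or 64 bits per ID, which is exactly the space axis of the space-time tradeoff the snippet describes.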

### Table 1: System Parameters of BPSK with space-time transmit diversity

"... In PAGE 3: ...5 Numerical Results ... Table 1: System Parameters of BPSK with space-time transmit diversity.... ..."

### Table 1. Space-time characteristics of possible legislative drafting systems

2004

### Table 2. Configuration and performance of the various configurations of the discrete space/time (Eulerian) Calanus model that we investigated. The final row of the table shows the equivalent figures for the Lagrangian implementation.

in Ecology 2001

"... In PAGE 9: ... In addition to the direct effect of reducing the number of cells to be processed, the longer mixing update interval needed to keep quadrat size and diffusion length in proportion produces further speed gains, which are augmented still further by the possibility of reducing development resolution without loss of accuracy. The configurations we investigated are shown in Table 2, together with the run-time for Fig. 4.... ..."
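The trade-off the snippet describes, where coarser quadrats permit a longer mixing update interval, follows from explicit diffusion stability: the stable timestep scales with the square of the cell size. A minimal sketch of a 1-D Eulerian mixing update illustrating this (the grid, the mixing fraction k, and the scaling rule are my own assumptions, not the Calanus model's actual configuration):

```python
import numpy as np

def mixing_step(conc, k):
    """One explicit diffusion (mixing) update on a 1-D Eulerian grid:
    each cell exchanges a fraction k of its contents with each neighbour.
    Edge padding gives no-flux boundaries; mass is conserved. Stable
    for k <= 0.5."""
    padded = np.pad(conc, 1, mode='edge')
    return conc + k * (padded[:-2] - 2 * conc + padded[2:])

def run(conc, quadrat_scale, n_steps_base, k=0.25):
    """Hypothetical illustration of the table's speed gains: doubling the
    quadrat (cell) size quadruples the stable timestep (dt ~ dx^2), so a
    coarser grid needs fewer mixing updates for the same diffusion length."""
    steps = max(1, n_steps_base // (quadrat_scale ** 2))
    for _ in range(steps):
        conc = mixing_step(conc, k)
    return conc

print(mixing_step(np.array([0.0, 1.0, 0.0]), 0.25))
```

Coarsening thus cuts cost twice over, through fewer cells and fewer updates, which is why the configurations in the table differ so strongly in run-time.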