### TABLE I MODIFIED ALAMOUTI SPACE-TIME CODE

2005

Cited by 4

### Table 2. Space and time measurements for implementations of 7 day crawl dataset.

2001

"... In PAGE 12: ...ages. The Link Database for it contains 351,546,665 URLs and 6,078,085,908 links. A recent paper on Mercator [NW01] suggests that taking the first N days of a crawl is a good way to limit the amount of data to consider. Table 2 presents the results for the first 7 days of this crawl. This... In PAGE 12: ... Space and time measurements for implementations of 7 day crawl dataset. Table 2 presents the results. Each row presents data for a different implementation of the Link Database.... In PAGE 12: ... (Link3 always uses Huffman codes to encode deletes.) The first two data columns of Table 2 contain the sizes of the databases, reported as the total number of bits used by the link data, including the starts array and any Huffman tables, divided by the total number of links. The third data column contains an approximation of the maximum database size (in millions of Web pages) that each technique can support on a machine with 16 GB of RAM.... In PAGE 13: ... Table 2 makes very clear the space-time tradeoff we face: each step from Link1 to Link2 to Link3 approximately doubles the number of pages we can handle on our 16 GB machine, but each step also costs us in access time. The timing results tell an interesting story.... In PAGE 13: ... The relative performance gap between Link2 and Link3 also closes as we add more overhead, but not nearly as much. Table 2 also illustrates why we do not use a Huffman code in practice. It saved 3-11% of space, but cost up to a factor of 2.... In PAGE 13: ...ut cost up to a factor of 2.5 in access time. We chose the faster option. Further, Table 2 also shows that using 3 URL partitions saves space, primarily because the starts array can be compressed. Table 3 contains measurements for the full 58 day crawl for Link2 and Link3.... In PAGE 13: ... (The Link1 implementation cannot support 6 billion links.) Although the details of the numbers in Table 3 differ from Table 2, the overall and relative trends remain the same.
The timing measurements are over the inlink database.... ..."
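The snippet above measures link-database size as total bits used (link data plus the starts array and any Huffman tables) divided by the total number of links. As an illustrative sketch only — not the paper's actual Link1/Link2/Link3 formats — the same metric can be computed for a toy adjacency-list compressor that gap-encodes each page's sorted outlinks with a variable-byte code:

```python
def vbyte_encode(n: int) -> bytes:
    """Variable-byte code: 7 data bits per byte; the high bit marks the last byte."""
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)   # low 7 bits, continuation byte
        n >>= 7
    out.append(n | 0x80)       # final byte with stop bit set
    return bytes(out)

def encode_outlinks(dests: list[int]) -> bytes:
    """Sort destination IDs and store successive gaps; small gaps take few bytes."""
    encoded = bytearray()
    prev = 0
    for d in sorted(dests):
        encoded += vbyte_encode(d - prev)
        prev = d
    return bytes(encoded)

# Toy graph (hypothetical data): page ID -> destination page IDs.
graph = {0: [3, 7, 8, 120], 1: [2, 3, 4], 2: [500, 501]}

total_bits = sum(len(encode_outlinks(v)) * 8 for v in graph.values())
total_links = sum(len(v) for v in graph.values())
print(total_bits / total_links)  # bits per link, the space metric described above
```

A Huffman code over the gap distribution would typically squeeze out a few more percent — consistent with the 3-11% savings the snippet reports — but decoding bit-aligned codewords is slower than byte-aligned ones, which mirrors the access-time cost the authors chose to avoid.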

### Table 1: Mapping rule of the DM space-time code, Lt = 2, 4-ary

2000

Cited by 4

### TABLE II Optimum q-state 1 b/s/Hz BPSK 2-space-time codes.

2000

Cited by 21

### TABLE IV Optimum q-state 1 b/s/Hz BPSK 4-space-time codes.

2000

Cited by 21