Results 1 - 10 of 37,788
Table 1: Communication Overheads. In the remote case, applications run on client nodes, which request data from a server node over a network. In the local case, applications run directly on the server nodes. All overheads assume that data is mirrored. GPN denotes a General Purpose Network; SAN denotes a Storage Area Network. In the case of Petal, the collection of local storage links is treated as a degenerate storage area network. Numbers in the msgs columns denote the number of required messages; numbers in the data columns denote the number of required data transfers. Numbers in parentheses denote overheads with NVRAM support at the server nodes. In the remote case, Snappy Disk is more efficient for write requests than Petal. With NVRAM support, both Snappy Disk and Petal require the same total number of messages and data transfers. In the local case, Snappy Disk has a significant performance advantage in all cases because any server node can directly access any disk.
1998
"... In PAGE 7: ... We are mainly interested in the number of messages and the number of times that data must be transferred over the general purpose and storage area networks. Table1 summarizes the results. A remote read in a shared-disk architecture is initiated by sending a read request message from a client node to a server node.... ..."
Cited by 5
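The excerpt above counts per-request messages and data transfers on each network. As a rough illustration of that accounting, here is a minimal sketch assuming one request message on the GPN, one disk read over the SAN, and one reply carrying the data back; the counts are illustrative, not the table's measured figures:

```python
# Hypothetical tally of messages and data transfers for a remote read
# in a shared-disk architecture. Counts are illustrative assumptions,
# not the paper's reported values.

from dataclasses import dataclass

@dataclass
class Overhead:
    gpn_msgs: int = 0   # messages on the General Purpose Network
    gpn_data: int = 0   # data transfers on the GPN
    san_msgs: int = 0   # messages on the Storage Area Network
    san_data: int = 0   # data transfers on the SAN

def remote_read() -> Overhead:
    o = Overhead()
    o.gpn_msgs += 1     # client sends a read request to a server node
    o.san_msgs += 1     # server issues a read to a shared disk
    o.san_data += 1     # disk returns the block over the SAN
    o.gpn_msgs += 1     # server replies to the client ...
    o.gpn_data += 1     # ... carrying the requested data
    return o

print(remote_read())    # Overhead(gpn_msgs=2, gpn_data=1, san_msgs=1, san_data=1)
```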
Table 1: Communication Overheads. In the remote case, applications run on client nodes, which request data from a server node over a network. In the local case, applications run directly on the server nodes. All overheads assume that data is mirrored. GPN denotes a General Purpose Network; SAN denotes a Storage Area Network. In the case of Petal, the collection of local storage links is treated as a degenerate storage area network. Numbers in the msgs columns denote the number of required messages; numbers in the data columns denote the number of required data transfers. Numbers in parentheses denote overheads with NVRAM support at the server nodes. In the remote case, Snappy Disk is more efficient for write requests than Petal. With NVRAM support, both Snappy Disk and Petal require the same total number of messages and data transfers. In the local case, Snappy Disk has a significant performance advantage in all cases because any server node can directly access any disk.
1998
"... In PAGE 14: ... We are mainly inter- ested in the number of messages and the number of times that data must be trans- ferred over the general purpose and storage area networks. Table1 summarizes the results. A remote read in a shared-disk architecture is initiated by sending a read re- quest message from a client node to a server node.... ..."
Cited by 5
Table 2. Firewall Filters and Actions
2003
"... In PAGE 7: ... The disadvantage of encoding rules like this is that it becomes difficult to parallelize rule classification over a set of different rules. Table2 lists some example classification filters and actions common in firewalls. Intelligent Storage Storage Area Networks (SANs) consist of a collection of SCSI disks connected together through a fast interconnect, typically Fibre Channel.... ..."
Table 1: Simulation variables
"... In PAGE 38: ... However, the closeted nature of a private storage area network cluster does mitigate the ill effects of discounting the areas of security and authenti- cation somewhat. Table1 shows the dependent variables used and independent variables measured in our simulation experiments. The access latency is the amount of time between a client issuing a... ..."
Table 4: Area of various synaptic cells. Numbers in parentheses are relative areas of 6-bit digital synapses. Synaptic densities vary by more than 2 orders of magnitude among the cells reported. The last 5 entries were reported at the NIPS*91 VLSI Workshop.
1992
"... In PAGE 21: ...5 Memory Both synaptic storage density and energy are of critical importance in large scale networks. Table4 shows comparative area of various memory cells. MOSIS cells are RAM cells fabricated using standard logic processes.... ..."
Cited by 2
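The caption reports areas both absolutely and relative to a 6-bit digital synapse. A small sketch of that normalization follows; the cell names and areas below are placeholders, not the paper's reported values:

```python
# Normalize synaptic cell areas against a reference cell, as the
# caption's parenthesized relative areas do. All areas are placeholders.

cells_um2 = {
    "6-bit digital (reference)": 1000.0,
    "analog cell A": 50.0,
    "analog cell B": 7500.0,
}
ref = cells_um2["6-bit digital (reference)"]
for name, area in cells_um2.items():
    print(f"{name}: {area:.0f} um^2 ({area / ref:.3f}x reference)")
```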
Table 1: Characteristics of sample storage and network devices.
1996
"... In PAGE 4: ... The bottleneck for data delivery over a network environment is either in storage I/O or network I/O bandwidth. Characteristics of several storage and network devices in consideration are listed in Table1 for reference [4, 5, 17, 19, 20, 21, 22, 23, 24, 25, 26]. In order to achieve modularity, we propose to use low cost server, which has high performance commodity processor such as Intel Pentium or PowerPC, and 132 MB/sec PCI bus for system interconnection [27, 28].... In PAGE 13: ... The number of CD players (Nac) in a CD jukebox imposes a cap on the number of concurrent streams that can be delivered. For the CD jukebox in Table1 , where Nac = 4. It can sustain 6 Mbps aggregate storage throughput for four 1.... ..."
Cited by 3
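The excerpt's cap on concurrent streams follows from simple division: the tightest of the storage bandwidth, the network bandwidth, and the player count wins. A back-of-the-envelope sketch is below; the 1.5 Mbps per-stream rate is an assumption consistent with 6 Mbps aggregate across Nac = 4 players (the excerpt truncates the actual figure):

```python
# Back-of-the-envelope: concurrent streams are capped by the tightest
# of storage throughput, network throughput, and player count.
# The 1.5 Mbps per-stream rate and 100 Mbps network are assumptions.

def max_streams(storage_mbps: float, network_mbps: float,
                players: int, stream_mbps: float) -> int:
    by_storage = int(storage_mbps // stream_mbps)
    by_network = int(network_mbps // stream_mbps)
    return min(by_storage, by_network, players)

# CD jukebox from the excerpt: 6 Mbps aggregate storage, Nac = 4 players.
print(max_streams(storage_mbps=6.0, network_mbps=100.0,
                  players=4, stream_mbps=1.5))   # -> 4
```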
Table 4: Summary of the response time improvement. For each workload and each cache configuration, we compare the best aggressively-collaborative approach with the best hierarchy-aware approach and calculate the relative response time improvement in percentage. This table shows the minimum (MIN) and the maximum (MAX) improvement for each workload. In addition, it shows the average (AVG) improvement over all cache configurations for each workload.
2005
"... In PAGE 10: ... Other workloads exhibit similar trends. in Figure 6 is at most 14:5% and on average the gain is between 0:1% and 1:6% ( Table4 ). Compared to the performance gain on a faster network shown in Figure 4, the bene t of aggressively-col- laborative approaches becomes even smaller.... In PAGE 10: ... Unfortunately, our results indicate that such bene t is small even with future low-latency SANs that are 10 times faster. The differ- ence between the best response time of aggressively-collaborative caching and hierarchy-aware caching is at most 17:0% and on av- erage the difference is between 0:3% and 1:9% ( Table4 ), which is only slightly better than that with current storage area networks. 8.... ..."
Cited by 7
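The relative improvement the caption describes reduces to a simple ratio per cache configuration, then min/max/avg across configurations. A sketch with placeholder response times (not the paper's measurements):

```python
# Relative response-time improvement of the best aggressively-
# collaborative approach over the best hierarchy-aware approach,
# per the caption. The response times below are placeholders.

def rel_improvement(hierarchy_aware_ms: float, aggressive_ms: float) -> float:
    """Percentage by which the aggressive scheme beats the hierarchy-aware one."""
    return (hierarchy_aware_ms - aggressive_ms) / hierarchy_aware_ms * 100.0

# One (hierarchy-aware, aggressive) pair per cache configuration.
pairs = [(10.0, 9.9), (12.0, 11.8), (8.0, 7.9)]
gains = [rel_improvement(h, a) for h, a in pairs]
print(f"MIN {min(gains):.1f}%  MAX {max(gains):.1f}%  "
      f"AVG {sum(gains) / len(gains):.1f}%")
```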