Results 1 - 10 of 153,867

Table 3: System Resources Key Parameters. The model simulates the occurrence of failures using a Poisson distribution with a mean failure arrival rate and a mean failure duration. Failures do not overlap at one site; that is, a site will not be subject to more than one failure at a time. Failures are uniformly distributed among the server sites. Failures are fail-stop [25]: all activities at the failed site cease upon a failure. Key information needed for the recovery of the site is assumed to be retained in stable storage. Site failures are implemented by interrupting all activities occurring at the failed site. The data structures and the resources at the site are then manipulated to simulate the effect of a site failure. Site recoveries are implemented in a "coarse-grain" fashion. The model does not maintain a log for each site. Upon recovery, a site aborts all transactions that were not in a prepared state. All other transactions are recovered by waiting for a period of time (set at three times the message propagation time) to simulate the communication delay required for the recovered site to enquire about the status of the transaction, then committing or aborting the transaction as appropriate. Table 4 summarizes the key parameters associated with site failure generation:

in An Efficient Implementation of the Quorum Consensus Protocol
by M. L. Liu, D. Agrawal, A. El Abbadi
"... In PAGE 10: ...nd log disks (for system logs). Forced logs are executed on the log disk, which supports fast sequential access. Furthermore, each site maintains a cache for the data disk access. Table3 summarizes the key parameters... ..."

Table 2: Placement of Segments of Three Objects Across 6 Disks Using PRR Algorithm. them to store a stripe (since 7 is the greatest prime number less than or equal to 10). This is a 30% underutilization of available storage and also a 30% decrease in parallel I/O transfers. Second, if parity is added for fault tolerance [19], PRR would complicate the parity placement algorithm and in some cases cause the parity disk(s) to become a performance bottleneck. As an example, consider a PRR placement scheme with N = 6 and Np = 5. Using the PRR placement algorithm given by Equation 1, we show in Table 2 the placement of the segments of three multimedia objects in the disk array. Observe that if the parity for each stripe is placed in the unused disk in each stripe (a logical thing to do), the parity would not be evenly distributed over the disks. In fact, if we considered the three small objects shown in Table 2 as a single large object, all the parity would map to disks 0 and 1. A number of multimedia data layout schemes were defined in [5]. Two of these are the Dis-

in Data Layout for Interactive Video-on-Demand Storage Systems
by Cyril U. Orji, Kingsley C. Nwosu 1996
Cited by 1
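
The 30% figure quoted above is just the gap between the array size and the largest prime number of disks actually used per stripe. A small sketch of that arithmetic (this is not the PRR placement rule of Equation 1, which the excerpt does not reproduce):

```python
def largest_prime_at_most(n: int) -> int:
    """Return the largest prime <= n (n >= 2)."""
    def is_prime(k: int) -> bool:
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return next(p for p in range(n, 1, -1) if is_prime(p))

def prr_underutilization(num_disks: int) -> float:
    """Fraction of disks left out of each stripe when only the largest prime
    number of disks <= num_disks is used, as in the PRR discussion above."""
    usable = largest_prime_at_most(num_disks)
    return (num_disks - usable) / num_disks

# With a 10-disk array, 7 is the greatest prime <= 10, so 3 of the 10 disks
# (30%) go unused per stripe, matching the figure quoted in the excerpt.
print(prr_underutilization(10))   # 0.3
```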

Table 3: Case Study Results. Using standard techniques (see [6]) we can also compute values for the mean time to service loss (MTTSL) and the probability of service loss within a given amount of time (we use the term service loss instead of data loss; see Section 4.1). If the mean time to failure for a disk is 500,000 hours, we get that the MTTSL of a disk array with 36 data disks and no parity disks is about a year and a half. The probability of service loss within 1 month is: 5%, 3 months: 14%, 6 months: 27%, 9 months: 37%, 12 months: 46%, and 18 months: 61%. To achieve better reliability, parity disks are needed, which makes choosing G = 36 unattractive because of the large number of parity disks (36). High reliability without loss of performance can be achieved for a lower price by choosing G = 6, which requires only 6 parity disks. (Footnote 11: These are reasonable rates for NTSC quality MPEG compressed movies; see [9].)

in Pipelined Disk Arrays for Digital Movie Retrieval
by Ariel Cohen, Walter A. Burkhard, P. Venkat Rangan 1995
"... In PAGE 19: ... Mbits/s respectively11. Hence, the disk array can store around eighty 90 minute movies. The bu er size was constrained to be at most 8 MB per stream. Table3 shows the case study results. The + and { signs that appear after OAC and IAL signify whether the scheme was used (+) or not ({).... In PAGE 20: ...Table3 we see the substantial bene t obtained by using OAC and IAL. For example, for G = 6, we get an increase of 12% in the number of concurrent streams (from 102 to 114) and a decrease of 66% percent in the amount of bu ering required (from 6 MB/stream to 2 MB/stream).... ..."
Cited by 14
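
The reliability numbers quoted above can be reproduced with a standard exponential-failure estimate: with no parity, the array loses service as soon as any of the 36 data disks fails, so MTTSL is roughly MTTF/36, and the probability of loss within time t is 1 - exp(-t/MTTSL). A minimal sketch, assuming a 500,000-hour per-disk MTTF as in the excerpt:

```python
import math

MTTF_DISK_HOURS = 500_000.0   # per-disk mean time to failure, from the excerpt
NUM_DATA_DISKS = 36
HOURS_PER_MONTH = 730.0       # rough average month length

# With no redundancy, service is lost when any one disk fails, so the mean
# time to service loss is MTTF / N (about a year and a half here).
mttsl_hours = MTTF_DISK_HOURS / NUM_DATA_DISKS
print(f"MTTSL: {mttsl_hours:.0f} hours (~{mttsl_hours / (24 * 365):.1f} years)")

# Probability of service loss within t, assuming exponentially distributed
# time to loss: P(loss by t) = 1 - exp(-t / MTTSL).
for months in (1, 3, 6, 9, 12, 18):
    t = months * HOURS_PER_MONTH
    p = 1.0 - math.exp(-t / mttsl_hours)
    print(f"{months:2d} months: {p:5.1%}")
```

Running this reproduces the quoted sequence (about 5%, 14%, 27%, 37%, 46%, and 61%) to within rounding.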

Table 3 summarizes these definitions. RAID 5 or SID without failure:

in Segmented Information Dispersal (SID) Data Layouts for Digital Video Servers
by Ariel Cohen, Walter A. Burkhard 2001
"... In PAGE 11: ...709040 u2 Seek time coe cient 0.090960 v2 Seek time coe cient 9 dc Disk capacity (GB) 1385 b Boundary between the square root and linear portions of the seek time Table 2: Disk Model Parameters D Number of disks in a parity group (for RAID 3) rc Consumption rate per stream (KB/s) ct Length of a reading cycle (sec) Video slice size (KB) S(m) Maximum total seek latency when reading m slices (sec) T(m; ) Maximum time (sec) required to read m slices of size q SID dispersal factor Table3 : Video Server Model Parameters Figure 10 shows how the bu ering requirement per stream varies with the total number of concurrent streams for a disk array with 12 disks and a video consumption rate of 4 Mbits/sec. The gure presents the performance of three data organizations: RAID 3, RAID 5, and SID.... ..."
Cited by 2

Table 5. Failure Characteristics for RAID Level-5 Disk Arrays.

in RAID: High-performance, reliable secondary storage
by Peter M. Chen, Edward K. Lee, Garth A. Gibson, Randy H. Katz, David A. Patterson 1994
Cited by 272

Table 6. Failure Characteristics for a P + Q disk array.

in RAID: High-performance, reliable secondary storage
by Peter M. Chen, Edward K. Lee, Garth A. Gibson, Randy H. Katz, David A. Patterson 1994
Cited by 272

Table 2 shows the distribution of the data in the non-uniformly distributed files among 3 disks.

in CMD: A Multidimensional Declustering Method for Parallel Database Systems
by Jianzhong Li, Jaideep Srivastava, Doron Rotem 1992
"... In PAGE 22: ... The distribution of the data in uniformly distributed les among 3 disks. Number of Data Distribution on Disks Records Number of Blocks Number of Blocks Number of Blocks in Files on Disk 1 on Disk 2 on Disk 3 1000 21 27 19 10000 156 160 158 15000 246 237 241 20000 298 313 306 Table2 . The distribution of the data in non-uniformly distributed les among 3 disks.... In PAGE 22: ... Table 1 shows that the data in the uniformly distributed les is evenly distributed among disks. Table2 shows that the data in the non-uniformly distributed les is nearly equally distributed among disks even without using any rebalancing algorithm. Performance of range query processing.... ..."
Cited by 38
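
As a rough check of the balance claim in the quoted context, one can compute how far each disk's block count deviates from a perfectly even three-way split. The sketch below uses only the block counts quoted above; the CMD mapping itself is not reproduced here, and the extraction leaves it ambiguous which of the paper's two tables these rows belong to.

```python
# Per-disk block counts as quoted in the search-result context above,
# keyed by the number of records in the files.
distribution = {
    1000:  (21, 27, 19),
    10000: (156, 160, 158),
    15000: (246, 237, 241),
    20000: (298, 313, 306),
}

for records, blocks in distribution.items():
    ideal = sum(blocks) / len(blocks)                  # perfectly even share
    worst = max(abs(b - ideal) / ideal for b in blocks)
    print(f"{records:>6} records: per-disk blocks {blocks}, "
          f"max deviation from an even split {worst:.1%}")
```

The deviation shrinks from about 21% for the smallest files to under 3% for the larger ones, consistent with the "nearly equally distributed" remark.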

Table 3: Video Server Model Parameters. Figure 10 shows how the buffering requirement per stream varies with the total number of concurrent streams for a disk array with 12 disks and a video consumption rate of 4 Mbits/sec. The figure presents the performance of three data organizations: RAID 3, RAID 5, and SID. The redundancy ratio for all three organizations is 1/4 (i.e. the RAID 3 and RAID 5 layouts consist of three parity groups of size four, and the SID layout has a dispersal factor of 3). Slice-sized striping is used in RAID 5 and SID; striping within RAID 3 uses a size equal to the slice size divided by D. Accordingly, for SID and RAID 5, each point is for a multiple of 12 streams; for RAID 3, each point is for a multiple of 3 streams. The poor performance of the RAID 3 organization (both with and without failure) and the RAID 5 organization under failure is noted, and we return to this comparison in the next section. In both Figures 10 and 11, the perceived discontinuities for RAID 3 arise because the points are so close together; with the wider spacing (more streams per ensemble) the "discontinuities" are less pronounced.

in Segmented Information Dispersal (SID) Data Layouts for Digital Video Servers
by Ariel Cohen, Walter A. Burkhard 2001
"... In PAGE 10: ... There are three new terms within these expressions: tr denotes the worst case disk rotational latency in milliseconds, rt denotes the minimum transfer rate in kilobytes per second, and nally wmin denotes the minimal track size in in kilobytes (KB). Table3 summaries these de nitions. RAID 5 or SID without failure: T(m; ) = S(m) + m tr 1000 + rt + wmin tmin 1000 RAID 5 with failure: T(m; ) = S(2m) + 2m tr 1000 + rt + wmin tmin 1000 SID with failure: T(m; ) = S(2m) + m tr 1000 + rt + wmin tmin 1000 + + m tr 1000 + q rt + q wmin tmin 1000 RAID 3 with or without failure: T(m; =D) = S(m) + m tr 1000 + D rt + D wmin tmin 1000 The S expression denotes the worst case seek latency, the term tr=1000 denotes the worst case rotational latency, the term =rt denotes the worst case transfer time, and nally d =wmine tmin=1000 denotes the time to do track-to-track moves within a slice.... ..."
Cited by 2
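
The redundancy ratio and the per-point stream multiples mentioned above follow from simple layout arithmetic. A tiny sketch of that bookkeeping, assuming one parity disk per RAID parity group and using only the numbers quoted in the excerpt; the SID line is an assumption about how its quoted dispersal factor relates to the same ratio.

```python
NUM_DISKS = 12        # disk array size from the excerpt
GROUP_SIZE = 4        # RAID 3 / RAID 5 parity group size (3 data + 1 parity)
SID_DISPERSAL = 3     # SID dispersal factor quoted in the excerpt

groups = NUM_DISKS // GROUP_SIZE              # 3 parity groups
redundancy_ratio = groups / NUM_DISKS         # one parity disk per group -> 3/12 = 1/4
print(f"parity groups: {groups}, redundancy ratio: {redundancy_ratio}")

# The quoted SID dispersal factor of 3 gives the same 1/4 ratio if one
# redundant segment accompanies every 3 data segments (an assumption here).
print(f"SID redundancy ratio (assumed): {1 / (SID_DISPERSAL + 1)}")

# Points on the curves in Figure 10: RAID 5 and SID admit streams in multiples
# of the full array width, RAID 3 in multiples of the number of groups.
print(f"RAID 5 / SID stream multiple: {NUM_DISKS}")
print(f"RAID 3 stream multiple: {groups}")
```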

Table 7: Performance Comparison between RAID-II and RAID-I. This table compares the performance of RAID-II to that of RAID-I. Because RAID-II has a special purpose parity engine, disk array write performance is comparable to disk array read performance. All writes in this test are full-stripe writes [Lee91b]. For RAID-II reads, data is read from the disk array into XBUS memory then sent over the HIPPI network back to XBUS memory. For RAID-I reads, data is read from the disk array into Sun4 memory, then copied again into Sun4 memory. This extra copy equalized the number of memory accesses per data word. For RAID-II writes, data starts in XBUS memory, is sent over HIPPI back into XBUS memory, parity is computed, and the data and parity are written to the disk subsystem. For RAID-I writes, data starts in Sun4 memory, gets copied to another location in Sun4 memory, then is written to disk. Meanwhile, parity is computed on the Sun4. RAID-I uses a 32 KB striping unit with 8 disks; RAID-II uses a 64 KB striping unit with 24 disks.

in RAID: High-Performance, Reliable Secondary Storage
by Peter M. Chen, Edward K. Lee, Garth A. Gibson, Randy H. Katz, David A. Patterson 1994
"... In PAGE 52: ...er interface. Figure 12 shows a block diagram for the controller. To explore how the XBUS card enhances disk array performance, Chen, et al. [Chen94] com- pare the performance of RAID-II to RAID-I ( Table7 ). RAID-I is basically RAID-II without the... ..."
Cited by 272
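
The write-performance remark in the excerpt above rests on full-stripe writes: when every data block of a stripe is rewritten, parity is simply the XOR of the new blocks, with no read-modify-write of old data. The sketch below shows plain XOR parity as an illustration; it is not the XBUS parity engine or the paper's code.

```python
from functools import reduce

def full_stripe_parity(data_blocks):
    """Parity for a full-stripe write: byte-wise XOR of all data blocks.

    Because every data block of the stripe is being written, parity is
    computed from the new data alone; no old data or old parity has to be
    read back, which is why full-stripe writes can run at near read speed.
    """
    assert data_blocks and len({len(b) for b in data_blocks}) == 1
    return bytes(reduce(lambda acc, blk: [a ^ b for a, b in zip(acc, blk)],
                        data_blocks, bytes(len(data_blocks[0]))))

# Toy example: a 4-block stripe of 8-byte blocks plus its parity block.
stripe = [bytes([i] * 8) for i in (1, 2, 3, 4)]
parity = full_stripe_parity(stripe)
print(parity.hex())                       # 0404... since 1 ^ 2 ^ 3 ^ 4 = 4

# If any one block is lost, it is the XOR of the surviving blocks and parity.
recovered = full_stripe_parity(stripe[1:] + [parity])
assert recovered == stripe[0]
```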
