Results 1 - 4 of 4
Optimal Read-Once Parallel Disk Scheduling
in IOPADS, 1999
"... An optimal prefetching and I/O scheduling algorithm, L-OPT, for parallel I/O systems, using a read-once model of block references is presented. The algorithm uses knowledge of the next L references, L-block lookahead, to create a minimal-length I/O schedule. We show that the competitive ratio of L ..."
Abstract

Cited by 19 (8 self)
An optimal prefetching and I/O scheduling algorithm, L-OPT, for parallel I/O systems, using a read-once model of block references is presented. The algorithm uses knowledge of the next L references, L-block lookahead, to create a minimal-length I/O schedule. We show that the competitive ratio of L-OPT is Θ(√(MD/L)) for L ≤ M, which matches the lower bound of any prefetching algorithm with L-block lookahead. Tight bounds for the remaining ranges of lookahead are also presented. In addition we show that L-OPT is the optimal offline algorithm: when the lookahead consists of the entire reference string, it performs the absolute minimum possible number of I/Os. Finally, we show that L-OPT is comparable to the best online algorithm with the same amount of lookahead; the ratio of the length of its schedule to the length of the optimal schedule is always within a constant factor of the best possible. Supported in part by the National Science Foundation under grant CCR-9704562 an...
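To give some intuition for the model these results are stated in, the sketch below simulates serving a read-once reference string under the parallel disk model (D disks, a shared buffer of M blocks, at most one block fetched per disk per parallel I/O step) with a simple greedy L-block-lookahead prefetcher. This is a toy illustration under our own assumptions, not the L-OPT priority scheme from the paper; the function name and the greedy policy are hypothetical.

```python
def pdm_schedule(refs, disk_of, M, L):
    """Count parallel I/O steps needed to serve the read-once reference
    string `refs` under the parallel disk model: a buffer of M blocks,
    L-block lookahead, and `disk_of[b]` giving the disk holding block b.
    Greedy policy (illustrative only): each step fetches, in reference
    order, the uncached blocks among the next L references, taking at
    most one block per disk and never exceeding the buffer capacity.
    """
    buffer = set()   # blocks currently held in memory
    steps = 0
    i, n = 0, len(refs)
    while i < n:
        # consume any references already buffered (read-once: each
        # block is referenced exactly once, so free its slot after use)
        while i < n and refs[i] in buffer:
            buffer.discard(refs[i])
            i += 1
        if i >= n:
            break
        # one parallel I/O step: scan the lookahead window in reference
        # order, picking at most one uncached block per disk
        window = [b for b in refs[i:i + L] if b not in buffer]
        chosen, used_disks = [], set()
        for b in window:
            d = disk_of[b]
            if d not in used_disks and len(buffer) + len(chosen) < M:
                chosen.append(b)
                used_disks.add(d)
        buffer.update(chosen)
        steps += 1
    return steps
```

With enough lookahead the greedy scheduler fetches one block from each disk per step, while with L = 1 it degenerates to fetching a single block per step — illustrating why lookahead is essential for extracting I/O parallelism.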
ASP: Adaptive Online Parallel Disk Scheduling
in Proc. of DIMACS Workshop on External Memory Algorithms and Visualization, DIMACS, 1998
"... In this work we address the problems of prefetching and I/O scheduling for read-once reference strings in a parallel I/O system. We use the standard parallel disk model with D disks and a shared I/O buffer of size M. We design an online algorithm ASP (Adaptive Segmented Prefetching) with ML-block loo ..."
Abstract

Cited by 1 (1 self)
In this work we address the problems of prefetching and I/O scheduling for read-once reference strings in a parallel I/O system. We use the standard parallel disk model with D disks and a shared I/O buffer of size M. We design an online algorithm ASP (Adaptive Segmented Prefetching) with ML-block lookahead, L ≥ 1, and compare its performance to the best online algorithm with the same lookahead. We show that for any reference string the number of I/Os done by ASP is within a factor Θ(C), C = min{√ ...
Tight Competitive Ratios for Parallel Disk Prefetching and Caching
, 2008
"... We consider the natural extension of the well-known single-disk caching problem to the parallel disk I/O model (PDM) [17]. The main challenge is to achieve as much parallelism as possible and avoid I/O bottlenecks. We are given a fast memory (cache) of size M memory blocks along with a request seque ..."
Abstract
We consider the natural extension of the well-known single-disk caching problem to the parallel disk I/O model (PDM) [17]. The main challenge is to achieve as much parallelism as possible and avoid I/O bottlenecks. We are given a fast memory (cache) of size M memory blocks along with a request sequence Σ = (b1, b2, ..., bn), where each block bi resides on one of D disks. In each parallel I/O step, at most one block from each disk can be fetched. The task is to serve Σ in the minimum number of parallel I/Os. Thus, each I/O is analogous to a page fault. The difference here is that during each page fault, up to D blocks can be brought into memory, as long as all of the new blocks entering the memory reside on different disks. The problem has a long history [18, 12, 13, 26]. Note that this problem is nontrivial even if all requests in Σ are unique. This restricted version is called read-once. Despite the progress in the offline version [13, 15] and read-once version [12], the general online problem still remained open. Here, we provide comprehensive results with a full general solution for the problem with asymptotically tight competitive ratios. To exploit parallelism, any parallel disk algorithm needs a certain amount of lookahead into future requests. To provide effective caching, an online algorithm must achieve an o(D) competitive ratio. We show a lower bound that states, for lookahead L ≤ M, any online algorithm must be Ω(D)-competitive. For lookahead L greater than M(1 + 1/ε), where ε is a constant, the tight upper bound of O(√(MD/L)) on the competitive ratio is achieved by our algorithm SKEW. The previous algorithm tLRU [26] was O((MD/L)^(2/3))-competitive and this was also shown to be tight [26] for an LRU-based strategy. We achieve the tight ratio using a fairly ...
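To get a feel for how the two competitive-ratio bounds quoted above compare, the snippet below evaluates O(√(MD/L)) (SKEW) against O((MD/L)^(2/3)) (tLRU) for sample parameter values, ignoring the hidden constants. The parameter values are arbitrary illustrations, not figures from the paper.

```python
import math

def skew_bound(M, D, L):
    # SKEW's competitive ratio scales as sqrt(M*D/L), constants ignored
    return math.sqrt(M * D / L)

def tlru_bound(M, D, L):
    # the earlier tLRU bound scales as (M*D/L)^(2/3), constants ignored
    return (M * D / L) ** (2 / 3)

# sample (hypothetical) parameters: 4096-block cache, 16 disks,
# lookahead swept upward to show both ratios shrinking toward 1
M, D = 4096, 16
for L in (8192, 16384, 32768, 65536):
    print(L, round(skew_bound(M, D, L), 2), round(tlru_bound(M, D, L), 2))
```

Since MD/L ≥ 1 in the interesting regime, the square-root bound is never larger than the 2/3-power bound, matching the paper's claim that SKEW improves on tLRU.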
Optimal Read-Once Parallel Disk Scheduling
, 1999
"... ... scheduling in parallel I/O systems using a read-once model of block reference. The algorithm uses knowledge of the next L block references, L-block lookahead, to schedule I/Os in an online manner. It uses a dynamic priority assignment scheme to decide when blocks should be prefetched, so as to minim ..."
Abstract
... scheduling in parallel I/O systems using a read-once model of block reference. The algorithm uses knowledge of the next L block references, L-block lookahead, to schedule I/Os in an online manner. It uses a dynamic priority assignment scheme to decide when blocks should be prefetched, so as to minimize the total number of I/Os. The parallel disk model of an I/O system is used to study the performance of L-OPT. We show that L-OPT is comparable to the best online algorithm with the same amount of lookahead; the ratio of the length of its schedule to the length of the optimal schedule is within a constant factor of the best possible. We show that the competitive ratio of L-OPT is Θ(√(MD/L)), which matches the lower bound on the competitive ratio of any prefetching algorithm with L-block lookahead. In addition we show that when the lookahead consists of the entire reference string, L-OPT performs the minimum possible number of I/Os; hence L-OPT is the optimal offline algorithm. Finally, using synthetic traces we empirically study the performance characteristics of L-OPT.