Informed Prefetching and Caching (1995)

by R. Hugo Patterson , Garth A. Gibson , Eka Ginting , Daniel Stodolsky , Jim Zelenka
Venue: Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles
Citations:402 - 10 self

BibTeX

@INPROCEEDINGS{Patterson95informedprefetching,
    author = {R. Hugo Patterson and Garth A. Gibson and Eka Ginting and Daniel Stodolsky and Jim Zelenka},
    title = {Informed Prefetching and Caching},
    booktitle = {Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles},
    year = {1995},
    pages = {79--95},
    publisher = {ACM Press}
}


Abstract

The underutilization of disk parallelism and file cache buffers by traditional file systems induces I/O stall time that degrades the performance of modern microprocessor-based systems. In this paper, we present aggressive mechanisms that tailor file system resource management to the needs of I/O-intensive applications. In particular, we show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism and to dynamically allocate file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. Our approach estimates the impact of alternative buffer allocations on application execution time and applies a cost-benefit analysis to allocate buffers where they will have the greatest impact. We implemented informed prefetching and caching in DEC's OSF/1 operating system and measured its performance on a 150 MHz Alpha equipped with 15 disks running a range of applications including text search, 3D scientific visualization, relational database queries, speech recognition, and computational chemistry. Informed prefetching reduces the execution time of the first four of these applications by 20% to 87%. Informed caching reduces the execution time of the fifth application by up to 30%.
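The cost-benefit allocation described in the abstract can be sketched with a toy model. All constants, function names, and the simplified stall formula below are illustrative assumptions for exposition, not the actual TIP implementation or its estimators:

```python
# Hypothetical sketch of cost-benefit buffer allocation: a buffer moves
# from the LRU cache to prefetching only when the estimated reduction in
# stall time exceeds the estimated cost of lost cache hits.
# Timing constants are assumed values, not measurements from the paper.

T_DISK = 15.0    # assumed average disk access time (ms)
T_HIT = 0.1      # assumed cost of servicing a cache hit (ms)
T_DRIVER = 0.5   # assumed CPU cost of initiating a disk I/O (ms)

def prefetch_benefit(depth):
    """Marginal benefit of one more prefetch buffer.

    Simplified model: with `depth` buffers prefetching in parallel,
    per-access stall is roughly T_DISK / depth, so the marginal
    benefit is the drop in stall from deepening by one buffer."""
    return T_DISK / depth - T_DISK / (depth + 1)

def cache_cost(hit_rate_delta):
    """Marginal cost of shrinking the LRU cache by one buffer.

    `hit_rate_delta` is the estimated fraction of accesses that stop
    hitting in the cache; each lost hit becomes a full disk access."""
    return hit_rate_delta * (T_DISK + T_DRIVER - T_HIT)

def should_reallocate(depth, hit_rate_delta):
    """Take a buffer from the cache for prefetching only when the
    estimated benefit exceeds the estimated cost."""
    return prefetch_benefit(depth) > cache_cost(hit_rate_delta)
```

Under this model a shallow prefetch depth with a cold cache favors prefetching (`should_reallocate(1, 0.01)` is true), while a deep prefetch pipeline competing with a hot cache does not (`should_reallocate(10, 0.5)` is false), matching the paper's intuition that marginal prefetching benefit diminishes with depth.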

Keyphrases

execution time    speech recognition    cost-benefit analysis    traditional file system    text search    modern microprocessor-based system    informed prefetching    stall time    aggressive mechanism    I/O-intensive application    computational chemistry    scientific visualization    file system resource management    unhinted access    application execution time    application-disclosed access pattern    file buffer    operating system    disk parallelism    relational database query    file cache buffer    buffer allocation
