CiteSeerX
Results 1 - 10 of 501

The HP AutoRAID hierarchical storage system

by John Wilkes, Richard Golding, Carl Staelin, Tim Sullivan - ACM Transactions on Computer Systems , 1995
"... Configuring redundant disk arrays is a black art. To configure an array properly, a system administrator must understand the details of both the array and the workload it will support. Incorrect understanding of either, or changes in the workload over time, can lead to poor performance. We present a ..."
Abstract - Cited by 263 (15 self)
"... that is extremely easy to use, is suitable for a wide variety of workloads, is largely insensitive to dynamic workload changes, and performs much better than disk arrays with comparable numbers of spindles and much larger amounts of front-end RAM cache. Because the implementation of the HP AutoRAID technology ..."
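The AutoRAID idea can be sketched as a two-level hierarchy: actively written blocks live in a small mirrored tier, and cold blocks are demoted to a space-efficient RAID 5 tier. A minimal toy model, assuming hypothetical class and parameter names (the capacities and the LRU demotion policy here are illustrative, not the paper's actual algorithm):

```python
from collections import OrderedDict

class TwoLevelStore:
    """Toy two-level storage hierarchy: recently written blocks live in
    a small mirrored tier; cold blocks are demoted to a space-efficient
    RAID 5 tier. Capacities and policy are illustrative only."""

    def __init__(self, mirrored_capacity=4):
        self.mirrored_capacity = mirrored_capacity
        self.mirrored = OrderedDict()   # block -> data, in LRU order
        self.raid5 = {}

    def write(self, block, data):
        # Writes go to the mirrored tier (fast, but 2x space overhead).
        self.raid5.pop(block, None)
        self.mirrored[block] = data
        self.mirrored.move_to_end(block)
        # Demote the least-recently-written block when the tier fills.
        while len(self.mirrored) > self.mirrored_capacity:
            cold, cold_data = self.mirrored.popitem(last=False)
            self.raid5[cold] = cold_data

    def read(self, block):
        if block in self.mirrored:
            return self.mirrored[block]
        return self.raid5[block]
```

Writing a fifth block to a four-block mirrored tier pushes the coldest block down to the RAID 5 tier, while reads are served from whichever tier holds the block.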

Locality-Aware Request Distribution in Cluster-based Network Servers

by Vivek Pai, Mohit Aron, Gaurav Banga, Michael Svendsen, Peter Druschel, Willy Zwaenepoel, Erich Nahum , 1998
"... We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution: the front-end uses the content requested, in addition to information about the load on the back-end nodes, to choose ..."
Abstract - Cited by 327 (21 self)
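The content-based distribution the abstract describes can be sketched as a front-end that pins each requested target to one back-end (for cache locality) and reassigns it only when that back-end is overloaded. A rough illustration, with hypothetical names and thresholds (this is not the paper's exact LARD algorithm):

```python
class LardDispatcher:
    """Toy locality-aware request distribution: route each target
    (e.g. a URL path) to the back-end already serving it, unless that
    back-end is overloaded and a less loaded node exists. The
    high_load threshold and node names are illustrative."""

    def __init__(self, backends, high_load=10):
        self.load = {b: 0 for b in backends}   # in-flight requests
        self.high_load = high_load
        self.assignment = {}                   # target -> back-end

    def dispatch(self, target):
        node = self.assignment.get(target)
        least = min(self.load, key=self.load.get)
        # Reassign if the target is unmapped, or its node is overloaded
        # while a strictly less loaded node is available.
        if node is None or (self.load[node] > self.high_load
                            and self.load[least] < self.load[node]):
            node = least
            self.assignment[target] = node
        self.load[node] += 1
        return node

    def complete(self, node):
        self.load[node] -= 1
```

Repeated requests for the same target keep hitting the same back-end (so its cache stays warm), while new targets go to the least-loaded node.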

The filter cache: An energy efficient memory structure

by Johnson Kin, Munish Gupta, William H. Mangione-Smith - In Proceedings of the 1997 International Symposium on Microarchitecture , 1997
"... Most modern microprocessors employ one or two levels of on-chip caches in order to improve performance. These caches are typically implemented with static RAM cells and often occupy a large portion of the chip area. Not surprisingly, these caches often consume a significant amount of power. In many ..."
Abstract - Cited by 222 (4 self)
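The filter-cache idea is a tiny, cheap cache probed before the larger L1: most accesses hit the small structure and never pay the L1's energy cost. A minimal simulation sketch, with made-up energy units and sizes (not figures from the paper):

```python
def simulate_filter_cache(addresses, filter_lines=16, line_bytes=16):
    """Toy direct-mapped filter cache in front of L1: hits in the tiny
    cache are cheap; misses fall through to L1 (modelled here as always
    hitting). Energy costs are hypothetical units for illustration."""
    FILTER_ENERGY, L1_ENERGY = 1, 10   # assumed per-access costs
    tags = [None] * filter_lines
    energy = hits = 0
    for addr in addresses:
        line = addr // line_bytes
        idx = line % filter_lines
        energy += FILTER_ENERGY        # the filter cache is probed first
        if tags[idx] == line:
            hits += 1                  # served by the filter cache
        else:
            tags[idx] = line           # refill the line from L1
            energy += L1_ENERGY
    return hits, energy
```

On a loop-like address stream with high spatial locality, nearly every access hits the filter cache, so total energy approaches one cheap probe per access rather than one expensive L1 access per access.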

UNIX Disk Access Patterns

by Chris Ruemmler, John Wilkes , 1993
"... Disk access patterns are becoming ever more important to understand as the gap between processor and disk performance increases. The study presented here is a detailed characterization of every lowlevel disk access generated by three quite different systems over a two month period. The contributions ..."
Abstract - Cited by 277 (20 self)
"... non-volatile memory per disk could reduce disk traffic by 10–18%, and 90% of metadata write traffic can be absorbed with as little as 0.2MB per disk of non-volatile RAM. Even 128KB of NVRAM cache in each disk can improve write performance by as much as a factor of three. FCFS scheduling ..."
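The write-absorption effect the snippet quantifies comes from overwrites coalescing in a small NVRAM buffer: a block rewritten while still resident costs no extra disk write. A toy model under assumed names and a FIFO flush policy (the paper's traces and policies are more elaborate):

```python
from collections import OrderedDict

def absorb_writes(writes, nvram_blocks=32):
    """Toy model of a small NVRAM write cache: overwrites of a block
    still resident in NVRAM are absorbed; a full buffer flushes its
    oldest block to disk. Returns (disk_writes, total_writes)."""
    nvram = OrderedDict()
    disk_writes = 0
    for block in writes:
        if block in nvram:
            nvram.move_to_end(block)   # overwrite absorbed in NVRAM
            continue
        if len(nvram) >= nvram_blocks:
            nvram.popitem(last=False)  # flush the oldest block to disk
            disk_writes += 1
        nvram[block] = True
    disk_writes += len(nvram)          # eventual flush of residents
    return disk_writes, len(writes)
```

A workload that rewrites the same two metadata blocks six times reaches disk only twice, illustrating how a small buffer can absorb most metadata write traffic.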

A Stream Processor Front-end

by Alex Ramírez, Josep Ll. Larriba-Pey, Mateo Valero , 2000
"... This work proposes a new fetch unit model, inspired by the trace processor [8]. Instead of fetching instruction traces, our fetch unit will fetch instruction streams. An instruction stream is a sequential run of instructions, defined by the starting address and the stream length. All branches inclu ..."
Abstract - Cited by 1 (1 self)
"... per cycle, instruction cache misses, branch prediction throughput and branch prediction accuracy. ... All instructions in a stream are consecutive in memory, and a stream contains no taken branches. This makes it very simple to obtain several ..."
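The stream definition above (a sequential run given by start address and length, ended by a taken branch) can be sketched by splitting a dynamic address trace wherever the address sequence jumps. A small illustration, assuming fixed 4-byte instructions (an assumption for this sketch, not stated in the snippet):

```python
def split_into_streams(trace):
    """Split a dynamic instruction-address trace into streams: maximal
    runs of consecutive addresses, ended by any taken branch (i.e. a
    jump in the address sequence). Each stream is reported as
    (start_address, length). Assumes fixed 4-byte instructions."""
    streams = []
    start, length = trace[0], 1
    for prev, addr in zip(trace, trace[1:]):
        if addr == prev + 4:             # sequential: stream continues
            length += 1
        else:                            # taken branch: stream ends
            streams.append((start, length))
            start, length = addr, 1
    streams.append((start, length))
    return streams
```

Because a stream is fully described by its start address and length, a fetch unit can read all of its instructions from consecutive cache locations in one go, which is what makes wide fetch simple here.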

A Scalable Front-End Architecture for Fast Instruction Delivery

by Glenn Reinman, Todd Austin, Brad Calder , 1999
"... In the pursuit of instruction-level parallelism, significant demands are placed on a processor's instruction delivery mechanism. Delivering the performance necessary to meet future processor execution targets requires that the performance of the instruction delivery mechanism scale with the execution core. Attaining these targets is a challenging task due to I-cache misses, branch mispredictions, and taken branches in the instruction stream. To further complicate matters, a VLSI interconnect scaling trend is materializing that further limits the performance of front-end designs in future ..."
Abstract - Cited by 74 (12 self)

Super-Scalar RAM-CPU Cache Compression

by Marcin Zukowski, Sándor Héman, Niels Nes, Peter A. Boncz - In Proceedings of the International Conference on Data Engineering (IEEE ICDE), 2006
"... CWI is a founding member of ERCIM, the European Research Consortium for Informatics and Mathematics. CWI's research has a theme-oriented structure and is grouped into four clusters. Listed below are the names of the clusters and in parentheses their acronyms. ..."
Abstract - Cited by 106 (18 self)

My cache or yours? Making storage more exclusive

by Theodore M. Wong, John Wilkes - In Proceedings of the 2002 USENIX Annual Technical Conference , 2002
"... Modern high-end disk arrays often have several gigabytes of cache RAM. Unfortunately, most array caches use management policies which duplicate the same data blocks at both the client and array levels of the cache hierarchy: they are inclusive. Thus, the aggregate cache behaves as if it was only as ..."
Abstract - Cited by 125 (0 self)
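The exclusivity idea in this abstract is that a block evicted from the client cache should be demoted into the array cache rather than duplicated there, so the two levels together behave like one large cache. A toy sketch in the spirit of the paper's demotion approach (class name, capacities, and the LRU policy are hypothetical):

```python
from collections import OrderedDict

class ExclusiveCachePair:
    """Toy client/array cache pair with a demotion-style policy: a
    block lives in at most one of the two caches. Blocks evicted from
    the client cache are demoted into the array cache instead of
    being discarded. Capacities are illustrative."""

    def __init__(self, client_cap=2, array_cap=2):
        self.client = OrderedDict()
        self.array = OrderedDict()
        self.client_cap, self.array_cap = client_cap, array_cap

    def read(self, block):
        if block in self.client:
            self.client.move_to_end(block)
            return "client hit"
        # On an array hit the block moves up and leaves the array
        # cache, keeping the two levels exclusive.
        source = "array hit" if self.array.pop(block, None) else "disk"
        self.client[block] = True
        if len(self.client) > self.client_cap:
            victim, _ = self.client.popitem(last=False)
            self.array[victim] = True    # demote instead of duplicate
            if len(self.array) > self.array_cap:
                self.array.popitem(last=False)
        return source
```

With two-block caches at each level, a working set of three blocks stays entirely in the hierarchy: a block evicted from the client is found again in the array cache, whereas an inclusive pair would cache only two distinct blocks in total.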

The Rio File Cache: Surviving Operating System Crashes

by Peter M. Chen, Wee Teck Ng, Gurushankar Rajamani, Christopher M. Aycock - In Proc. 7th Intl. Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS , 1996
"... One of the fundamental limits to high-performance, high-reliability file systems is memory’s vulnerability to system crashes. Because memory is viewed as unsafe, systems periodically write data back to disk. The extra disk traffic lowers performance, and the delay period before data is safe lowers reliability. The goal of the Rio (RAM I/O) file cache is to make ordinary main memory safe for persistent storage by enabling memory to survive operating system crashes. Reliable memory enables a system to achieve the best of both worlds: reliability equivalent to a write-through file cache ..."
Abstract - Cited by 132 (13 self)

The microarchitecture of the pentium 4 processor

by Dave Sager (Desktop Platforms Group, Intel Corp.) - Intel Technology Journal, 2001
"... This paper describes the Intel® NetBurst™ microarchitecture of Intel’s new flagship Pentium® 4 processor. This microarchitecture is the basis of a new family of processors from Intel starting with the Pentium 4 processor. The Pentium 4 processor provides a substantial performance gain for many key application areas where the end user can truly appreciate the difference. In this paper we describe the main features and functions of the NetBurst microarchitecture. We present the front-end of the machine, including its new form of instruction cache called the Execution Trace ..."
Abstract - Cited by 187 (0 self)

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University