Scalability of the RAMpage Memory Hierarchy, 2000
"... The RAMpage memory hierarchy is an alternative to the traditional division between cache and main memory: main memory is moved up a level and DRAM is used as a paging device. As the CPUDRAM speedgap grows, it is expected that the RAMpage approach should become more viable. Results in this paper show ..."
Abstract
-
Cited by 8 (8 self)
- Add to MetaCart
The RAMpage memory hierarchy is an alternative to the traditional division between cache and main memory: main memory is moved up a level and DRAM is used as a paging device. As the CPU-DRAM speed gap grows, the RAMpage approach is expected to become more viable. Results in this paper show that RAMpage scales better than a standard second-level cache, because the number of DRAM references is lower. Further, RAMpage allows the possibility of taking a context switch on a miss, which is shown to further improve scalability. The paper also suggests that memory wall work ought to include the TLB, which can represent a significant fraction of execution time. With context switches on misses, the speed improvement at an 8 GHz instruction issue rate is 62% over a standard 2-level cache hierarchy.

1 Introduction

The RAMpage memory hierarchy is an alternative to a conventional cache-based hierarchy, in which the lowest-level cache is managed as a paged memory and DRAM becomes a paging device. The lowest-level cache is in effect an SRAM main memory. Disk remains as a secondary paging device. Major hardware components re...
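To make the mechanism concrete, the following is a minimal sketch, not the authors' simulator, of how a RAMpage-style SRAM level might be modelled in Python: the SRAM is managed as a set of pages under LRU replacement, DRAM serves as the paging device, and a miss can either stall the processor or charge only a context-switch overhead while the DRAM transfer overlaps with another process. All names, capacities, and latency figures are illustrative assumptions, not values taken from the paper.

# Illustrative sketch of a RAMpage-style SRAM main memory (assumed parameters).
from collections import OrderedDict

SRAM_PAGES = 256            # pages resident in the SRAM "main memory" (assumed)
PAGE_SIZE = 4096            # bytes per page (assumed)
SRAM_HIT_CYCLES = 10        # cost of a hit in the SRAM level (assumed)
DRAM_FETCH_CYCLES = 400     # cost of paging a block in from DRAM (assumed)
CONTEXT_SWITCH_CYCLES = 50  # software cost of switching processes (assumed)

class RAMpageLevel:
    """SRAM main memory managed as a set of pages with LRU replacement."""

    def __init__(self, capacity_pages=SRAM_PAGES):
        self.capacity = capacity_pages
        self.resident = OrderedDict()   # page number -> True, in LRU order

    def access(self, address, can_switch=True):
        """Return (cycles charged to the faulting process, switched?)."""
        page = address // PAGE_SIZE
        if page in self.resident:
            self.resident.move_to_end(page)        # refresh LRU position
            return SRAM_HIT_CYCLES, False
        # Miss: the page must be fetched from the DRAM paging device.
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)      # evict least recently used page
        self.resident[page] = True
        if can_switch:
            # Take a context switch on the miss: this process is charged only
            # the switch overhead; the DRAM transfer overlaps with another
            # process's execution.
            return CONTEXT_SWITCH_CYCLES, True
        # Otherwise the processor stalls for the full DRAM fetch.
        return DRAM_FETCH_CYCLES, False

if __name__ == "__main__":
    mem = RAMpageLevel()
    total = 0
    for addr in (0x0000, 0x1000, 0x0000, 0x2000):
        cycles, _switched = mem.access(addr, can_switch=True)
        total += cycles
    print("cycles charged to this process:", total)

Under this model, the benefit of taking a context switch on a miss is simply that the faulting process is charged CONTEXT_SWITCH_CYCLES rather than the full DRAM_FETCH_CYCLES, which is the effect the paper reports as improving scalability; a standard second-level cache, by contrast, handles the miss entirely in hardware, with no opportunity for the operating system to schedule other work during the DRAM access.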