Results 1 - 10 of 141

Power Efficient Instruction Caches for Embedded Systems

by Dinesh C. Suresh, Walid A. Najjar, Jun Yang
"... Abstract. Instruction caches typically consume 27 % of the total power in modern high-end embedded systems. We propose a compiler-managed instruction store architecture (K-store) that places the computation intensive loops in a scratchpad like SRAM memory and allocates the remaining instructions to ..."
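
The allocation problem behind K-store (which loops fit into a fixed-size on-chip SRAM) is essentially a knapsack choice. A minimal greedy sketch of that choice is shown below; it is not the paper's compiler pass, and the loop sizes, fetch counts, and scratchpad capacity are invented.

/* Illustrative greedy allocation of hot loops to a fixed-size scratchpad.
 * Only a sketch of the general idea; the K-store paper's own compiler
 * pass is more involved. All figures below are invented. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char   *name;
    unsigned      size_bytes;   /* static code size of the loop           */
    unsigned long fetches;      /* dynamic instruction fetches (profiled)  */
} Loop;

/* Rank loops by fetches per byte: each scratchpad byte should "pay for
 * itself" with as many cheap on-chip fetches as possible. */
static int by_density(const void *a, const void *b)
{
    const Loop *x = a, *y = b;
    double dx = (double)x->fetches / x->size_bytes;
    double dy = (double)y->fetches / y->size_bytes;
    return (dx < dy) - (dx > dy);   /* descending order */
}

int main(void)
{
    Loop loops[] = {
        { "fir_filter",    512, 9000000UL },
        { "huffman_core",  768, 4000000UL },
        { "init_tables",  2048,   20000UL },
        { "crc_loop",      256, 6000000UL },
    };
    unsigned capacity = 1024;   /* invented scratchpad size in bytes */
    unsigned n = sizeof loops / sizeof loops[0];
    unsigned used = 0;

    qsort(loops, n, sizeof loops[0], by_density);

    for (unsigned i = 0; i < n; i++) {
        if (used + loops[i].size_bytes <= capacity) {
            used += loops[i].size_bytes;
            printf("scratchpad: %-14s (%u B)\n", loops[i].name, loops[i].size_bytes);
        } else {
            printf("cache/ROM:  %-14s (%u B)\n", loops[i].name, loops[i].size_bytes);
        }
    }
    printf("scratchpad used: %u / %u bytes\n", used, capacity);
    return 0;
}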

Multiple-Valued Caches for Power-Efficient Embedded Systems

by Emre Özer, Resit Sendag, David Gregg
"... In this paper, we propose three novel cache models using Multiple-Valued Logic (MVL) paradigm to reduce the cache data storage area and cache energy consumption for embedded systems. Multiple-valued caches have significant potential for compact and powerefficient cache array design. The cache models ..."
Cited by 2 (0 self)
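
The density argument behind multiple-valued caches is that a cell with more than two levels stores more than one bit, e.g. two bits per radix-4 cell. The sketch below only illustrates that packing arithmetic in software; it says nothing about the cell or sense-amplifier designs the paper proposes.

/* Illustration of the density argument behind multiple-valued storage:
 * a 4-level (radix-4) cell encodes two binary bits, so the same data
 * needs half as many cells. Purely arithmetic; no circuit modeling. */
#include <stdio.h>
#include <stdint.h>

/* Pack an 8-bit value into four radix-4 digits (2 bits per "cell"). */
static void pack_radix4(uint8_t byte, uint8_t cells[4])
{
    for (int i = 0; i < 4; i++)
        cells[i] = (byte >> (2 * i)) & 0x3;    /* each digit in 0..3 */
}

static uint8_t unpack_radix4(const uint8_t cells[4])
{
    uint8_t byte = 0;
    for (int i = 0; i < 4; i++)
        byte |= (uint8_t)(cells[i] << (2 * i));
    return byte;
}

int main(void)
{
    uint8_t cells[4];
    pack_radix4(0xB6, cells);
    printf("0xB6 -> radix-4 digits: %u %u %u %u\n",
           cells[3], cells[2], cells[1], cells[0]);
    printf("round trip: 0x%02X\n", unpack_radix4(cells));
    printf("binary cells needed: 8, radix-4 cells needed: 4\n");
    return 0;
}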

Energy-Efficient Design of Battery-Powered Embedded Systems

by Tajana Simunic, Luca Benini, Giovanni De Micheli, 1999
"... Energy-efficient design of battery-powered systems demands optimizations in both hardware and software. We present a modular approach for enhancing instruction level simulators with cycle-accurate simulation of energy dissipation in embedded systems. Our methodology has tightly coupled component mod ..."
Cited by 72 (7 self)
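
Coupling energy models to an instruction-level simulator amounts to charging each component an energy cost for what it did in every simulated cycle. A toy version of that accumulation, with invented per-event energies rather than the paper's calibrated component models, looks like this:

/* Toy cycle-by-cycle energy accumulation, in the spirit of attaching
 * energy models to an instruction-level simulator. The per-event
 * energies and the event trace are invented for illustration. */
#include <stdio.h>

enum event { CORE_ACTIVE, CORE_IDLE, ICACHE_HIT, ICACHE_MISS };

/* Energy charged per event, in picojoules (made-up numbers). */
static const double energy_pj[] = {
    [CORE_ACTIVE] = 120.0,
    [CORE_IDLE]   =  15.0,
    [ICACHE_HIT]  =  40.0,
    [ICACHE_MISS] = 900.0,   /* includes the off-chip access */
};

int main(void)
{
    /* One event per component per cycle; a real simulator would emit
     * these from its functional model. */
    enum event trace[][2] = {
        { CORE_ACTIVE, ICACHE_HIT  },
        { CORE_ACTIVE, ICACHE_HIT  },
        { CORE_ACTIVE, ICACHE_MISS },
        { CORE_IDLE,   ICACHE_HIT  },
    };
    int cycles = sizeof trace / sizeof trace[0];
    double total_pj = 0.0;

    for (int c = 0; c < cycles; c++)
        for (int k = 0; k < 2; k++)
            total_pj += energy_pj[trace[c][k]];

    printf("%d cycles, total energy %.1f pJ, avg %.1f pJ/cycle\n",
           cycles, total_pj, total_pj / cycles);
    return 0;
}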

Block Cache for Embedded Systems

by Dominic Hillenbr
"... Abstract — On chip memories provide fast and energy efficient storage for code and data in comparison to caches or external memories. We present techniques and algorithms that allow for an automated use of on chip memory for code blocks of instruc-tions which are dynamically scheduled at runtime to ..."

An efficient direct mapped instruction cache for application-specific embedded systems

by Chuanjun Zhang - In Proceedings of the Third IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis, 2005
"... Caches may consume half of a microprocessor’s total power and cache misses incur accessing off-chip memory, which is both time consuming and energy costly. Therefore, minimizing cache power consumption and reducing cache misses are important to reduce total energy consumption of embedded systems. Di ..."
Abstract - Cited by 4 (1 self) - Add to MetaCart
. Direct mapped caches consume much less power than that of same sized set associative caches but with a poor hit rate on average. Through experiments, we observe that memory space of direct mapped instruction caches is not used efficiently in most embedded applications. We design an efficient cache – a
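
For context on why direct-mapped caches are cheaper per access: an address selects exactly one line, so a lookup reads one tag and one data way instead of all ways of a set; the price is conflict misses when hot addresses share an index. A minimal lookup model (invented geometry: 32-byte lines, 256 sets) illustrates both points.

/* Minimal direct-mapped cache lookup model. One tag compare and one
 * data way per access is what keeps per-access energy low compared
 * with an N-way set-associative cache; the downside is conflict
 * misses between addresses that share an index. Geometry is invented. */
#include <stdio.h>
#include <stdint.h>

#define LINE_BYTES 32u      /* 5 offset bits */
#define NUM_SETS   256u     /* 8 index bits  */

typedef struct { uint32_t tag; int valid; } Line;

static Line cache[NUM_SETS];   /* zero-initialized: all lines invalid */

/* Returns 1 on hit, 0 on miss (and refills the line on a miss). */
static int cache_access(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_SETS;    /* picks one line */
    uint32_t tag   = addr / (LINE_BYTES * NUM_SETS);
    Line *l = &cache[index];

    if (l->valid && l->tag == tag)
        return 1;              /* single tag compare, single data way */
    l->valid = 1;              /* miss: refill, evicting whatever was here */
    l->tag   = tag;
    return 0;
}

int main(void)
{
    /* 0x1000 and 0x3000 map to the same set, so the last access misses
     * again even though 0x1000 was fetched earlier: a conflict miss. */
    uint32_t addrs[] = { 0x1000, 0x1004, 0x3000, 0x1000 };
    int n = sizeof addrs / sizeof addrs[0];
    int hits = 0;

    for (int i = 0; i < n; i++)
        hits += cache_access(addrs[i]);
    printf("%d accesses, %d hits\n", n, hits);
    return 0;
}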

FAWN: A Fast Array of Wimpy Nodes

by David G. Andersen, Jason Franklin, Amar Phanishayee, Lawrence Tan, Vijay Vasudevan, 2008
"... This paper introduces the FAWN—Fast Array of Wimpy Nodes—cluster architecture for providing fast, scalable, and power-efficient key-value storage. A FAWN links together a large number of tiny nodes built using embedded processors and small amounts (2–16GB) of flash memory into an ensemble capable of ..."
Cited by 212 (26 self)
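
A cluster of this kind has to decide which wimpy node owns each key; FAWN-KV does so with a consistent-hashing ring. The sketch below shows that mapping in its simplest form, with an arbitrary hash and invented node names, and without FAWN's virtual node IDs or replication.

/* Simplest form of consistent hashing: nodes and keys hash onto the
 * same ring, and a key is owned by the first node clockwise from it.
 * The hash function, node names, and ring size are stand-ins; FAWN-KV
 * additionally uses virtual node IDs and replication. */
#include <stdio.h>
#include <stdint.h>

#define RING (1u << 16)

/* FNV-1a, truncated to the ring size (an illustrative hash choice). */
static uint32_t ring_hash(const char *s)
{
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
    return h % RING;
}

int main(void)
{
    const char *nodes[] = { "wimpy-0", "wimpy-1", "wimpy-2", "wimpy-3" };
    const char *keys[]  = { "user:42", "img:7", "log:2024", "cart:9" };
    int nn = sizeof nodes / sizeof nodes[0];
    int nk = sizeof keys / sizeof keys[0];

    for (int k = 0; k < nk; k++) {
        uint32_t kh = ring_hash(keys[k]);
        uint32_t best = RING;   /* smallest clockwise distance so far */
        int owner = 0;
        for (int n = 0; n < nn; n++) {
            uint32_t dist = (ring_hash(nodes[n]) + RING - kh) % RING;
            if (dist < best) { best = dist; owner = n; }
        }
        printf("%-9s (ring %5u) -> %s\n", keys[k], (unsigned)kh, nodes[owner]);
    }
    return 0;
}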

Code Compression for Low Power Embedded System Design

by Haris Lekatsas, Jörg Henkel, Wayne Wolf, 2000
"... We propose instruction code compression as a very efficient method for reducing power on an embedded system. Our approach is the first one to measure and optimize the power consumption of a complete SOC (System--On--a--Chip) comprising a CPU, instruction cache, data cache, main memory, data buses an ..."
Cited by 64 (9 self)
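
Code compression of this kind trades a small decompression step near the memory interface for fewer bits fetched. A bare-bones dictionary coder over 32-bit instruction words conveys the idea; it is not the encoding used in the paper, and the dictionary and program words below are just examples.

/* Bare-bones dictionary compression of a 32-bit instruction stream:
 * words found in a small dictionary are emitted as a 1-byte index,
 * everything else as an escape plus the raw word. Not the paper's
 * scheme; the instruction words below are examples only. */
#include <stdio.h>
#include <stdint.h>

#define DICT_SIZE 4

static const uint32_t dict[DICT_SIZE] = {
    0xE1A00000,   /* e.g. a very frequent NOP-like encoding */
    0xE12FFF1E,
    0xE92D4010,
    0xE8BD8010,
};

static int dict_index(uint32_t insn)
{
    for (int i = 0; i < DICT_SIZE; i++)
        if (dict[i] == insn) return i;
    return -1;
}

int main(void)
{
    uint32_t program[] = {
        0xE92D4010, 0xE1A00000, 0xE3A00005, 0xE1A00000, 0xE8BD8010,
    };
    int n = sizeof program / sizeof program[0];
    unsigned compressed_bytes = 0;

    for (int i = 0; i < n; i++) {
        int idx = dict_index(program[i]);
        if (idx >= 0) {
            printf("idx %d\n", idx);                       /* 1 byte moved      */
            compressed_bytes += 1;
        } else {
            printf("raw 0x%08X\n", (unsigned)program[i]);  /* escape + 4 bytes  */
            compressed_bytes += 5;
        }
    }
    printf("original %d bytes, compressed %u bytes\n", n * 4, compressed_bytes);
    return 0;
}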

Improving Power Efficiency with Compiler-Assisted Cache Replacement

by Hongbo Yang, R. Govindarajan, Guang R. Gao, Ziang Hu
"... Abstract — Data cache in embedded systems plays the roles of both speeding up program execution and reducing power consumption. However, a hardware-only cache management scheme usually results in unsatisfactory cache utilization. In several new architectures, cache management details are accessible ..."
Cited by 3 (0 self)

Memory Design and Exploration for Low Power, Embedded Systems

by Wen-Tsong Shiue, Chaitali Chakrabarti, 2001
"... In this paper, we describe a procedure for memory design and exploration for low power embedded systems. Our system consists of an instruction cache and a data cache on-chip, and a large memory off-chip. In the first step, we try to reduce the power consumption due to memory traffic by applying me ..."
Cited by 22 (3 self)

Fast Instruction Memory Hierarchy Power Exploration for Embedded Systems

by Nikolaos Kroupis, Dimitrios Soudris
"... A typical instruction memory design exploration process using simulation tools for various cache parameters is a rather time-consuming process, even for low complexity applications. In order to design a power efficient memory hierarchy of an embedded system, a huge number of system simulations are ..."
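
Replacing per-configuration simulation with an estimate usually means plugging access and miss counts into a model such as E ≈ accesses × E_access + misses × E_miss. The sketch below compares a few cache configurations that way; the counts, miss rates, and per-access energies are placeholders, not figures from the paper.

/* Analytical (simulation-free) comparison of cache configurations:
 * total energy estimated as accesses*E_access + misses*E_miss.
 * All numbers are placeholders for illustration only. */
#include <stdio.h>

typedef struct {
    const char *name;
    double      e_access_nj;   /* energy per cache access (nJ)            */
    double      e_miss_nj;     /* energy per miss, incl. off-chip (nJ)    */
    double      miss_rate;     /* estimated, e.g. from one profiling run  */
} CacheConfig;

int main(void)
{
    const unsigned long accesses = 50000000UL;   /* fetches in the application */
    const CacheConfig cfg[] = {
        { "4KB direct-mapped", 0.10, 4.0, 0.060 },
        { "8KB direct-mapped", 0.14, 4.0, 0.035 },
        { "8KB 2-way",         0.22, 4.0, 0.025 },
    };

    for (unsigned i = 0; i < sizeof cfg / sizeof cfg[0]; i++) {
        double misses   = accesses * cfg[i].miss_rate;
        double total_mj = (accesses * cfg[i].e_access_nj +
                           misses   * cfg[i].e_miss_nj) * 1e-6;   /* nJ -> mJ */
        printf("%-20s estimated energy: %.1f mJ\n", cfg[i].name, total_mj);
    }
    return 0;
}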