CiteSeerX search results
Results 1 - 10 of 162

Cache-Aware Scheduling and Analysis for Multicores

by Nan Guan, Martin Stigge, Wang Yi, Ge Yu
"... The major obstacle to use multicores for real-time applications is that we may not predict and provide any guarantee on real-time properties of embedded software on such platforms; the way of handling the on-chip shared resources such as L2 cache may have a significant impact on the timing predictab ..."
Abstract - Cited by 35 (5 self)
predictability. In this paper, we propose to use cache space isolation techniques to avoid cache contention for hard realtime tasks running on multicores with shared caches. We present a scheduling strategy for real-time tasks with both timing and cache space constraints, which allows each task to use a fixed
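
A loose illustration of the idea sketched above, in Python: dispatch a ready task only while enough free partitions of the shared cache remain for it. The Task fields, the greedy policy, and all names are assumptions made for this example, not the authors' algorithm.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        cache_partitions: int  # shared-cache partitions the task needs exclusively
        remaining_time: int    # remaining execution time, in abstract time units

    def dispatch(ready, free_partitions, free_cores):
        # Greedily pick ready tasks that fit in both the free cores and the free
        # cache space; a task that does not fit simply waits for partitions.
        scheduled = []
        for task in sorted(ready, key=lambda t: t.remaining_time):
            if free_cores > 0 and task.cache_partitions <= free_partitions:
                scheduled.append(task)
                free_cores -= 1
                free_partitions -= task.cache_partitions
        return scheduled

    ready = [Task("t1", 4, 3), Task("t2", 2, 5), Task("t3", 6, 2)]
    print([t.name for t in dispatch(ready, free_partitions=8, free_cores=2)])
    # -> ['t3', 't2']: t1 would overflow the cache space left after t3 is placed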

DEMB: Cache-Aware Scheduling for Distributed Query Processing

by Junyong Lee, Youngmoon Eom, Alan Sussman, Beomseok Nam
"... Abstract. Leveraging data in distributed caches for large scale query process-ing applications is becoming more important, given current trends toward build-ing large scalable distributed systems by connecting multiple heterogeneous less powerful machines rather than purchasing expensive homogeneous ..."
Abstract
homogeneous and very powerful machines. As more servers are added to such clusters, more memory is available for caching data objects across the distributed machines. However the cached objects are dispersed and traditional query scheduling policies that take into account only load balancing do
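
A hedged sketch of the trade-off this abstract describes (not DEMB itself): send a query to a server that already caches the requested object, unless that server is overloaded, in which case fall back to the least-loaded server. The server names, loads, and overload threshold are invented for the example.

    def assign(query_obj, servers, load, cache_contents, overload_factor=2.0):
        # Prefer a server that already caches the object, to avoid refetching it,
        # but fall back to pure load balancing when that server is overloaded.
        avg_load = sum(load.values()) / len(servers)
        cached_on = [s for s in servers if query_obj in cache_contents[s]]
        if cached_on:
            best = min(cached_on, key=load.get)
            if load[best] <= overload_factor * max(avg_load, 1e-9):
                return best
        return min(servers, key=load.get)  # chosen server will fetch and cache it

    servers = ["s1", "s2", "s3"]
    load = {"s1": 10, "s2": 3, "s3": 4}
    cache = {"s1": {"objA"}, "s2": set(), "s3": {"objB"}}
    print(assign("objA", servers, load, cache))  # "s1", since it is not overloaded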

A Hybrid Framework Bridging Locality Analysis and Cache-Aware Scheduling for CMPs

by Xipeng Shen , 2007
"... Industry is rapidly moving towards the adoption of Chip Multi-Processors (CMPs). The sharing of memory hierarchy becomes deeper and heterogeneous. Without a good understanding of the sharing, most current systems schedule processes in a contention-oblivious way, causing systems severely underutilize ..."
Abstract
of cache contention at the same time. The goal is to produce a comprehensive understanding of the relations between program characteristics and run-time behavior in shared-cache systems, meanwhile developing a scalable adaptive contention-aware scheduling system. The preliminary experiments demonstrate

Lightweight Task Analysis for Cache-Aware Scheduling on Heterogeneous Clusters (PDPTA’08)

by Sverre Jarp
"... We present a novel characterization of how a pro-gram stresses cache. This characterization permits fast performance prediction in order to simulate and assist task scheduling on heterogeneous clus-ters. It is based on the estimation of stack distance probability distributions. The analysis requires ..."
Abstract
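
The stack distance used here has a standard definition: the number of distinct addresses touched since the previous access to the same address (infinite for a first access). The naive sketch below builds the empirical distribution from a toy trace; it illustrates the notion only and is not the paper's estimator.

    from collections import Counter

    def stack_distance_histogram(trace):
        stack = []       # LRU stack of addresses, most recently used at the end
        hist = Counter()
        for addr in trace:
            if addr in stack:
                pos = stack.index(addr)
                hist[len(stack) - 1 - pos] += 1  # distinct addresses above it
                stack.pop(pos)
            else:
                hist["inf"] += 1                 # cold access: infinite distance
            stack.append(addr)
        return hist

    trace = ["a", "b", "c", "a", "b", "b", "c"]
    hist = stack_distance_histogram(trace)
    total = sum(hist.values())
    print({k: round(v / total, 2) for k, v in hist.items()})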

CASC: A Cache-Aware Scheduling Algorithm For Multithreaded Chip Multiprocessors, http://research.sun.com/scalable/pubs/CASC.pdf

by Alexandra Fedorova, Margo Seltzer, Michael D. Smith, Christopher Small
"... ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
Abstract not found

Optimizing Integrated Application Performance with Cache-aware Metascheduling

by Brian Dougherty, Jules White, Russell Kegley, Jonathan Preston, Douglas C. Schmidt, Aniruddha Gokhale
"... Abstract. Integrated applications running in multi-tenant environments are often subject to quality-of-service (QoS) requirements, such as resource and performance constraints. It is hard to allocate resources between multiple users accessing these types of applications while meeting all QoS constra ..."
Abstract
while meeting QoS constraints. First, we present cache-aware metascheduling, which is a novel approach to modifying system execution schedules to increase cache-hit rate and reduce system execution time. Second, we apply cache-aware metascheduling to 11 simulated software systems to create 2 different
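
As a rough picture of what modifying an execution schedule for cache-hit rate can mean (a generic greedy reordering, not the paper's technique), the sketch below reorders tasks so that consecutive tasks share as much data as possible. Task names and data footprints are invented.

    def reorder(schedule, footprint):
        # Greedy nearest-neighbour ordering over data-set overlap: always run next
        # the task that shares the most data with the task that just finished.
        remaining = list(schedule)
        ordered = [remaining.pop(0)]
        while remaining:
            prev = footprint[ordered[-1]]
            nxt = max(remaining, key=lambda t: len(prev & footprint[t]))
            remaining.remove(nxt)
            ordered.append(nxt)
        return ordered

    footprint = {"A": {"x", "y"}, "B": {"p", "q"}, "C": {"y", "z"}, "D": {"q", "r"}}
    print(reorder(["A", "B", "C", "D"], footprint))  # ['A', 'C', 'B', 'D']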

Cache-Aware Compositional Analysis of Real-Time Multicore Virtualization Platforms

by Meng Xu, Linh T. X. Phan, Insup Lee, Oleg Sokolsky, Sisu Xi, Chenyang Lu, Christopher Gill
"... Abstract—Multicore processors are becoming ubiquitous, and it is becoming increasingly common to run multiple real-time systems on a shared multicore platform. While this trend helps to reduce cost and to increase performance, it also makes it more challenging to achieve timing guarantees and functi ..."
Abstract - Cited by 4 (3 self)
interference between tasks but also on the indirect interference between virtual processors and the tasks executing on them. In this paper, we present a cache-aware compositional analysis technique that can be used to ensure timing guarantees of components scheduled on a multicore virtualization platform. Our
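
As a generic illustration of the kind of bound such an analysis produces (a standard overhead-inflation form, not this paper's specific model), the effective demand of a task can be written as

    C_i' = C_i + N_i_miss * d_miss

where N_i_miss bounds the extra shared-cache misses the task may suffer from interference, direct or indirect through its virtual processor, and d_miss is the worst-case penalty per miss.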

On the Design and Implementation of a Cache-Aware Multicore Real-Time Scheduler

by John M. Calandrino, James H. Anderson , 2009
"... Multicore architectures, which have multiple processing units on a single chip, have been adopted by most chip manufacturers. Most such chips contain on-chip caches that are shared by some or all of the cores on the chip. Prior work has presented methods for improving the performance of such caches ..."
Abstract - Cited by 28 (1 self)
-related performance gains. This paper addresses these two issues in an implementation of a cache-aware soft real-time scheduler within Linux, and shows that the use of this scheduler can result in performance improvements that directly result from a decrease in shared cache miss rates.
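
One simple way a scheduler can reduce shared-cache miss rates (a simplification for illustration, not the scheduler described in the paper) is to avoid co-scheduling jobs whose combined working sets would overflow the cache shared by the cores. The job names, working-set sizes, and greedy policy below are assumptions.

    def pick_coschedule(ready_jobs, cores, cache_size_kb):
        # ready_jobs: list of (name, working_set_kb), highest priority first.
        # Defer any job whose working set would overflow the shared cache this slot.
        chosen, used = [], 0
        for name, ws in ready_jobs:
            if len(chosen) == cores:
                break
            if used + ws <= cache_size_kb:
                chosen.append(name)
                used += ws
        return chosen

    jobs = [("video", 512), ("sensor", 128), ("logger", 2048), ("ui", 256)]
    print(pick_coschedule(jobs, cores=4, cache_size_kb=1024))  # logger is deferred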

LWFG: A Cache-Aware Multi-core Real-Time Scheduling Algorithm

by Aaron C. Lindsay (advisory committee: Binoy Ravindran, co-chair; Dennis G. Kafura; Anil Kumar S. Vullikanti), 2012
"... As the number of processing cores contained in modern processors continues to increase, cache hierarchies are becoming more complex. This added complexity has the effect of increasing the potential cost of any cache misses on such architectures. When cache misses become more costly, minimizing them ..."
Abstract - Cited by 3 (0 self)
becomes even more important, particularly in terms of scalability concerns. In this thesis, we consider the problem of cache-aware real-time scheduling on multiprocessor systems. One avenue for improving real-time performance on multi-core platforms is task partitioning. Partitioning schemes statically
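
A hedged sketch of the partitioning idea in general (not the LWFG algorithm): place a task behind the same cache as the tasks it shares memory with when a utilization cap allows, otherwise fall back to the least-loaded cluster. Task names, utilizations, sharing sets, and the cap are invented.

    def partition(tasks, shares_with, clusters, cap=1.0):
        # tasks: {name: utilization}; shares_with: {name: set of task names}.
        placement, load = {}, {c: 0.0 for c in clusters}
        for name, util in sorted(tasks.items(), key=lambda kv: -kv[1]):
            # Prefer a cluster already holding a task this one shares data with.
            preferred = [c for c in clusters
                         if any(placement.get(p) == c for p in shares_with.get(name, ()))
                         and load[c] + util <= cap]
            target = preferred[0] if preferred else min(clusters, key=load.get)
            placement[name] = target
            load[target] += util
        return placement

    tasks = {"a": 0.4, "b": 0.3, "c": 0.5, "d": 0.2}
    shares = {"a": {"b"}, "b": {"a"}, "c": {"d"}, "d": {"c"}}
    print(partition(tasks, shares, clusters=["L2_0", "L2_1"]))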

Cache-Aware GPU Memory Scheduling Scheme for CT Back-Projection

by Ziyi Zheng, Klaus Mueller - IEEE Medical Imaging Conference , 2010
"... Abstract–Graphic process units (GPUs) are well suited to computing-intensive tasks and are among the fastest solutions to perform Computed Tomography (CT) reconstruction. As previous research shows, the bottleneck of GPU-implementation is not the computational power, but the memory bandwidth. We pro ..."
Abstract - Cited by 4 (3 self)
propose a cache-aware memory-scheduling scheme for the backprojection, which can ensure a better load-balancing between GPU processors and the GPU memory. The proposed reshuffling method can be directly applied on existing GPU-accelerated CT reconstruction pipelines. The experimental results show that our
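
A loose, host-side illustration of the reshuffling idea (not the paper's scheme): group back-projection work items by the region of projection memory they read, so that items executed together reuse the same cached region. The region mapping and the toy indices are invented.

    def reshuffle(work_items, region_of):
        # Stable sort of work items by the projection-memory region each one reads,
        # so neighbouring items in the new order hit the same cached region.
        return sorted(work_items, key=region_of)

    voxels = [7, 0, 12, 5, 3, 9, 14, 2]           # toy 1-D "voxel" indices
    print(reshuffle(voxels, region_of=lambda v: v // 4))
    # -> [0, 3, 2, 7, 5, 9, 12, 14]: items grouped by the region v // 4 they read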