Results 1 - 10 of 162
Cache-Aware Scheduling and Analysis for Multicores
"... The major obstacle to use multicores for real-time applications is that we may not predict and provide any guarantee on real-time properties of embedded software on such platforms; the way of handling the on-chip shared resources such as L2 cache may have a significant impact on the timing predictab ..."
Cited by 35 (5 self)
predictability. In this paper, we propose to use cache space isolation techniques to avoid cache contention for hard realtime tasks running on multicores with shared caches. We present a scheduling strategy for real-time tasks with both timing and cache space constraints, which allows each task to use a fixed
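The cache space isolation this abstract describes is commonly realized in practice through page coloring, where the OS restricts each task's physical pages to a disjoint set of cache "colors". A minimal sketch, assuming an illustrative cache geometry (the constants and function names are not from the paper):

```python
# Page-coloring sketch: pages with different colors map to disjoint
# cache sets, so tasks confined to disjoint color sets cannot evict
# each other's lines from a physically indexed shared cache.
PAGE_SIZE = 4096               # bytes (illustrative)
CACHE_SIZE = 2 * 1024 * 1024   # 2 MiB shared L2 (illustrative)
WAYS = 8                       # associativity (illustrative)
LINE = 64                      # line size in bytes (illustrative)

SETS = CACHE_SIZE // (WAYS * LINE)
# One color per group of cache sets spanned by a single page.
NUM_COLORS = SETS * LINE // PAGE_SIZE

def page_color(phys_addr: int) -> int:
    """Cache color of the physical page holding phys_addr."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

def isolated(pages_a, pages_b) -> bool:
    """True if two tasks' page sets can never conflict in the cache."""
    colors_a = {page_color(p) for p in pages_a}
    colors_b = {page_color(p) for p in pages_b}
    return not (colors_a & colors_b)
```

With this geometry there are 64 colors, so two hard real-time tasks allocated pages of different colors get guaranteed-disjoint cache set partitions.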
DEMB: Cache-Aware Scheduling for Distributed Query Processing
"... Abstract. Leveraging data in distributed caches for large scale query processing applications is becoming more important, given current trends toward building large scalable distributed systems by connecting multiple heterogeneous less powerful machines rather than purchasing expensive homogeneous ..."
homogeneous and very powerful machines. As more servers are added to such clusters, more memory is available for caching data objects across the distributed machines. However, the cached objects are dispersed, and traditional query scheduling policies that take into account only load balancing do
A Hybrid Framework Bridging Locality Analysis and Cache-Aware Scheduling for CMPs, 2007
"... Industry is rapidly moving towards the adoption of Chip Multi-Processors (CMPs). The sharing of memory hierarchy becomes deeper and heterogeneous. Without a good understanding of the sharing, most current systems schedule processes in a contention-oblivious way, causing systems severely underutilize ..."
of cache contention at the same time. The goal is to produce a comprehensive understanding of the relations between program characteristics and run-time behavior in shared-cache systems, meanwhile developing a scalable adaptive contention-aware scheduling system. The preliminary experiments demonstrate
Lightweight Task Analysis for Cache-Aware Scheduling on Heterogeneous Clusters (PDPTA’08)
"... We present a novel characterization of how a program stresses cache. This characterization permits fast performance prediction in order to simulate and assist task scheduling on heterogeneous clusters. It is based on the estimation of stack distance probability distributions. The analysis requires ..."
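The stack distance probability distributions this abstract relies on can be computed exactly from a memory reference trace. A minimal sketch of that exact computation (the paper itself describes a fast estimation, and the function names here are illustrative):

```python
from collections import Counter, OrderedDict

def stack_distances(trace):
    """Stack (reuse) distance of each access: the number of distinct
    addresses touched since the previous access to the same address,
    or None for a cold (first-time) access."""
    lru = OrderedDict()   # addresses ordered from least- to most-recent
    out = []
    for addr in trace:
        if addr in lru:
            # Distance = count of distinct addresses more recent than
            # the last touch of addr. (O(n) scan; fine for a sketch.)
            keys = list(lru)
            out.append(len(keys) - 1 - keys.index(addr))
            lru.move_to_end(addr)
        else:
            out.append(None)
            lru[addr] = True
    return out

def distance_distribution(trace):
    """Empirical probability distribution over finite stack distances."""
    dists = [d for d in stack_distances(trace) if d is not None]
    counts = Counter(dists)
    total = sum(counts.values())
    return {d: c / total for d, c in sorted(counts.items())}
```

For example, `stack_distances(['a', 'b', 'a'])` yields `[None, None, 1]`: the second access to `a` saw one distinct address (`b`) in between, so it would hit in any LRU cache holding at least two blocks. Aggregating such distances into a distribution is what lets cache performance be predicted for arbitrary cache sizes.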
Optimizing Integrated Application Performance with Cache-aware Metascheduling
"... Abstract. Integrated applications running in multi-tenant environments are often subject to quality-of-service (QoS) requirements, such as resource and performance constraints. It is hard to allocate resources between multiple users accessing these types of applications while meeting all QoS constra ..."
while meeting QoS constraints. First, we present cache-aware metascheduling, which is a novel approach to modifying system execution schedules to increase cache-hit rate and reduce system execution time. Second, we apply cache-aware metascheduling to 11 simulated software systems to create 2 different
Cache-Aware Compositional Analysis of Real-Time Multicore Virtualization Platforms
"... Abstract—Multicore processors are becoming ubiquitous, and it is becoming increasingly common to run multiple real-time systems on a shared multicore platform. While this trend helps to reduce cost and to increase performance, it also makes it more challenging to achieve timing guarantees and functi ..."
Cited by 4 (3 self)
interference between tasks but also on the indirect interference between virtual processors and the tasks executing on them. In this paper, we present a cache-aware compositional analysis technique that can be used to ensure timing guarantees of components scheduled on a multicore virtualization platform. Our
On the Design and Implementation of a Cache-Aware Multicore Real-Time Scheduler, 2009
"... Multicore architectures, which have multiple processing units on a single chip, have been adopted by most chip manufacturers. Most such chips contain on-chip caches that are shared by some or all of the cores on the chip. Prior work has presented methods for improving the performance of such caches ..."
Cited by 28 (1 self)
-related performance gains. This paper addresses these two issues in an implementation of a cache-aware soft real-time scheduler within Linux, and shows that the use of this scheduler can result in performance improvements that directly result from a decrease in shared cache miss rates.
LWFG: A Cache-Aware Multi-core Real-Time Scheduling Algorithm, 2012
"... As the number of processing cores contained in modern processors continues to increase, cache hierarchies are becoming more complex. This added complexity has the effect of increasing the potential cost of any cache misses on such architectures. When cache misses become more costly, minimizing them ..."
Cited by 3 (0 self)
becomes even more important, particularly in terms of scalability concerns. In this thesis, we consider the problem of cache-aware real-time scheduling on multiprocessor systems. One avenue for improving real-time performance on multi-core platforms is task partitioning. Partitioning schemes statically
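Static task partitioning of the kind this abstract mentions is commonly reduced to bin packing over task utilizations. The sketch below uses a worst-fit decreasing heuristic as a generic illustration; it is not the LWFG algorithm from the paper:

```python
def worst_fit_decreasing(utilizations, num_cores):
    """Assign tasks (given by CPU utilization in [0, 1]) to cores,
    always placing the next-largest task on the least-loaded core.
    Returns a list of task-index lists per core, or None if some
    task cannot fit without exceeding utilization 1.0."""
    load = [0.0] * num_cores
    assignment = [[] for _ in range(num_cores)]
    # Sort task indices by decreasing utilization.
    order = sorted(range(len(utilizations)), key=lambda i: -utilizations[i])
    for i in order:
        c = min(range(num_cores), key=lambda k: load[k])  # least-loaded core
        if load[c] + utilizations[i] > 1.0:
            return None  # infeasible under this heuristic
        load[c] += utilizations[i]
        assignment[c].append(i)
    return assignment
```

Worst-fit spreads load evenly across cores, which leaves headroom on each core; cache-aware partitioners refine this kind of baseline by also grouping tasks that share data, so those tasks land on cores sharing a cache.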
Cache-Aware GPU Memory Scheduling Scheme for CT Back-Projection - IEEE Medical Imaging Conference, 2010
"... Abstract–Graphics processing units (GPUs) are well suited to compute-intensive tasks and are among the fastest solutions to perform Computed Tomography (CT) reconstruction. As previous research shows, the bottleneck of the GPU implementation is not the computational power, but the memory bandwidth. We pro ..."
Cited by 4 (3 self)
propose a cache-aware memory-scheduling scheme for the backprojection, which can ensure a better load-balancing between GPU processors and the GPU memory. The proposed reshuffling method can be directly applied on existing GPU-accelerated CT reconstruction pipelines. The experimental results show that our