Results 1–10 of 10
On the complexity of mapping linear chain applications onto heterogeneous platforms
Parallel Processing Letters (PPL), 2009
Abstract

Cited by 5 (5 self)
In this paper, we explore the problem of mapping simple application patterns onto large-scale heterogeneous platforms. An important optimization criterion that should be considered in such a framework is the latency, or makespan, which measures the response time of the system to process one single data set entirely. In this work we focus on linear chain applications, which are representative of a broad class of real-life applications. For such applications, we can consider one-to-one mappings, in which each stage is mapped onto a single processor. However, in order to reduce the communication cost, it seems natural to group stages into intervals. The interval mapping problem can be solved in a straightforward way if the platform has homogeneous communications: the whole chain is grouped into a single interval, which in turn is mapped onto the fastest processor. But the problem becomes harder on a fully heterogeneous platform. Indeed, we prove the NP-completeness of this problem. Furthermore, we prove that neither the interval mapping problem nor the similar one-to-one mapping problem can be approximated within any constant factor (unless P=NP).
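The interval-mapping problem the abstract describes can be illustrated with a small exhaustive search: since the heterogeneous variant is NP-complete, brute force over all interval partitions and processor assignments is the natural baseline for tiny instances. This is a hypothetical sketch, not code from the paper; the simplification of a single homogeneous link cost `comm` between consecutive intervals is an assumption made here for brevity.

```python
from itertools import combinations, permutations

def best_interval_mapping(work, speed, comm):
    """Exhaustive search over interval mappings of a linear chain.

    work[j]  : computation weight of stage j
    speed[p] : speed of processor p (possibly heterogeneous)
    comm     : cost charged per communication between two consecutive
               intervals (homogeneous links -- a simplifying assumption)

    Returns (latency, cuts, processors) of an optimal mapping.
    Exponential time, so only usable on tiny instances; the general
    heterogeneous problem is NP-complete.
    """
    n, m = len(work), len(speed)
    best = (float("inf"), None, None)
    for k in range(1, min(n, m) + 1):                  # number of intervals
        for cuts in combinations(range(1, n), k - 1):  # interval boundaries
            bounds = (0,) + cuts + (n,)
            loads = [sum(work[bounds[i]:bounds[i + 1]]) for i in range(k)]
            for procs in permutations(range(m), k):    # distinct processors
                lat = sum(l / speed[p] for l, p in zip(loads, procs))
                lat += (k - 1) * comm                  # inter-interval links
                if lat < best[0]:
                    best = (lat, cuts, procs)
    return best

lat, cuts, procs = best_interval_mapping([3, 1, 4, 2], [2.0, 1.0], comm=0.5)
```

On this toy instance the search returns the single-interval mapping on the fastest processor, which is consistent with the homogeneous-communication result quoted in the abstract.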
Reliability and performance optimization of pipelined real-time systems
, 2009
Abstract

Cited by 4 (4 self)
We consider pipelined real-time systems, commonly found in assembly lines, consisting of a chain of tasks executing on a distributed platform. Their processing is pipelined: each processor executes only one interval of consecutive tasks. We are therefore interested in minimizing both the input-output latency and the period. For dependability reasons, we are also interested in maximizing the reliability of the system. We therefore assign several processors to each task, so as to increase the reliability of the system. We assume that both processors and communication links are unreliable and subject to transient failures, the arrival of which follows a constant-parameter Poisson law. We also assume that the failures are statistically independent events. We study several variants of this multiprocessor mapping problem under several hypotheses on the target platform (homogeneous/heterogeneous speeds and/or failure rates). We provide NP-hardness complexity results, as well as optimal mapping algorithms for polynomial problem instances. Keywords: pipelined real-time systems, interval mapping, multi-criteria (reliability, latency, period) optimization, complexity results, dynamic programming algorithm.
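The reliability model in this abstract (independent transient failures arriving as a constant-parameter Poisson process, with each interval replicated on several processors) admits a short closed-form sketch. The code below is an illustration under those stated assumptions, not the paper's algorithm; the assumption that every replica runs the interval for the same duration is a simplification made here.

```python
import math

def interval_reliability(lambdas, duration):
    """Reliability of one interval replicated on several processors.

    lambdas  : Poisson failure rate of each replica's processor
               (transient failures, statistically independent)
    duration : execution time of the interval on each replica
               (assumed identical across replicas for simplicity)

    A single replica survives with probability exp(-lambda * duration);
    the replicated interval succeeds if at least one replica survives:
        R = 1 - prod_p (1 - exp(-lambda_p * duration))
    """
    fail_all = 1.0
    for lam in lambdas:
        fail_all *= 1.0 - math.exp(-lam * duration)
    return 1.0 - fail_all

def system_reliability(intervals):
    """A chain succeeds only if every interval succeeds (independence)."""
    r = 1.0
    for lambdas, duration in intervals:
        r *= interval_reliability(lambdas, duration)
    return r

r_single = interval_reliability([0.001], 10.0)         # one replica
r_duplex = interval_reliability([0.001, 0.001], 10.0)  # two replicas
```

Replication strictly improves per-interval reliability here, which is why the mapping problem trades latency and period against the number of replicas per interval.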
A Survey of Pipelined Workflow Scheduling: Models and Algorithms
, 2010
Abstract

Cited by 1 (0 self)
A large class of applications needs to execute the same workflow on different data sets. Efficient execution of such applications necessitates intelligent distribution of the application components and tasks on a parallel machine, and orchestrating the execution by utilizing task, data, pipelined, and replicated parallelism. The scheduling problem that encompasses all of these techniques is called pipelined workflow scheduling, and it has been widely studied in the last decade. Multiple models and algorithms have flourished to tackle various programming paradigms, constraints, machine behaviors, or goals. This paper surveys the field by summing up and structuring known results and approaches.
Mapping pipelined applications with replication to increase throughput and reliability
, 2009
Abstract
Mapping pipelined applications with replication to increase throughput and reliability. Anne Benoit,
Scheduling linear chain streaming applications on heterogeneous systems with failures
, 2013
Abstract
In this paper, we study the problem of optimizing the throughput of streaming applications on heterogeneous platforms subject to failures. Applications are linear graphs of tasks (pipelines), with a type associated to each task. The challenge is to map each task onto one machine of a target platform, where each machine must be specialized to process only one task type; every machine is able to process all the types before being specialized, so specialization avoids costly setups. The objective is to maximize the throughput, i.e., the rate at which jobs can be processed when accounting for failures. Each instance can thus be performed by any machine specialized in its type, and the workload of the system can be shared among a set of specialized machines. For identical machines, we prove that an optimal solution can be computed in polynomial time. However, the problem becomes NP-hard when two machines may compute the same task type at different speeds. Several polynomial-time heuristics are designed for the most realistic specialized settings. Simulation results assess their efficiency, showing that the best heuristics obtain a good throughput, much better than that of a random mapping. Moreover, the throughput is close to the optimal solution in the particular cases where the optimal throughput can be computed.
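The throughput objective described here (machines specialized by task type, workload of each type shared among its specialized machines, system limited by the slowest type) can be sketched generically. This is a hypothetical evaluation function under that simplified model, not the paper's algorithm, and it ignores the failure model for brevity.

```python
def mapping_throughput(work, assignment, speed):
    """Throughput of a specialization of machines to task types.

    work[t]       : processing weight of the task of type t
    assignment[m] : type that machine m is specialized in
    speed[m]      : speed of machine m

    Machines specialized in the same type share that type's workload,
    so type t is processed at rate sum(speeds of its machines) / work[t];
    the pipeline throughput is the rate of the slowest type.
    """
    rate = {t: 0.0 for t in range(len(work))}
    for m, t in enumerate(assignment):
        rate[t] += speed[m]
    return min(rate[t] / work[t] for t in range(len(work)))

# Two identical machines share type 0 (weight 2), one handles type 1:
thr = mapping_throughput([2.0, 1.0], [0, 0, 1], [1.0, 1.0, 1.0])
```

Under this model, choosing the assignment that balances the per-type rates is exactly what the heuristics in the abstract are trying to do; heterogeneous speeds make that balancing NP-hard.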