Results 1-10 of 26
Assignment and scheduling of communicating periodic tasks in distributed real-time systems
 IEEE Transactions on Software Engineering, 1997
"... ABSTRACT We present an optimal solution to the problem of allocating communicating periodic tasks to heterogeneous processing nodes (PNs) in a distributed realtime system. The solution is optimal in the sense of minimizing the maximum normalized task response time, called the system hazard, subject ..."
Abstract

Cited by 37 (1 self)
Abstract: We present an optimal solution to the problem of allocating communicating periodic tasks to heterogeneous processing nodes (PNs) in a distributed real-time system. The solution is optimal in the sense of minimizing the maximum normalized task response time, called the system hazard, subject to the precedence constraints resulting from intercommunication among the tasks to be allocated. Minimization of the system hazard ensures that the solution algorithm will allocate tasks so as to meet all task deadlines under an optimal schedule, whenever such an allocation exists. The task system is modeled with a task graph (TG), in which computation and communication modules, communication delays, and intertask precedence constraints are clearly described. Tasks described by this TG are assigned to PNs by using a branch-and-bound (B&B) search algorithm. The algorithm traverses a search tree whose leaves correspond to potential solutions to the task allocation problem. We use a bounding method that prunes, in polynomial time, non-leaf vertices that cannot lead to an optimal solution, while ensuring that the search path leading to an optimal solution will never be pruned. For each generated leaf vertex we compute the exact cost using the algorithm developed in [1]. The lowest-cost leaf vertex (the one with the least system hazard) represents an optimal task allocation. Computational experiences and examples are provided to demonstrate the concept, utility, and power of the proposed approach.
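To make the search concrete, here is a minimal branch-and-bound sketch in the spirit of this abstract. The task data, the simplified load-based "hazard" function, and the pruning bound are all hypothetical stand-ins; the paper's actual schedulability analysis and precedence handling are far more involved.

```python
# Hypothetical data: exec_time[task][node] and period[task].
exec_time = [[2, 3], [4, 2], [3, 3]]   # rows: tasks, columns: processing nodes
period = [10, 8, 12]

best = {"hazard": float("inf"), "assign": None}

def hazard(assign):
    # Stand-in "system hazard": max normalized response time, approximating
    # each task's response time by the total load on its node. The paper's
    # real cost accounts for scheduling and precedence constraints.
    h = 0.0
    for t, n in enumerate(assign):
        load = sum(exec_time[u][n] for u, m in enumerate(assign) if m == n)
        h = max(h, load / period[t])
    return h

def search(assign):
    if len(assign) == len(period):
        h = hazard(assign)
        if h < best["hazard"]:
            best["hazard"], best["assign"] = h, list(assign)
        return
    for n in range(len(exec_time[0])):
        assign.append(n)
        # The partial hazard only grows as tasks are added, so it is a
        # valid lower bound: prune unless it beats the incumbent.
        if hazard(assign) < best["hazard"]:
            search(assign)
        assign.pop()

search([])
print(best["hazard"], best["assign"])
```

Because the partial-assignment hazard is monotone, pruned subtrees can never contain a strictly better leaf, mirroring the "never prune the optimal path" guarantee described above.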
A Framework for Exploiting Task and Data-Parallelism on Distributed Memory Multicomputers
 IEEE Transactions on Parallel and Distributed Systems, 1997
"... offer significant advantages over shared memory multiprocessors in terms of cost and scalability. Unfortunately, the utilization of all the available computational power in these machines involves a tremendous programming effort on the part of users, which creates a need for sophisticated compiler a ..."
Abstract

Cited by 32 (0 self)
Distributed memory multicomputers offer significant advantages over shared memory multiprocessors in terms of cost and scalability. Unfortunately, the utilization of all the available computational power in these machines involves a tremendous programming effort on the part of users, which creates a need for sophisticated compiler and runtime support for distributed memory machines. In this paper, we explore a new compiler optimization for regular scientific applications: the simultaneous exploitation of task and data parallelism. Our optimization is implemented as part of the PARADIGM HPF compiler framework we have developed. The intuitive idea behind the optimization is the use of task parallelism to control the degree of data parallelism of individual tasks. The reason this provides increased performance is that data parallelism provides diminishing returns as the number of processors used is increased. By controlling the number of processors used for each data-parallel task in an application and by concurrently executing these tasks, we make program execution more efficient and, therefore, faster. A practical implementation of a task- and data-parallel scheme of execution for an application on a distributed memory multicomputer also involves data redistribution. This data redistribution causes an overhead. However, as our experimental results show, this overhead is not a problem; execution of a program using task and data parallelism together can be significantly faster than its execution using data parallelism alone. This makes our proposed optimization practical and extremely useful.
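The diminishing-returns argument can be shown with a few lines of arithmetic. Here a task's speedup on p processors is modeled as sublinear, s(p) = √p; both the speedup curve and the numbers are hypothetical, not taken from the paper.

```python
# Illustrative only: two independent data-parallel tasks, each of work w.
# Sublinear speedup s(p) = sqrt(p) captures the diminishing returns the
# abstract describes; the curve and constants are hypothetical.
def run_time(w, p):
    return w / (p ** 0.5)

w, P = 100.0, 16
pure_data = 2 * run_time(w, P)        # tasks run one after another on all P
task_and_data = run_time(w, P // 2)   # both tasks run concurrently on P/2 each
print(pure_data, task_and_data)
```

Under this model, running the two tasks concurrently on half the machine each beats running them back-to-back on the whole machine, which is exactly the effect the optimization exploits.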
An Efficient Approximation Algorithm for Minimizing Makespan on Uniformly Related Machines
 Journal of Algorithms, 1999
"... We give a new efficient approximation algorithm for scheduling precedence constrained jobs on machines with different speeds. The problem is as follows. We are given n jobs to be scheduled on a set of m machines. Jobs have processing times and machines have speeds. It takes p j =s i units of time ..."
Abstract

Cited by 26 (4 self)
We give a new efficient approximation algorithm for scheduling precedence-constrained jobs on machines with different speeds. The problem is as follows. We are given n jobs to be scheduled on a set of m machines. Jobs have processing times and machines have speeds. It takes p_j / s_i units of time for machine i with speed s_i to process job j with processing requirement p_j. Precedence constraints between jobs are given in the form of a partial order. If j ≺ k, processing of k cannot start until j's execution is completed. The objective is to find a non-preemptive schedule to minimize C_max = max_j C_j, conventionally called the makespan of the schedule, where C_j is the completion time of job j. Recently Chudak and Shmoys [2] gave an algorithm with an approximation ratio of O(log m), significantly improving the earlier ratio of O(√m) due to Jaffe [6]. Their algorithm is based on solving a linear programming relaxation of the problem. Building on some of their ideas, we present a combinatorial algorithm that achieves a similar approximation ratio but runs in O(n ) time. In the process we also obtain a constant-factor approximation algorithm for the special case of precedence constraints induced by a collection of chains. Our algorithm is based on a new lower bound which we believe is of independent interest. Using a result of Shmoys, Wein, and Williamson [10], our algorithm can be extended to obtain an O(log m) approximation ratio even if jobs have release dates.
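The p_j / s_i timing model is easy to see in code. The sketch below is a simple speed-aware list-scheduling heuristic for independent jobs on uniformly related machines; the instance data are hypothetical, and precedence constraints (the hard part of the paper) are omitted.

```python
# Speed-aware list scheduling on uniformly related machines: job j on
# machine i takes p_j / s_i time units. Instance data are made up.
speeds = [1.0, 2.0]          # s_i for each machine
jobs = [4.0, 4.0, 2.0, 2.0]  # processing requirement p_j for each job

finish = [0.0] * len(speeds)  # current finish time of each machine
for p in jobs:
    # give the job to the machine that would complete it earliest
    i = min(range(len(speeds)), key=lambda m: finish[m] + p / speeds[m])
    finish[i] += p / speeds[i]

makespan = max(finish)
print(makespan)
```

The makespan here is max_j C_j, the quantity the paper's algorithm approximately minimizes in the much harder precedence-constrained setting.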
Processor Allocation and Scheduling of Macro Dataflow Graphs on Distributed Memory Multicomputers by the PARADIGM Compiler
 In Proceedings of the 1993 International Conference on Parallel Processing, volume II (Software), 1993
"... : Functional or Control parallelism is an effective way to increase speedups in Multicomputers. Programs for these machines are represented by Macro Dataflow Graphs (MDGs) for the purpose of functional parallelism analysis and exploitation. Algorithms for allocation and scheduling of MDGs have been ..."
Abstract

Cited by 13 (4 self)
Functional or control parallelism is an effective way to increase speedups on multicomputers. Programs for these machines are represented by Macro Dataflow Graphs (MDGs) for the purpose of functional-parallelism analysis and exploitation. Algorithms for allocation and scheduling of MDGs are discussed along with some analysis of their optimality. These algorithms attempt to minimize the execution time of any given MDG through exploitation of functional parallelism. Our preliminary results show their effectiveness over naive algorithms.

Keywords: Macro Dataflow Graphs, Distributed Memory Multicomputers, Allocation and Scheduling, Parallelizing Compilers, Optimization.

1 Introduction
Distributed memory multicomputers offer significant advantages over shared memory multiprocessors in terms of cost and scalability. Unfortunately, writing efficient software for them is an extremely laborious process for users. The PARADIGM compiler project at Illinois is aimed at devising a paral...
A Framework for Exploiting Data and Functional Parallelism on Distributed Memory Multicomputers
, 1994
"... Recent research efforts have shown the benefits of integrating functional and data parallelism over using either pure data parallelism or pure functional parallelism. The work in this paper presents a theoretical framework for deciding on a good execution strategy for a given program based on the av ..."
Abstract

Cited by 12 (2 self)
Recent research efforts have shown the benefits of integrating functional and data parallelism over using either pure data parallelism or pure functional parallelism. The work in this paper presents a theoretical framework for deciding on a good execution strategy for a given program based on the available functional and data parallelism in the program. The framework is based on assumptions about the form of computation and communication cost functions for multicomputer systems. We present mathematical functions for these costs and show that these functions are realistic. The framework also requires specification of the available functional and data parallelism for a given problem. For this purpose, we have developed a graphical programming tool. Currently, we have tested our approach using three benchmark programs on the Thinking Machines CM-5 and Intel Paragon. Results presented show that the approach is very effective and can provide a two- to three-fold increase in speedups over ap...
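A toy version of such computation/communication cost functions illustrates why a framework like this has something to optimize. The functional forms and constants below are hypothetical, chosen only to show that an intermediate processor count minimizes the total cost.

```python
# Toy cost model: a data-parallel task of work w on p processors costs
# w/p to compute plus o*(p-1) in communication overhead. The forms and
# constants are hypothetical, not the paper's actual cost functions.
def exec_time(w, p, o=1.0):
    return w / p + o * (p - 1)

w = 64.0
best_p = min(range(1, 17), key=lambda p: exec_time(w, p))
print(best_p, exec_time(w, best_p))
```

Because computation cost falls with p while communication cost rises, the minimum sits at an interior p, which is the kind of trade-off the framework's execution strategies navigate.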
A Comprehensive Approach to Parallel Data Flow Analysis
 In Int. Conf. Supercomputing, 1992
"... We present a comprehensive approach to performing data flow analysis in parallel. We identify three types of parallelism inherent in the data flow solution process: independentproblem parallelism, separateunit parallelism and algorithmic parallelism; and describe a unified framework to exploit the ..."
Abstract

Cited by 6 (1 self)
We present a comprehensive approach to performing data flow analysis in parallel. We identify three types of parallelism inherent in the data flow solution process: independent-problem parallelism, separate-unit parallelism, and algorithmic parallelism, and describe a unified framework to exploit them. Our investigations of typical Fortran programs reveal an abundance of the last two types of parallelism. In particular, we illustrate the exploitation of algorithmic parallelism in the design of our parallel hybrid data flow analysis algorithms. We report on the empirical performance of the parallel hybrid algorithm for the Reaching Definitions problem and the structural characteristics of the program flow graphs that affect algorithm performance.

Keywords: Data flow analysis, parallel algorithms, parallel data flow analysis.

1 Introduction
1.1 Motivation
Data flow analysis is a compile-time analysis technique that gathers information about the flow of data in the program. Data flow i...
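For context, this is the sequential baseline for the Reaching Definitions problem that such work parallelizes: the standard iterative solver for the IN/OUT data flow equations. The flow graph and gen/kill sets below are made up for illustration.

```python
# Sequential iterative solver for Reaching Definitions:
#   IN[b]  = union of OUT[p] over predecessors p of b
#   OUT[b] = GEN[b] | (IN[b] - KILL[b])
# Hypothetical flow graph (with a loop 1 -> 2 -> 1) and gen/kill sets.
succ = {0: [1], 1: [2, 3], 2: [1], 3: []}
gen = {0: {"d1"}, 1: {"d2"}, 2: {"d3"}, 3: set()}
kill = {0: set(), 1: set(), 2: {"d1"}, 3: set()}
preds = {b: [p for p in succ if b in succ[p]] for b in succ}

IN = {b: set() for b in succ}
OUT = {b: set() for b in succ}
changed = True
while changed:  # iterate to a fixed point
    changed = False
    for b in succ:
        IN[b] = set().union(*[OUT[p] for p in preds[b]])
        new_out = gen[b] | (IN[b] - kill[b])
        if new_out != OUT[b]:
            OUT[b] = new_out
            changed = True
print(sorted(IN[3]))
```

The per-node update steps in this loop are what the abstract's "algorithmic parallelism" distributes across processors.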
Quorum placement in networks to minimize access delays
 In Proceedings of the 24th Annual ACM Symposium on Principles of Distributed Computing (PODC), 2005
"... ..."
How "hard" Is Thread Partitioning and How "bad" Is a List Scheduling Based Partitioning Algorithm?
 In Proceedings of the 10th ACM Symposium on Parallel Algorithms and Architectures, 1998
"... Adequate compiler support is essential to take advantage of the emerging multithreaded architecture. In this paper, we address two important questions in thread partitioning, which is a key step in compiler design for multithreaded architectures. The questions in which we are interested are: how "h ..."
Abstract

Cited by 3 (0 self)
Adequate compiler support is essential to take advantage of emerging multithreaded architectures. In this paper, we address two important questions in thread partitioning, which is a key step in compiler design for multithreaded architectures. The questions in which we are interested are: how "hard" is it to partition threads, and how "bad" will a heuristic partitioning algorithm be? We propose a cost model for both multithreaded machines and user programs, and we formulate the thread partitioning problem as an optimization problem. Then, we answer the above two questions by proving that: 1) for the class of programs and architecture models we are interested in, the problem of thread partitioning for minimum execution time is NP-hard; 2) the run length produced by any list-scheduling-based thread partitioning algorithm is at most twice as long as that of an optimal solution.

1 Introduction
Multithreaded architectures have been attracting increased attention due to their ability of hidi...
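The heuristic being bounded is the classic Graham-style list scheduler. Below is a small sketch of it for a thread DAG on identical processors; the DAG, costs, and processor count are made up for illustration, and the paper's cost model is richer than this.

```python
# Graham-style list scheduling of a thread DAG on P identical processors,
# the kind of heuristic whose run length the paper bounds at 2x optimal.
# The DAG, thread costs, and P are hypothetical.
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {"a": 1, "b": 2, "c": 2, "d": 1}
P = 2  # processors

time, done = 0, set()
running = {}  # thread -> remaining execution time
while len(done) < len(deps):
    # greedily start any ready thread on a free processor
    ready = [t for t in deps if t not in done and t not in running
             and all(p in done for p in deps[t])]
    for t in ready[:P - len(running)]:
        running[t] = cost[t]
    # advance time to the next thread completion
    step = min(running.values())
    time += step
    for t in list(running):
        running[t] -= step
        if running[t] == 0:
            del running[t]
            done.add(t)
print(time)  # schedule length (run length)
```

Never idling a processor while a ready thread exists is what drives the factor-of-two style guarantee the abstract proves for this class of algorithms.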
QoS Adaptation in Real-Time Systems
, 1999
"... QOS ADAPTATION IN REALTIME SYSTEMS by Tarek F. Abdelzaher Chair: Kang G. Shin We propose to design, implement, and evaluate a software framework, called the Adaptware, that consists of architectural support, resourcemanagement mechanisms, and programming abstractions for adapting QualityofSe ..."
Abstract

Cited by 2 (1 self)
QOS ADAPTATION IN REAL-TIME SYSTEMS, by Tarek F. Abdelzaher. Chair: Kang G. Shin. We propose to design, implement, and evaluate a software framework, called the Adaptware, that consists of architectural support, resource-management mechanisms, and programming abstractions for adapting Quality-of-Service (QoS) to dynamically fluctuating resource capacity and demands. This framework is to reduce the cost and time of real-time software development by providing the infrastructure necessary for building reusable multipurpose real-time software components. In much the same way as today's consumers can buy software and hardware components from different vendors and construct a computing environment tailored to their needs, the proposed framework will provide the means of building and integrating real-time system components so as to preserve their temporal correctness while making it possible to dynamically compute predictable end-to-end temporal guarantees commensurate with available resour...