Results 1–10 of 22
Bandwidth-Centric Allocation of Independent Tasks on Heterogeneous Platforms
In International Parallel and Distributed Processing Symposium (IPDPS’2002). IEEE Computer Society, 2002
Cited by 76 (28 self)

Abstract:
In this paper, we consider the problem of allocating a large number of independent, equal-sized tasks to a heterogeneous "grid" computing platform. Such problems arise in collaborative computing efforts like SETI@home. We use a tree to model a grid, where resources can have different speeds of computation and communication, as well as different overlap capabilities. We define a base model, and show how to determine the maximum steady-state throughput of a node in the base model, assuming we already know the throughput of the subtrees rooted at the node's children. Thus, a bottom-up traversal of the tree determines the rate at which tasks can be processed in the full tree. The best allocation is bandwidth-centric: if enough bandwidth is available, then all nodes are kept busy; if bandwidth is limited, then tasks should be allocated only to the children which have sufficiently small communication times, regardless of their computation power. We then show how nodes with other capabilities, ones that allow more or less overlapping of computation and communication than the base model, can be transformed to equivalent nodes in the base model. We also show how to handle a more general communication model. Finally, we present simulation results of several demand-driven task allocation policies that show that our bandwidth-centric method obtains better results than allocating tasks to all processors on a first-come, first-served basis. Key words: heterogeneous computer, allocation, scheduling, grid, metacomputing. Corresponding author: Jeanne Ferrante. The work of Larry Carter and Jeanne Ferrante was performed while visiting LIP.
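The bandwidth-centric rule described in this abstract can be sketched as follows. This is a hypothetical simplification of the base model (a single shared link at the parent, and each child already reduced to the steady-state throughput of its subtree); it is not the authors' exact formulation.

```python
def steady_state_throughput(parent_rate, children):
    """Bandwidth-centric allocation sketch (simplified, assumed model).

    parent_rate: tasks per time unit the parent itself can process.
    children: list of (c, r) pairs where c is the time the parent's
    single communication link is busy per task sent to that child,
    and r is the child's subtree throughput in tasks per time unit.
    The link can be busy at most 1 time unit per time unit.
    """
    link = 1.0          # fraction of link time still available
    total = parent_rate
    # Serve children with the cheapest communication first,
    # regardless of their computation power.
    for c, r in sorted(children):
        rate = min(r, link / c)  # limited by child speed or by bandwidth
        total += rate
        link -= rate * c
        if link <= 0:
            break
    return total
```

With ample bandwidth every child is kept busy; when the link saturates, children with large communication times receive nothing even if they are fast, which is exactly the bandwidth-centric behavior the abstract describes.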
Data partitioning with a realistic performance model of networks of heterogeneous computers
In International Parallel and Distributed Processing Symposium (IPDPS’2004). IEEE Computer Society, 2004
Cited by 19 (9 self)

Abstract:
The paper presents a performance model that can be used to optimally schedule arbitrary tasks on a network of heterogeneous computers when there is an upper bound on the size of the task that can be solved by each computer. We formulate a problem of partitioning an n-element set over p heterogeneous processors using this advanced performance model and give its efficient solution of complexity O(p^3 × log_2 n).
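The partitioning problem this abstract formulates can be illustrated with a simple greedy sketch; this is an assumption for illustration, not the paper's O(p^3 × log_2 n) algorithm, and it ignores the paper's advanced performance model.

```python
def partition(n, speeds, bounds):
    """Distribute n equal-sized elements over heterogeneous processors.

    Greedy illustration (not the paper's algorithm): each element goes
    to the processor that would finish its enlarged share first,
    skipping processors that have reached their upper bound.
    """
    alloc = [0] * len(speeds)
    for _ in range(n):
        candidates = [i for i in range(len(speeds)) if alloc[i] < bounds[i]]
        if not candidates:
            raise ValueError("total capacity is smaller than n")
        # finishing time of processor i if it takes one more element
        best = min(candidates, key=lambda i: (alloc[i] + 1) / speeds[i])
        alloc[best] += 1
    return alloc
```

Without bounds the result is simply proportional to speed; the bounds are what make the problem (and the paper's efficient solution) nontrivial.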
Adaptive parallel computing on heterogeneous networks with mpC
Parallel Computing, 2002
Cited by 16 (11 self)

Abstract:
The paper presents a new advanced version of the mpC parallel language. The language was designed specifically for programming high-performance parallel computations on heterogeneous networks of computers. The advanced version allows the programmer to define at runtime all the main features of the underlying parallel algorithm which have an impact on the application's execution performance. The mpC programming system uses this information, along with information about the performance of the executing network, to map the processes of the parallel program to this network so as to achieve better execution time.
Partitioning a Square into Rectangles: NP-Completeness and Approximation Algorithms
Algorithmica, 2000
Cited by 10 (7 self)

Abstract:
In this paper, we deal with two geometric problems arising from heterogeneous parallel computing: how to partition the unit square into p rectangles of given areas s_1, s_2, ..., s_p (such that s_1 + s_2 + ... + s_p = 1), so as to minimize (i) either the sum of the p perimeters of the rectangles, (ii) or the largest perimeter of the p rectangles. For both problems, we prove NP-completeness and we introduce approximation algorithms.
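The objective function can be made concrete with a column-based partition, which is one standard approximation scheme for this kind of problem (an illustrative choice here, not claimed to be the authors' algorithm):

```python
def column_partition_perimeter(columns):
    """Sum of rectangle perimeters for a column-based partition of the
    unit square.

    columns: list of lists of areas. Each inner list is one vertical
    column; the column's width is the sum of its areas, so all areas
    together must sum to 1. Each rectangle spans the full column width,
    with height area / width.
    """
    total = 0.0
    for col in columns:
        width = sum(col)
        for area in col:
            height = area / width
            total += 2 * (width + height)
    return total
```

For example, a single rectangle filling the square has perimeter 4, while splitting the square into two side-by-side halves costs 6; grouping areas into well-chosen columns is what the approximation algorithms optimize.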
Load-Balancing Iterative Computations on Heterogeneous Clusters
Cited by 9 (2 self)

Abstract:
We focus on mapping iterative algorithms onto heterogeneous clusters. The application data is partitioned over the processors, which are arranged along a virtual ring. At each iteration, independent calculations are carried out in parallel, and some communications take place between consecutive processors in the ring. The question is to determine how to slice the application data into chunks, and assign these chunks to the processors, so that the total execution time is minimized. A major ...
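The chunk-slicing question raised in this abstract can be sketched naively by sizing each processor's contiguous chunk in proportion to its speed. This sketch deliberately ignores the communication costs between ring neighbors, which the paper's model accounts for.

```python
def ring_chunks(n, speeds):
    """Slice n data items into contiguous chunks for processors arranged
    on a virtual ring, proportionally to their (integer) speeds.

    Naive sketch: communication between ring neighbors is ignored, so
    this is only the compute-balanced starting point, not the paper's
    optimized assignment.
    """
    total = sum(speeds)
    chunks = [n * s // total for s in speeds]  # integer proportional shares
    # hand the rounding leftovers to the fastest processors first
    leftover = n - sum(chunks)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        chunks[i] += 1
    return chunks
```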
Data Redistribution Algorithms For Heterogeneous Processor Rings
2004
Cited by 7 (5 self)

Abstract:
We consider the problem of redistributing data on homogeneous and heterogeneous rings of processors. The problem arises in several applications, each time a load-balancing mechanism is invoked (but we do not discuss the load-balancing mechanism itself). We provide algorithms that aim at optimizing the data redistribution, both for unidirectional and bidirectional rings, and we give complete proofs of correctness. One major contribution of the paper is that we are able to prove the optimality of the proposed algorithms in all cases except that of a bidirectional heterogeneous ring, for which the problem remains open.
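For the homogeneous case, a feasible redistribution can be sketched with prefix sums of the imbalances; this is a classic construction shown here for illustration (it yields a correct balancing, not necessarily the paper's optimal schedules, and it assumes the total load divides evenly over the ring).

```python
def ring_transfers(loads):
    """Signed amount of data to ship across each link i -> i+1 so that a
    homogeneous ring ends up perfectly balanced.

    Prefix-sum sketch for the homogeneous bidirectional case: a negative
    value means the transfer on that link flows the other way. Assumes
    sum(loads) is divisible by the ring size.
    """
    avg = sum(loads) // len(loads)
    transfers, carry = [], 0
    for load in loads:
        carry += load - avg  # surplus accumulated so far moves onward
        transfers.append(carry)
    return transfers
```

After applying the transfers, every processor holds exactly the average load; minimizing the number of communication steps, and handling heterogeneous link capacities, is where the paper's algorithms come in.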
Matrix Product on Heterogeneous Master-Worker Platforms
Cited by 7 (6 self)

Abstract:
This paper is focused on designing efficient parallel matrix-product algorithms for heterogeneous master-worker platforms. While matrix product is well-understood for homogeneous 2D arrays of processors (e.g., Cannon's algorithm and the ScaLAPACK outer-product algorithm), there are three key hypotheses that render our work original and innovative:

Centralized data. We assume that all matrix files originate from, and must be returned to, the master. The master distributes data and computations to the workers (while in ScaLAPACK, input and output matrices are supposed to be equally distributed among participating resources beforehand). Typically, our approach is useful in the context of speeding up MATLAB or SCILAB clients running on a server (which acts as the master and initial repository of files).

Heterogeneous star-shaped platforms. We target fully heterogeneous platforms, where computational resources have different computing powers. Also, the workers are connected to the master by links of different capacities. This framework is realistic when deploying the application from the server, which is responsible for enrolling authorized resources.

Limited memory. As we investigate the parallelization of large problems, we cannot assume that full matrix column blocks can be stored in the worker memories and be reused for subsequent updates (as in ScaLAPACK).

We have devised efficient algorithms for resource selection (deciding which workers to enroll) and communication ordering (both for input and result messages), and we report a set of numerical experiments on a platform at our site. The experiments show that our matrix-product algorithm has smaller execution times than existing ones, while it also uses fewer resources.
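The limited-memory hypothesis can be illustrated with a single worker-side update: the master streams one column of A and the matching row of B, and the worker accumulates a rank-1 (outer-product) update into its resident block of C. This is a minimal sketch of that idea, not the paper's full resource-selection or communication-ordering algorithm.

```python
def worker_update(C, a_col, b_row):
    """Accumulate one outer-product update C += a_col * b_row^T.

    Illustrative worker step under the limited-memory hypothesis: only
    one column of A and one row of B are resident at a time, so memory
    beyond the C block is O(column + row), never a full column block.
    """
    for i in range(len(C)):
        for j in range(len(C[0])):
            C[i][j] += a_col[i] * b_row[j]
    return C
```

Repeating this update over all column/row pairs computes the full product; the scheduling question the paper addresses is which workers receive which pairs, and in what order.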
Wrekavoc: a Tool for Emulating Heterogeneity
Cited by 6 (4 self)

Abstract:
Computer science, and especially heterogeneous distributed computing, is an experimental science. Simulation, emulation, or in-situ implementation are complementary methodologies to conduct experiments in this context. In this paper we address the problem of defining and controlling the heterogeneity of a platform. We evaluate the proposed solution, called Wrekavoc, with microbenchmarks and by implementing algorithms from the literature.
An Overview of Heterogeneous High Performance and Grid Computing
In Engineering the Grid, 2006
Cited by 5 (1 self)

Abstract:
This paper is an overview of the ongoing academic research, development, and uses of heterogeneous parallel and distributed computing. This work is placed in the context of scientific computing. The simulation of very large systems often requires computational capabilities which cannot be satisfied by a single processing system. A possible way to solve this problem is to couple different computational resources, perhaps distributed geographically. Heterogeneous distributed computing is a means to overcome the limitations of single computing systems.
A General-Purpose Model for Heterogeneous Computation
2000
Cited by 5 (2 self)

Abstract:
Heterogeneous computing environments are becoming an increasingly popular platform for executing parallel applications. Such environments consist of a diverse set of machines and offer considerably more computational power at a lower cost than a parallel computer. Efficient heterogeneous parallel applications must account for the differences inherent in such an environment. For example, faster machines should possess more data items than their slower counterparts, and communication should be minimized over slow network links. Current parallel applications are not designed with such heterogeneity in mind. Thus, a new approach is necessary for designing efficient heterogeneous parallel programs. We propose ...