Results 1–10 of 14
Hypergraph-based Dynamic Load Balancing for Adaptive Scientific Computations
Abstract

Cited by 40 (6 self)
Adaptive scientific computations require that periodic repartitioning (load balancing) occur dynamically to maintain load balance. Hypergraph partitioning is a successful model for minimizing communication volume in scientific computations, and partitioning software for the static case is widely available. In this paper, we present a new hypergraph model for the dynamic case, where we minimize the sum of communication in the application plus the migration cost to move data, thereby reducing total execution time. The new model can be solved using hypergraph partitioning with fixed vertices. We describe an implementation of a parallel multilevel repartitioning algorithm within the Zoltan load-balancing toolkit, which to our knowledge is the first code for dynamic load balancing based on hypergraph partitioning. Finally, we present experimental results that demonstrate the effectiveness of our approach on a Linux cluster with up to 64 processors. Our new algorithm compares favorably to the widely used ParMETIS partitioning software in terms of quality, and would have reduced total execution time in most of our test cases.
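The objective described above, application communication volume plus data-migration cost, can be sketched as a small cost function. This is an illustrative Python sketch using the standard connectivity-1 communication metric and a hypothetical `alpha` migration-cost factor; it is not code from the Zoltan toolkit:

```python
def total_cost(nets, part, old_part, sizes, alpha=1.0):
    """Total cost = communication volume + alpha * migration volume.

    nets:     list of hyperedges (iterables of vertex ids)
    part:     new partition, vertex id -> part id
    old_part: previous partition, vertex id -> part id
    sizes:    vertex id -> data size (migration weight)
    alpha:    relative cost of migrating one unit of data (illustrative)
    """
    # Connectivity-1 metric: each net costs (#parts it spans - 1).
    comm = sum(len({part[v] for v in net}) - 1 for net in nets)
    # Migration volume: total size of every vertex that changed parts.
    mig = sum(sizes[v] for v in part if part[v] != old_part[v])
    return comm + alpha * mig
```

Raising `alpha` biases the partitioner toward keeping data in place; lowering it favors pure communication quality, which is the trade-off the model tunes.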
Multilevel direct K-way hypergraph partitioning with multiple constraints and fixed vertices
, 2007
A Repartitioning Hypergraph Model for Dynamic Load Balancing
, 2008
Abstract

Cited by 14 (3 self)
In parallel adaptive applications, the computational structure of the applications changes over time, leading to load imbalances even though the initial load distributions were balanced. To restore balance and to keep communication volume low in further iterations of the applications, dynamic load balancing (repartitioning) of the changed computational structure is required. Repartitioning differs from static load balancing (partitioning) due to the additional requirement of minimizing migration cost to move data from an existing partition to a new partition. In this paper, we present a novel repartitioning hypergraph model for dynamic load balancing that accounts for both communication volume in the application and migration cost to move data, in order to minimize the overall cost. Use of a hypergraph-based model allows us to accurately model communication costs rather than approximating them with graph-based models. We show that the new model can be realized using hypergraph partitioning with fixed vertices and describe our parallel multilevel implementation within the Zoltan load-balancing toolkit. To the best of our knowledge, this is the first implementation for dynamic load balancing based on hypergraph partitioning. To demonstrate the effectiveness of our approach, we conducted experiments on a Linux cluster with 1024 processors. The results show that, in terms of reducing total cost, our new model compares favorably to the graph-based dynamic load balancing approaches, and multilevel approaches improve the repartitioning quality significantly.
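The reduction to hypergraph partitioning with fixed vertices can be illustrated with a hypothetical construction: add one artificial vertex per part and fix it there, then connect each ordinary vertex to the fixed vertex of its current part with a net weighted by the vertex's migration size, so that a migration net is cut exactly when its vertex moves. A minimal Python sketch (the encoding of fixed vertices and all names are assumptions, not Zoltan's actual data structures):

```python
def repartitioning_hypergraph(nets, old_part, sizes, k):
    """Augment a hypergraph so that fixed-vertex partitioning also
    charges migration cost (hypothetical construction).

    Returns (aug_nets, net_weights, fixed): the augmented net list,
    per-net weights, and a map pinning one artificial vertex,
    encoded here as ('fixed', p), to each part p.
    """
    fixed = {('fixed', p): p for p in range(k)}
    aug_nets, weights = [], []
    for net in nets:                          # original nets keep weight 1
        aug_nets.append(list(net))
        weights.append(1)
    for v, p in old_part.items():             # one migration net per vertex,
        aug_nets.append([v, ('fixed', p)])    # cut iff v leaves part p
        weights.append(sizes[v])
    return aug_nets, weights, fixed
```

Any fixed-vertex partitioner run on the augmented hypergraph then minimizes communication plus migration volume in a single objective.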
Site-Based Partitioning and Repartitioning Techniques for Parallel PageRank Computation
Abstract

Cited by 3 (1 self)
Abstract—The PageRank algorithm is an important component in effective web search. At the core of this algorithm are repeated sparse matrix-vector multiplications, where the involved web matrices grow in parallel with the growth of the web and are stored in a distributed manner due to space limitations. Hence, the PageRank computation, which is frequently repeated, must be performed in parallel with high efficiency and low preprocessing overhead while considering the initial distributed nature of the web matrices. Our contributions in this work are twofold. We first investigate the application of state-of-the-art sparse matrix partitioning models in order to attain high efficiency in parallel PageRank computations, with a particular focus on reducing the preprocessing overhead they introduce. For this purpose, we evaluate two different compression schemes on the web matrix using the site information inherently available in links. Second, we consider the more realistic scenario of starting with initially distributed data and extend our algorithms to cover the repartitioning of such data for efficient PageRank computation. We report performance results using our parallelization of a state-of-the-art PageRank algorithm on two different PC clusters with 40 and 64 processors. Experiments show that the proposed techniques achieve considerably high speedups while incurring a preprocessing overhead of several iterations (for some instances, even less than a single iteration) of the underlying sequential PageRank algorithm. Index Terms—PageRank, sparse matrix-vector multiplication, web search, parallelization, sparse matrix partitioning, graph partitioning, hypergraph partitioning, repartitioning.
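The repeated sparse matrix-vector product at the heart of PageRank can be shown with a toy sequential power iteration; the partitioning schemes above distribute exactly this computation across processors. A minimal Python sketch (the link-dict representation and parameter names are illustrative, not the paper's implementation):

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank on a link dict {page: [outlinks]}.
    Toy sequential version of the sparse matrix-vector product that
    the parallel schemes distribute.
    """
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:            # scatter rank along outlinks
                    new[q] += share
            else:                         # dangling page: spread uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

In the distributed setting, each processor owns a block of the matrix rows and the scatter step becomes the communication whose volume the partitioning models minimize.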
Hypergraph partitioning through vertex separators on graphs
, 2010
Abstract

Cited by 2 (2 self)
The modeling flexibility provided by hypergraphs has drawn a lot of interest from the combinatorial scientific computing community, leading to novel models and algorithms, their applications, and the development of associated tools. Hypergraphs are now a standard tool in combinatorial scientific computing. The modeling flexibility of hypergraphs, however, comes at a cost: algorithms on hypergraphs are inherently more complicated than those on graphs, which sometimes translates to nontrivial increases in processing times. Neither the modeling flexibility of hypergraphs nor the runtime efficiency of graph algorithms can be overlooked. Therefore, the new research thrust should be how to cleverly trade off between the two. This work addresses one method for this trade-off by solving the hypergraph partitioning problem by finding vertex separators on graphs. Specifically, we investigate how to solve the hypergraph partitioning problem by seeking a vertex separator on its net intersection graph (NIG), where each net of the hypergraph is represented by a vertex, and two vertices share an edge if their nets have a common vertex. We propose a vertex-weighting scheme to attain good node-balanced hypergraphs, since the NIG model cannot preserve node balancing information. Vertex-removal and vertex-splitting techniques are described to optimize cut-net and connectivity metrics, respectively, under the recursive bipartitioning paradigm. We also developed implementations of our proposed formulations by adopting and modifying a state-of-the-art graph partitioning by vertex separator tool, onmetis.
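The NIG construction described above is straightforward to sketch: index the nets, bucket them by shared vertex, and emit an edge for every co-occurring pair. A minimal Python illustration (not the authors' implementation):

```python
from itertools import combinations
from collections import defaultdict

def net_intersection_graph(nets):
    """Build the NIG of a hypergraph: one NIG vertex per net, and an
    edge between two nets whenever they share a hypergraph vertex."""
    touching = defaultdict(list)          # vertex -> nets containing it
    for i, net in enumerate(nets):
        for v in net:
            touching[v].append(i)
    edges = set()
    for net_ids in touching.values():
        for a, b in combinations(net_ids, 2):
            edges.add((min(a, b), max(a, b)))  # deduplicate pairs
    return edges
```

Bucketing by vertex avoids the quadratic all-pairs intersection test: only nets that actually meet at some vertex are ever compared.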
Remapping models for scientific computing via graph and hypergraph partitioning
 In Proceedings of the SIAM Workshop on Combinatorial Scientific Computing (CSC07)
, 2007
Abstract

Cited by 1 (0 self)
There are numerous parallel scientific computing applications in which the same computation is successively repeated over a problem instance many times with different parameters. In most of these applications, although the initial task-to-processor mapping may be satisfactory in terms of both computational load balance and communication requirements, the quality of this initial mapping typically tends to deteriorate as the computational structure of the application or its parameters change while the computation progresses, thus reducing the efficiency of parallelization. A solution to this problem is to rebalance the load distribution of the processors whenever needed by rearranging the assignment of tasks to processors via a process known as remapping. For an efficient parallelization, novel remapping models are needed. These models should not only rebalance the load distribution in the parallel system but also minimize the possible overheads that may be introduced by the remapping process. Although it heavily depends on the nature of the problem, the most typical remapping overheads are incurred due to task migration, data replication, and the remapping computation itself. In the literature, various combinatorial models based on graph partitioning (GP) and hypergraph partitioning (HP) have been proposed as solutions to the remapping problems arising in different types of applications.
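Whether remapping pays off is exactly the trade-off the passage describes: the per-iteration savings of the better mapping must outweigh the one-time migration overhead. A hypothetical break-even rule in Python (the threshold model and names are illustrations, not any paper's formula):

```python
def should_remap(time_per_iter_now, time_per_iter_after,
                 remaining_iters, migration_time):
    """Remap only if the projected per-iteration savings over the
    remaining iterations exceed the one-time migration cost."""
    savings = (time_per_iter_now - time_per_iter_after) * remaining_iters
    return savings > migration_time
```

For example, saving 0.2 s per iteration over 100 remaining iterations easily amortizes a 5 s migration, while the same migration is not worthwhile for a 0.05 s saving over 10 iterations.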
A Model for Task Repartitioning under Data Replication
Abstract
Abstract. We propose a two-phase model for solving the problem of task repartitioning under data replication with memory constraints. The hypergraph-partitioning-based model proposed for the first phase aims to minimize the total message volume that will be incurred due to the replication/migration of input data while maintaining balance on the computational and receive-volume loads of processors. The network-flow-based model proposed for the second phase aims to minimize the maximum message volume handled by processors by utilizing the flexibility in assigning send-communication tasks to processors, which is introduced by data replication. The validity of our proposed model is verified on the parallelization of a direct volume rendering algorithm.
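The flexibility exploited in the second phase, namely that any processor holding a replica may serve a send task, can be illustrated with a greedy stand-in that always picks the least-loaded candidate sender. The paper's model solves this with network flow; this Python sketch (all names hypothetical) only illustrates the min-max objective:

```python
def assign_senders(send_tasks, processors):
    """Greedy stand-in for the second phase.

    send_tasks: task id -> (volume, list of candidate sender processors)
    Assigns each task to its currently least-loaded candidate and
    returns (assignment, max send volume over all processors).
    """
    load = {p: 0 for p in processors}
    assignment = {}
    # Place large tasks first so they constrain the assignment.
    for tid, (vol, candidates) in sorted(
            send_tasks.items(), key=lambda kv: -kv[1][0]):
        best = min(candidates, key=lambda p: load[p])
        assignment[tid] = best
        load[best] += vol
    return assignment, max(load.values())
```

Unlike this heuristic, a flow formulation can certify the true minimum of the maximum send volume, which is why the paper adopts it.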
Efficient successor retrieval operations for aggregate query processing
 In Information Sciences
PARTITIONING HYPERGRAPHS IN SCIENTIFIC COMPUTING APPLICATIONS THROUGH VERTEX SEPARATORS ON GRAPHS
Abstract
Abstract. The modeling flexibility provided by hypergraphs has drawn a lot of interest from the combinatorial scientific computing community, leading to novel models and algorithms, their applications, and the development of associated tools. Hypergraphs are now a standard tool in combinatorial scientific computing. The modeling flexibility of hypergraphs, however, comes at a cost: algorithms on hypergraphs are inherently more complicated than those on graphs, which sometimes translates to nontrivial increases in processing times. Neither the modeling flexibility of hypergraphs nor the runtime efficiency of graph algorithms can be overlooked. Therefore, the new research thrust should be how to cleverly trade off between the two. This work addresses one method for this trade-off by solving the hypergraph partitioning problem by finding vertex separators on graphs. Specifically, we investigate how to solve the hypergraph partitioning problem by seeking a vertex separator on its net intersection graph (NIG), where each net of the hypergraph is represented by a vertex, and two vertices share an edge if their nets have a common vertex. We propose a vertex-weighting scheme to attain good node-balanced hypergraphs, since the NIG model cannot preserve node balancing information. Vertex-removal and vertex-splitting techniques are described to optimize cut-net and connectivity metrics, respectively, under the recursive bipartitioning paradigm. We also developed implementations of our proposed hypergraph partitioning formulations by adopting and modifying a state-of-the-art graph partitioning by vertex separator tool, onmetis. Experiments conducted on a large collection of sparse matrices demonstrate the effectiveness of our proposed techniques. Key words. hypergraph partitioning; combinatorial scientific computing; graph partitioning by vertex separator; sparse matrices.
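The correspondence between a NIG vertex separator and a hypergraph partition can be sketched as follows: each non-separator net carries its side's label to its vertices, and every cut net of the resulting vertex partition lies in the separator. An illustrative Python sketch (function and argument names are assumptions, not the authors' code):

```python
def partition_from_nig_separator(nets, side, separator):
    """Map a vertex separator on the NIG back to a hypergraph
    vertex partition.

    nets:      list of hyperedges (net id = list index)
    side:      NIG side (0 or 1) of each non-separator net
    separator: set of net ids forming the NIG vertex separator
    Vertices covered only by separator nets may go to either side
    (here: side 0 by convention).
    """
    part = {}
    for i, net in enumerate(nets):
        if i in separator:
            continue
        for v in net:
            part[v] = side[i]      # consistent: non-separator nets on
                                   # different sides share no vertex
    for i, net in enumerate(nets):
        for v in net:
            part.setdefault(v, 0)  # vertices seen only in separator nets
    return part
```

In the resulting partition the only nets spanning both sides are separator nets, so minimizing the separator bounds the cut-net cost of the hypergraph partition.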