Results 1 - 10 of 407

High performance MPI-2 one-sided communication over InfiniBand

by Weihang Jiang, Jiuxing Liu, Hyun-wook Jin, Dhabaleswar K. Panda - In Proceedings of 4th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid), 2004
"... Many existing MPI-2 one-sided communication implementations are built on top of MPI send/receive operations. Although this approach can achieve good portability, it suffers from high communication overhead and dependency on remote process for communication progress. To address these problems, we pro ..."
Abstract - Cited by 16 (4 self) - Add to MetaCart
propose a high performance MPI-2 onesided communication design over the InfiniBand Architecture. In our design, MPI-2 one-sided communication operations such as MPI Put, MPI Get and MPI Accumulate are directly mapped to InfiniBand Remote Direct Memory Access (RDMA) operations. Our design has been
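
As a concrete illustration of the operations named above, here is a minimal, self-contained MPI-2 one-sided sketch (not taken from the paper; the window size, ranks and displacements are illustrative). It exposes a window and issues MPI_Put, MPI_Accumulate and MPI_Get under passive-target locking, the kind of epoch an RDMA-capable implementation can complete without involving the target process:

    /* Hedged sketch: MPI-2 one-sided operations in a passive-target
     * epoch; run with at least two processes. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        double buf[64] = {0};                  /* window memory */
        double out = 3.14, acc = 1.0, in = 0.0;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every process exposes buf[] for one-sided access. */
        MPI_Win_create(buf, 64 * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 0) {
            /* Lock rank 1's window; an RDMA-based design can map Put,
             * Get and Accumulate onto InfiniBand RDMA operations. */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
            MPI_Put(&out, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
            MPI_Accumulate(&acc, 1, MPI_DOUBLE, 1, 1, 1, MPI_DOUBLE,
                           MPI_SUM, win);
            MPI_Get(&in, 1, MPI_DOUBLE, 1, 2, 1, MPI_DOUBLE, win);
            MPI_Win_unlock(1, win);            /* operations complete here */
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }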

Scheduling of MPI-2 One Sided Operations over InfiniBand

by Wei Huang, Gopalakrishnan Santhanaraman, Hyun-wook Jin, Dhabaleswar K. Panda - In CAC 05 (in conjunction with IPDPS 05)
"... MPI-2 provides interfaces for one sided communication, which is becoming increasingly important in scientific applications. MPI-2 semantics provide the flexibility to reorder the one sided operations within an access epoch. Based on this flexibility, in this paper we try to improve the performance o ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
MPI-2 provides interfaces for one sided communication, which is becoming increasingly important in scientific applications. MPI-2 semantics provide the flexibility to reorder the one sided operations within an access epoch. Based on this flexibility, in this paper we try to improve the performance

Design of High Performance MVAPICH2: MPI2 over InfiniBand

by unknown authors
"... MPICH2 provides a layered architecture for implementing MPI-2. In this paper, we provide a new design for implementing MPI-2 over InfiniBand by extending the MPICH2 ADI3 layer. Our new design aims to achieve high performance by providing a multi-communication method framework that can utilize approp ..."
Abstract - Add to MetaCart
MPICH2 provides a layered architecture for implementing MPI-2. In this paper, we provide a new design for implementing MPI-2 over InfiniBand by extending the MPICH2 ADI3 layer. Our new design aims to achieve high performance by providing a multi-communication method framework that can utilize

Design and Implementation of Key Proposed MPI-3 One-Sided Communication Semantics on InfiniBand

by Sreeram Potluri, Sayantan Sur, Devendar Bureddy, Dhabaleswar K. Panda
"... Abstract. Simultaneous use of powerful system components is important for applications to achieve maximum performance on modern clusters. MPI-2 had introduced onesided communication model that enables for better communication and computation overlap. However, studies have shown limitations of this m ..."
Abstract - Add to MetaCart
of some of the key one-sided semantics proposed for MPI-3 over InfiniBand, using the MVAPICH2 library. 1
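
For orientation, the snippet below sketches a few of the MPI-3 one-sided additions that such work targets (library-allocated windows, request-based operations and flush-based completion). It is an illustrative usage example only, not the implementation described in the paper; ranks and sizes are arbitrary:

    /* Hedged sketch of MPI-3 one-sided additions; run with >= 2 processes. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        double *base;                  /* window memory allocated by MPI */
        double val = 1.0;
        int rank, nprocs;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* MPI-3: let the library allocate window memory, so it can pick
         * RDMA-friendly (registered) buffers. */
        MPI_Win_allocate(64 * sizeof(double), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);

        /* One long-lived passive epoch covering all targets. */
        MPI_Win_lock_all(0, win);

        if (rank == 0 && nprocs > 1) {
            /* Request-based put: MPI_Wait gives local completion ... */
            MPI_Rput(&val, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            /* ... and flush forces completion at the target without
             * closing the epoch. */
            MPI_Win_flush(1, win);
        }

        MPI_Win_unlock_all(win);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }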

MPI over InfiniBand: Early Experiences

by Jiuxing Liu, Jiesheng Wu, Sushmitha P. Kini, Darius Buntinas, Weikuan Yu, Balasubraman Chandrasekaran, Ranjit M. Noronha, Pete Wyckoff, Dhabaleswar K. Panda , 2003
"... Recently, InfiniBand Architecture (IBA) has been proposed as the next generation interconnect for I/O and inter-process communication. The main idea behind this industry standard is to use a scalable switched fabric to design the next generation clusters and servers with high performance and scalabi ..."
Abstract - Cited by 19 (7 self) - Add to MetaCart
Recently, InfiniBand Architecture (IBA) has been proposed as the next generation interconnect for I/O and inter-process communication. The main idea behind this industry standard is to use a scalable switched fabric to design the next generation clusters and servers with high performance

Supporting MPI-2 One Sided Communication on Multi-Rail InfiniBand Clusters: Design Challenges and Performance Benefits

by Abhinav Vishnu, Gopal Santhanaraman, Wei Huang, Hyun-wook Jin, Dhabaleswar K. Panda
"... Abstract. In cluster computing, InfiniBand has emerged as a popular high performance interconnect with MPI as the de facto programming model. However, even with InfiniBand, bandwidth can become a bottleneck for clusters executing communication intensive applications. Multi-rail cluster configuration ..."
Abstract - Cited by 3 (2 self) - Add to MetaCart
configurations with MPI-1 are being proposed to alleviate this problem. Recently, MPI-2 with support for one-sided communication is gaining significance. In this paper, we take the challenge of designing high performance MPI-2 one-sided communication on multi-rail InfiniBand clusters. We propose a unified MPI-2

High Performance Implementation of MPI Derived Datatype Communication over InfiniBand

by Jiesheng Wu, Pete Wyckoff, Dhabaleswar Panda , 2004
"... In this paper, a systematic study of two main types of approach for MPI datatype communication (Pack/Unpack- based approaches and Copy-Reduced approaches)iscar- ried out on the InfiniBand network. We focus on overlapping packing, network communication, and unpacking in the Pack/Unpack-based approac ..."
Abstract - Cited by 4 (0 self) - Add to MetaCart
based on one MPI implementation over InfiniBand. Performance results of a vector microbenchmark demonstrate that latency is improved by a factor of up to 3.4 and bandwidth by a factor of up to 3.6 compared to the current datatype communication implementation. Collective operations like MPI Alltoall
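
To make the two strategies concrete, here is a small, hedged example (not from the paper) that sends a strided column of a matrix using an MPI derived datatype; whether the library packs it into a contiguous staging buffer or moves the pieces directly is exactly the Pack/Unpack versus Copy-Reduced trade-off studied above:

    /* Hedged sketch: derived-datatype communication of a matrix column. */
    #include <mpi.h>

    #define ROWS 4
    #define COLS 8

    int main(int argc, char **argv)
    {
        double a[ROWS][COLS];
        MPI_Datatype column;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* A column of a row-major matrix: ROWS blocks of one double,
         * each COLS doubles apart. */
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    a[i][j] = i * COLS + j;
            /* Hand the non-contiguous layout to MPI directly instead of
             * packing it by hand with MPI_Pack. */
            MPI_Send(&a[0][0], 1, column, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&a[0][0], 1, column, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }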

High Performance RDMA-Based MPI Implementation over InfiniBand

by Jiuxing Liu, Jiesheng Wu, Dhabaleswar K. Panda - In 17th Annual ACM International Conference on Supercomputing (ICS '03), 2003
"... Although InfiniBand Architecture is relatively new in the high performance computing area, it o#ers many features which help us to improve the performance of communication subsystems. One of these features is Remote Direct Memory Access (RDMA) operations. In this paper, we propose a new design of MP ..."
Abstract - Cited by 126 (31 self) - Add to MetaCart
Although InfiniBand Architecture is relatively new in the high performance computing area, it o#ers many features which help us to improve the performance of communication subsystems. One of these features is Remote Direct Memory Access (RDMA) operations. In this paper, we propose a new design

Design Alternatives for Implementing Fence Synchronization in MPI-2 One-sided Communication for InfiniBand Clusters

by G. Santhanaraman, S. Narravula, A. Mamidala
"... Scientific computing has seen an immense growth in recent years. The Message Passing Interface (MPI) has become the de-facto standard for parallel programming model for distributed memory systems. As the system scale increases, application writers often try to increase the overlap of computation and ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
) capabilities. We propose a novel design for implementing fence synchronization that uses RDMA write with Immediate mechanism (Fence-Imm-RI) provided by InfiniBand networks. We then characterize the performance of different designs with various one-sided communication pattern microbenchmarks for both latency
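
The fence pattern under discussion looks roughly like the hedged sketch below (illustrative only; the neighbour exchange and buffer sizes are made up). All puts issued inside the epoch need only complete at the closing MPI_Win_fence, which is the synchronization step whose implementation the paper examines:

    /* Hedged sketch: active-target fence synchronization. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        double halo[128] = {0}, send = 1.0;
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        MPI_Win_create(halo, sizeof(halo), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);                 /* open the epoch */

        /* Push one value to the right neighbour; the transfer may be
         * deferred, so independent computation placed here can overlap
         * with the actual data movement. */
        MPI_Put(&send, 1, MPI_DOUBLE, (rank + 1) % nprocs, 0, 1,
                MPI_DOUBLE, win);

        /* ... independent local computation could go here ... */

        MPI_Win_fence(0, win);                 /* all puts complete here */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }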

Analysis of Implementation Options for MPI-2 One-Sided

by Brian W. Barrett, Galen M. Shipman, Andrew Lumsdaine - In Proceedings of EuroPVM/MPI, 2007
"... Abstract. The Message Passing Interface provides an interface for onesided communication as part of the MPI-2 standard. The semantics specified by MPI-2 allow for a number of different implementation avenues, each with different performance characteristics. Within the context of Open MPI, a freely a ..."
Abstract - Cited by 8 (0 self) - Add to MetaCart
Abstract. The Message Passing Interface provides an interface for onesided communication as part of the MPI-2 standard. The semantics specified by MPI-2 allow for a number of different implementation avenues, each with different performance characteristics. Within the context of Open MPI, a freely