Results 11 - 20 of 215

Coordinated Checkpoint from Message Payload in Pessimistic Sender-Based Message Logging

by Mehdi Aminian, Mohammad K. Akbari, Bahman Javadi
"... Execution of MPI applications on Clusters and Grid deployments suffers from node and network failure that motivates the use of fault tolerant MPI implementations. Two category techniques have been introduced to make these systems fault-tolerant. The first one is checkpoint-based technique and the ot ..."
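The technique named in the title can be made concrete with a short sketch. The C fragment below is a minimal, hypothetical illustration of pessimistic sender-based message logging, not the authors' implementation (the log structure and the logged_send wrapper are invented for illustration): each outgoing payload and its envelope are recorded in a sender-local log before the send is issued, so the message can be replayed for the receiver after a failure.

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative sender-side log entry: message payload plus envelope. */
    typedef struct { void *payload; int count, dest, tag; MPI_Datatype type; } log_entry;

    static log_entry sender_log[1024];   /* hypothetical fixed-size log */
    static int       sender_log_len = 0;

    /* Pessimistic sender-based logging: the payload is copied into the local
       log before the matching send is issued, so it can be replayed if the
       receiver later fails and rolls back to a checkpoint. */
    static int logged_send(const void *buf, int count, MPI_Datatype type,
                           int dest, int tag, MPI_Comm comm)
    {
        int sz;
        MPI_Type_size(type, &sz);
        log_entry *e = &sender_log[sender_log_len++];
        e->payload = malloc((size_t)count * sz);
        memcpy(e->payload, buf, (size_t)count * sz);
        e->count = count; e->type = type; e->dest = dest; e->tag = tag;
        return MPI_Send(buf, count, type, dest, tag, comm);
    }

    int main(int argc, char **argv)   /* run with two ranks */
    {
        int rank, x = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            logged_send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }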

Comparing Ethernet and Myrinet for MPI communication

by Supratik Majumder, Scott Rixner - in Proceedings of the 7th workshop on languages, compilers, and ..., 2004
"... This paper compares the performance of Myrinet and Eth-ernet as a communication substrate for MPI libraries. MPI library implementations for Myrinet utilize user-level com-munication protocols to provide low latency and high band-width MPI messaging. In contrast, MPI library impleme-nations for Ethe ..."
Cited by 7 (0 self)

MPICH-V Project: a Multiprotocol Automatic Fault Tolerant MPI

by Aurelien Bouteiller, Franck Cappello, Thomas Herault, Geraud Krawezik, Pierre Lemarinier, Frederic Magniette
"... High performance computing platforms like Clusters, Grid and Desktop Grids are becoming larger and subject to more frequent failures. MPI is one of the most used message passing library in HPC applications. These two trends raise the need for fault tolerant MPI. The MPICH-V project focuses on design ..."
Cited by 20 (4 self)
... on designing, implementing, and comparing several automatic fault tolerance protocols for MPI applications. We present an extensive related work section highlighting the originality of our approach and the proposed protocols. We then present four fault tolerant protocols implemented in a new generic framework ...

A Synchronous Mode MPI Implementation on the Cell BE™ Architecture

by Murali Krishna, Arun Kumar, Naresh Jayam, Ganapathy Senthilkumar, Pallav K Baruah, Raghunath Sharma, Shakti Kapoor, Ashok Srinivasan
"... Abstract. The Cell Broadband Engine shows much promise in high performance computing applications. The Cell is a heterogeneous multi-core processor, with the bulk of the computational work load meant to be borne by eight co-processors called SPEs. Each SPE operates on a distinct 256 KB local store, ..."
Cited by 2 (0 self)
... This implementation views each SPE as a node for an MPI process, with the local store used as if it were a cache. In this paper, we describe synchronous mode communication in our implementation, using the rendezvous protocol, which makes MPI communication for long messages efficient. We further present experimental ...

RDMA read based rendezvous protocol for MPI over InfiniBand: design alternatives and benefits

by Sayantan Sur, Hyun-wook Jin, Lei Chai, Dhabaleswar K. Panda - In PPoPP '06: Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2006
"... Message Passing Interface (MPI) is a popular parallel programming model for scientific applications. Most high-performance MPI implementations use Rendezvous Protocol for efficient transfer of large messages. This protocol can be designed using either RDMA Write or RDMA Read. Usually, this protocol ..."
Cited by 21 (0 self)
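For context on the protocol being discussed, the sketch below emulates the control flow of a rendezvous transfer with ordinary MPI point-to-point calls (run with two ranks). In the paper's RDMA Read design the receiver pulls the data directly out of the advertised, registered sender buffer at the verbs level; here that pull, the tags, and the buffer handshake are plain stand-ins, not the authors' code.

    #include <mpi.h>
    #include <stdlib.h>

    #define RTS_TAG  1   /* "request to send": advertises the message length      */
    #define DATA_TAG 2   /* stands in for the RDMA Read of the sender's buffer    */
    #define FIN_TAG  3   /* receiver signals that the sender may reuse the buffer */

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 20;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                      /* sender */
            char *buf = calloc(n, 1);
            int fin;
            /* 1. Advertise the (registered) source buffer to the receiver. */
            MPI_Send(&n, 1, MPI_INT, 1, RTS_TAG, MPI_COMM_WORLD);
            /* 2. In an RDMA Read design the receiver pulls the data itself;
                  here the pull is emulated by a matching send.              */
            MPI_Send(buf, n, MPI_BYTE, 1, DATA_TAG, MPI_COMM_WORLD);
            /* 3. FIN: only now may the sender reuse or deregister buf.      */
            MPI_Recv(&fin, 1, MPI_INT, 1, FIN_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            free(buf);
        } else if (rank == 1) {               /* receiver */
            int len, fin = 1;
            MPI_Recv(&len, 1, MPI_INT, 0, RTS_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            char *buf = malloc(len);
            MPI_Recv(buf, len, MPI_BYTE, 0, DATA_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&fin, 1, MPI_INT, 0, FIN_TAG, MPI_COMM_WORLD);
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }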

Improving RDMA-based MPI Eager Protocol for Frequently-used Buffers

by Mohammad J. Rashti, Ahmad Afsahi
"... MPI is the main standard for communication in high-performance clusters. MPI implementations use the Eager protocol to transfer small messages. To avoid the cost of memory registration and prenegotiation, the Eager protocol involves a data copy to intermediate buffers at both sender and receiver sid ..."
Cited by 1 (1 self)
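As a rough illustration of the copy cost this abstract refers to (not the authors' code), the sketch below models an eager transfer in plain MPI: each small message is staged through an intermediate buffer on both sides, adding a memcpy at the sender and another at the receiver. The title suggests the paper's optimization avoids this staging for frequently used application buffers.

    #include <mpi.h>
    #include <string.h>

    #define EAGER_LIMIT 8192
    static char stage[EAGER_LIMIT];   /* stands in for a pre-registered RDMA buffer */

    /* Eager send: copy the user data into the intermediate buffer, then transmit. */
    static void eager_send(const void *user_buf, int len, int dest, MPI_Comm comm)
    {
        memcpy(stage, user_buf, len);                    /* extra copy, sender side */
        MPI_Send(stage, len, MPI_BYTE, dest, 0, comm);
    }

    /* Eager receive: data lands in the intermediate buffer and is copied out. */
    static void eager_recv(void *user_buf, int len, int src, MPI_Comm comm)
    {
        MPI_Recv(stage, len, MPI_BYTE, src, 0, comm, MPI_STATUS_IGNORE);
        memcpy(user_buf, stage, len);                    /* extra copy, receiver side */
    }

    int main(int argc, char **argv)   /* run with two ranks */
    {
        int rank, msg = 7;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)      eager_send(&msg, (int)sizeof msg, 1, MPI_COMM_WORLD);
        else if (rank == 1) eager_recv(&msg, (int)sizeof msg, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }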

Dyn-MPI: Supporting MPI on non-dedicated clusters (extended version)

by D. Brent Weatherly, David K. Lowenthal, Mario Nakazawa, Franklin Lowenthal , 2003
"... Distributing data is a fundamental problem in implementing efficient distributed-memory parallel programs. The problem becomes more difficult in environments where the participating nodes are not dedicated to a parallel application. We are investigating the data distribution problem in non dedicated ..."
Cited by 5 (1 self)
... non-dedicated environments in the context of explicit message-passing programs. To address this problem, we have designed and implemented an extension to MPI called Dynamic MPI (Dyn-MPI). The key component of Dyn-MPI is its run-time system, which efficiently and automatically redistributes data on the fly when ...

Self-Consistent MPI Performance Guidelines

by Jesper Larsson Träff, William D. Gropp, Rajeev Thakur
"... Message passing using the Message Passing Interface (MPI) is at present the most widely adopted framework for programming parallel applications for distributed-memory and clustered parallel systems. For reasons of (universal) implementability, the MPI standard does not state any specific performance ..."
Cited by 4 (2 self)
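The flavor of guideline the paper argues for can be checked mechanically. The program below is an illustrative self-consistency test, not one taken from the paper: it assumes that a consistent library should not make MPI_Allreduce slower than the semantically equivalent MPI_Reduce followed by MPI_Bcast, and times the two variants against each other.

    #include <mpi.h>
    #include <stdio.h>

    /* Illustrative self-consistency check: an MPI library would be expected
       not to make MPI_Allreduce slower than MPI_Reduce followed by MPI_Bcast. */
    int main(int argc, char **argv)
    {
        enum { N = 1 << 16, REPS = 100 };
        static double in[N], out[N];
        int rank;
        double t0, t_allreduce, t_composed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++)
            MPI_Allreduce(in, out, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        t_allreduce = MPI_Wtime() - t0;

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            MPI_Reduce(in, out, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            MPI_Bcast(out, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        }
        t_composed = MPI_Wtime() - t0;

        if (rank == 0)
            printf("Allreduce: %.3fs, Reduce+Bcast: %.3fs %s\n",
                   t_allreduce, t_composed,
                   t_allreduce <= t_composed ? "(consistent)" : "(guideline violated)");
        MPI_Finalize();
        return 0;
    }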

Combining Coordinated and Uncoordinated Checkpoint in Pessimistic Sender-Based Message Logging

by Mehdi Aminian, Mohammad K. Akbari, Bahman Javadi
"... Abstract Execution of MPI applications on Clusters and Grid deployments suffers from node and network failure that motivates the use of fault tolerant MPI implementations. Two category techniques have been introduced to make these systems fault-tolerant. The first one is checkpointbased technique a ..."

Efficient MPI Support for Advanced Hybrid Programming Models

by Torsten Hoefler, Greg Bronevetsky, Brian Barrett, Bronis R. De Supinski, Andrew Lumsdaine - In: EuroMPI'10, vol. LNCS 6305, 2010
"... Abstract. The number of multithreaded Message Passing Interface (MPI) implementations and applications is increasing rapidly. We discuss how multithreaded applications can receive messages of unknown size. As is well known, combining MPI Probe/MPI Recv is not threadsafe, but many assume that trivial ..."
Cited by 2 (1 self)
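The race alluded to here is that, between a thread's MPI_Probe and its subsequent MPI_Recv, another thread can consume the matched message. Matched probes (MPI_Mprobe/MPI_Mrecv), later standardized in MPI 3.0, close this gap: the probed message is dequeued and can only be received through the returned handle. The sketch below is a generic illustration of that pattern, not code from the paper.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Receive a message of unknown size without the MPI_Probe/MPI_Recv race:
       MPI_Mprobe removes the matched message from the queue, so no other
       thread can intercept it before the MPI_Mrecv on the returned handle. */
    static void recv_unknown_size(int src, int tag, MPI_Comm comm)
    {
        MPI_Message msg;
        MPI_Status  status;
        int count;

        MPI_Mprobe(src, tag, comm, &msg, &status);
        MPI_Get_count(&status, MPI_BYTE, &count);

        char *buf = malloc(count);
        MPI_Mrecv(buf, count, MPI_BYTE, &msg, MPI_STATUS_IGNORE);
        printf("received %d bytes\n", count);
        free(buf);
    }

    int main(int argc, char **argv)   /* run with two ranks */
    {
        int provided, rank;
        char payload[123] = {0};
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)      MPI_Send(payload, (int)sizeof payload, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1) recv_unknown_size(0, 0, MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }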