MPI: A Message-Passing Interface Standard (1994)

by MPI Forum

Results 1 - 10 of 410 citing documents

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

by Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, 2010
Cited by 1001 (20 self)

Citation Context

...convenient to think of ADMM as a message-passing algorithm on a graph, where each node corresponds to a subsystem and the edges correspond to shared variables. Message Passing Interface (MPI) [For09] is a language-independent message-passing specification used for parallel algorithms, and is the most widely used model for high-performance parallel computing today. Implementations of MPI are avail...
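
The pattern this context describes is easy to make concrete. Below is a minimal C sketch, assuming a ring of subsystems in which each MPI rank swaps one shared scalar with its neighbors; the variable names and ring topology are illustrative, not taken from the paper.

```c
/* Minimal sketch: each rank holds one subsystem's copy of a shared
 * variable and swaps it with its neighbors on a ring, the basic pattern
 * behind viewing ADMM as message passing on a graph. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;            /* this subsystem's value */
    double from_left = 0.0;
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    /* Send our value to the right neighbor while receiving the left
     * neighbor's value, in a single deadlock-free call. */
    MPI_Sendrecv(&local, 1, MPI_DOUBLE, right, 0,
                 &from_left, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %.1f from rank %d\n", rank, from_left, left);
    MPI_Finalize();
    return 0;
}
```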

The Network Weather Service: A Distributed Resource Performance Forecasting Service for Metacomputing

by Rich Wolski, Neil T. Spring, Jim Hayes - Journal of Future Generation Computing Systems, 1999
Cited by 761 (48 self)

Citation Context

...C API: The programming interface provided to applications is intended to be lightweight and easily integrated into applications written for systems such as Legion [18], Globus [12], Condor [27], MPI [11], and PVM [17]. Two functions make up this lightweight interface and separate the two phases of a forecaster connection, InitForecaster() and RequestForecasts(). The InitForecaster() function opens a ...
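
Since the context names the two calls but not their signatures, here is a hedged C sketch of how such a two-phase client might look. The header name, the ForecasterHandle and Forecast types, and their fields are hypothetical stand-ins, not the real NWS API.

```c
/* Hypothetical sketch of the two-phase forecaster interface named above:
 * InitForecaster() opens the connection, RequestForecasts() retrieves
 * predictions over it. All types and signatures here are assumptions. */
#include <stdio.h>
#include "nws_client.h"          /* assumed header exposing the two calls */

int main(void) {
    /* Phase 1: open a forecaster connection for one monitored resource. */
    ForecasterHandle h = InitForecaster("host.example.edu", "availableCpu");

    /* Phase 2: request forecasts over the already-open connection. */
    Forecast f = RequestForecasts(h);
    printf("prediction %f, error estimate %f\n", f.value, f.error);
    return 0;
}
```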

Cloud Computing and Grid Computing 360-Degree Compared

by Ian Foster, Yong Zhao, Ioan Raicu, Shiyong Lu, 2008
Cited by 248 (9 self)
Cloud Computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for Cloud Computing and there seems to be no consensus on what a Cloud is. On the other hand, Cloud Computing is not a completely new concept; it has an intricate connection to the relatively new but thirteen-year established Grid Computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast Cloud Computing with Grid Computing from various angles and give insights into the essential characteristics of both.

Citation Context

..., and programs also need to finish correctly, so reliability and fault tolerance must be considered. We briefly discuss here some general programming models in Grids. MPI (Message Passing Interface) [36] is the most commonly used programming model in parallel computing, in which a set of tasks use their own local memory during computation and communicate by sending and receiving messages. MPICH-G2 [3...
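
As a concrete illustration of that model, here is a minimal C sketch (not from the paper): two MPI tasks with private memory moving one integer by an explicit send/receive pair.

```c
/* Minimal sketch of the MPI model the context describes: each task owns
 * its local memory, and data moves only via explicit messages. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;                    /* exists only in rank 0's memory */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload = 0;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", payload);  /* a copy, now in rank 1's memory */
    }
    MPI_Finalize();
    return 0;
}
```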

GRID RESOURCE MANAGEMENT -- State of the art and future trends

by Jarek Nabrzyski, Jennifer M. Schopf, Jan Weglarz
Cited by 88 (0 self)

High-Performance Parallel Programming in Java: Exploiting Native Libraries

by Vladimir Getov, Susan Flynn-Hummel, Sava Mintchev, 1998
Cited by 73 (3 self)
With most of today's fast scientific software written in Fortran and C, Java has a lot of catching up to do. In this paper we discuss how new Java programs can capitalize on high-performance libraries for other languages. With the help of a tool we have automatically created Java bindings for several standard libraries: MPI, BLAS, BLACS, PBLAS, ScaLAPACK. Performance results are presented for Java versions of two benchmarks from the NPB and PARKBENCH suites on an IBM SP2 distributed memory machine using JDK and IBM's high-performance Java compiler. The results confirm that fast parallel computing in Java is indeed possible.

Optimizing bandwidth limited problems using one-sided communication and overlap

by Christian Bell, Dan Bonachea, Rajesh Nishtala, Katherine Yelick - In 20th International Parallel and Distributed Processing Symposium (IPDPS), 2006
Cited by 60 (16 self)

Citation Context

...st in part, fundamental to the two-sided model. We use Berkeley UPC [6] and MPI v1.1 [39] as representatives of the one- and two-sided communication models, respectively. Although the MPI 2.0 standard [38] adds a one-sided communication interface, this interface has several semantic limitations that hinder its use in practice [9], and therefore we do not consider it further in this paper. Instead, we u...
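
For readers unfamiliar with the interface being contrasted here, this is a minimal C sketch of an MPI 2.0 one-sided put with fence synchronization; it illustrates the mechanism only, not the semantic limitations the authors cite.

```c
/* Sketch of MPI 2.0 one-sided communication: rank 0 writes into rank 1's
 * exposed window without rank 1 posting a matching receive. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf = -1;                            /* memory exposed by every rank */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int value = 7;                           /* origin buffer; must stay valid
                                                until the closing fence */
    MPI_Win_fence(0, win);                   /* open the access epoch */
    if (rank == 0)
        MPI_Put(&value, 1, MPI_INT, /*target=*/1, /*disp=*/0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);                   /* close epoch; the put is complete */

    if (rank == 1) printf("rank 1's buffer now holds %d\n", buf);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```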

GridDB: A Data-Centric Overlay for Scientific Grids

by David T. Liu, Michael J. Franklin, 2004
Cited by 52 (1 self)
We present GridDB, a data-centric overlay for scientific grid data analysis. In contrast to currently deployed process-centric middleware, GridDB manages data entities rather than processes. GridDB provides a suite of services important to data analysis: a declarative interface, type-checking, interactive query processing, and memoization. We discuss several elements of GridDB: workflow/data model, query language, software architecture and query processing; and a prototype implementation. We validate GridDB by showing its modeling of real-world physics and astronomy analyses, and measurements on our prototype.

Scalable Work Stealing

by James Dinan, D. Brian Larkins, Sriram Krishnamoorthy, Jarek Nieplocha
Cited by 48 (3 self)
Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. Scalable dynamic load balancing on large clusters is a challenging problem which can be addressed with distributed dynamic load balancing systems. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.

Citation Context

...s on the PGAS programming model provided by ARMCI, the Aggregate Remote Memory Copy Interface [29]. ARMCI gives the benefit of interoperability with multiple parallel programming models including MPI [28], the industry standard message passing interface, and the Global Arrays toolkit [30] which provides a PGAS model for distributed shared multidimensional arrays. ARMCI is a portable, low level PGAS li...
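
To make the stealing idea concrete, here is a shared-memory analogue in C with pthreads. It is a toy sketch, not the paper's ARMCI-based distributed design, and it uses a single lock per queue where a real work-stealing deque would let the owner and thieves operate on opposite ends.

```c
/* Toy shared-memory sketch of work stealing: each worker drains its own
 * queue and, when it runs dry, tries to steal a task from a neighbor. */
#include <pthread.h>
#include <stdio.h>

#define WORKERS 4

typedef struct {
    int tasks[64];
    int top;                    /* tasks occupy slots [0, top) */
    pthread_mutex_t lock;
} Queue;

static Queue q[WORKERS];

static int take(Queue *d) {     /* remove one task, or -1 if empty */
    pthread_mutex_lock(&d->lock);
    int t = (d->top > 0) ? d->tasks[--d->top] : -1;
    pthread_mutex_unlock(&d->lock);
    return t;
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (;;) {
        int t = take(&q[id]);
        if (t < 0)
            t = take(&q[(id + 1) % WORKERS]);  /* steal from one neighbor */
        if (t < 0)
            break;              /* toy termination: quit after a failed steal */
        printf("worker %d ran task %d\n", id, t);
    }
    return NULL;
}

int main(void) {
    pthread_t th[WORKERS];
    for (int w = 0; w < WORKERS; w++) {
        pthread_mutex_init(&q[w].lock, NULL);
        q[w].top = 0;
        for (int i = 0; i < 8; i++)            /* seed 8 tasks per worker */
            q[w].tasks[q[w].top++] = w * 100 + i;
    }
    for (int w = 0; w < WORKERS; w++)
        pthread_create(&th[w], NULL, worker, (void *)(long)w);
    for (int w = 0; w < WORKERS; w++)
        pthread_join(th[w], NULL);
    return 0;
}
```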

PARDIS: A Parallel Approach to CORBA

by Katarzyna Keahey, Dennis Gannon - In 6th IEEE International Symposium on High Performance Distributed Computation, 1997
Cited by 47 (10 self)
This paper describes PARDIS, a system carrying explicit support for interoperability of PARallel DIStributed applications. PARDIS is closely based on the Common Object Request Broker Architecture (CORBA) [OMG95]. Like CORBA, it provides interoperability between heterogeneous components by specifying their interfaces in a meta-language, the CORBA IDL, which can be translated into the language of interacting components, also providing interaction in a distributed domain. In order to provide support for interacting parallel applications, PARDIS extends the CORBA object model by a notion of an SPMD object. SPMD objects allow the request broker to interact directly with the distributed resources of a parallel application. To support distributed argument transfer, PARDIS introduces the notion of a distributed sequence --- a generalization of a CORBA sequence representing distributed data structures of parallel applications. In this report we will give a brief description of basic component i...

Citation Context

...mpiler-generated stubs. To date only one runtime system interface has been specified; it encompasses the functionality of message-passing libraries and has been tested using applications based on MPI [7] and the Tulip [2] run-time system. In the future PARDIS will provide an alternative run-time system interface capturing the functionality of the more flexible one-sided run-time systems. 3. Two Metho...

Globalized Newton–Krylov–Schwarz algorithms and software for parallel implicit CFD

by David Keyes, Lois Curfman McInnes, M. D. Tidriri - Int. J. High Perform. Comput. Appl
Cited by 46 (18 self)
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (ΨNKS) algorithmic framework is presented as a widely applicable answer. This article shows that for the classical problem of three-dimensional transonic Euler flow about an M6 wing, ΨNKS can simultaneously deliver globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of ΨNKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. The authors therefore distill several recommendations from their experience and reading of the literature on various algorithmic components of ΨNKS, and they describe a freely available MPI-based portable parallel software implementation of the solver employed here.
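
For orientation, the pseudo-transient continuation at the heart of ΨNKS can be sketched as follows; this is the standard formulation with one common timestep rule, not necessarily the exact variant used in the paper.

```latex
% Damped Newton step on F(u) = 0 with a growing pseudo-timestep:
\[
\left( \tfrac{1}{\Delta\tau_k} I + F'(u_k) \right) \delta u_k = -F(u_k),
\qquad u_{k+1} = u_k + \delta u_k .
\]
% Switched evolution relaxation (SER), one common timestep choice:
\[
\Delta\tau_k = \Delta\tau_0 \, \frac{\lVert F(u_0) \rVert}{\lVert F(u_k) \rVert},
\]
% so \Delta\tau_k grows as the residual shrinks and the iteration
% approaches an undamped Newton method.
```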

Citation Context

...to early synchronization is to divide an operation into two parts: an initiation and a completion (or ending) phase. For example, asynchronous I/O uses this approach. The MPI message-passing standard [56, 28] provides asynchronous operations; send and receive operations are divided into starting (e.g., MPI_Isend or MPI_Irecv) and completion (e.g., MPI_Wait) phases. PETSc takes the same multiphased approac...
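
The split-phase pattern described here is straightforward to show in C; a minimal sketch (assuming an even number of ranks, paired up arbitrarily):

```c
/* Split-phase communication as described above: initiate with MPI_Isend /
 * MPI_Irecv, overlap independent work, complete with MPI_Waitall. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double send = (double)rank, recv = 0.0;
    int peer = rank ^ 1;                     /* pairs: 0<->1, 2<->3, ...
                                                (assumes an even rank count) */
    MPI_Request reqs[2];

    /* Initiation phase: both calls return immediately. */
    MPI_Irecv(&recv, 1, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&send, 1, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    double work = 0.0;                       /* computation that overlaps */
    for (int i = 0; i < 1000000; i++)        /* the in-flight messages */
        work += 1e-9 * i;

    /* Completion phase: only now is recv safe to read (and send to reuse). */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d: received %.1f (overlap result %.3f)\n", rank, recv, work);

    MPI_Finalize();
    return 0;
}
```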
