Results 1 - 10 of 263
The anatomy of the Grid: Enabling scalable virtual organizations.
- The International Journal of High Performance Computing Applications
, 2001
"... Abstract "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In this article, we define this new field. First, ..."
Abstract
-
Cited by 2673 (86 self)
- Add to MetaCart
(Show Context)
Abstract "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In this article, we define this new field. First, we review the "Grid problem," which we define as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources-what we refer to as virtual organizations. In such settings, we encounter unique authentication, authorization, resource access, resource discovery, and other challenges. It is this class of problem that is addressed by Grid technologies. Next, we present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. We describe requirements that we believe any such mechanisms must satisfy and we discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, we discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. We maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
The physiology of the grid: An open grid services architecture for distributed systems integration
, 2002
"... In both e-business and e-science, we often need to integrate services across distributed, heterogeneous, dynamic “virtual organizations ” formed from the disparate resources within a single enterprise and/or from external resource sharing and service provider relationships. This integration can be t ..."
Abstract
-
Cited by 1377 (33 self)
- Add to MetaCart
(Show Context)
In both e-business and e-science, we often need to integrate services across distributed, heterogeneous, dynamic “virtual organizations” formed from the disparate resources within a single enterprise and/or from external resource sharing and service provider relationships. This integration can be technically challenging because of the need to achieve various qualities of service when running on top of different native platforms. We present an Open Grid Services Architecture that addresses these challenges. Building on concepts and technologies from the Grid and Web services communities, this architecture defines a uniform exposed service semantics (the Grid service); defines standard mechanisms for creating, naming, and discovering transient Grid service instances; provides location transparency and multiple protocol bindings for service instances; and supports integration with underlying native platform facilities. The Open Grid Services Architecture also defines, in terms of Web Services Description Language (WSDL) interfaces and associated conventions, mechanisms required for creating and composing sophisticated distributed systems, including lifetime management, change management, and notification. Service bindings can support reliable invocation, authentication, authorization, and delegation, if required. Our presentation complements an earlier foundational article, “The Anatomy of the Grid,” by describing how Grid mechanisms can implement a service-oriented architecture, explaining how Grid functionality can be incorporated into a Web services framework, and illustrating how our architecture can be applied within commercial computing as a basis for distributed system integration—within and across organizational domains. This is a DRAFT document and continues to be revised. The latest version can be found at
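The "transient service instance" pattern this abstract describes (factory creation, globally unique names, and explicit lifetime management) can be sketched in a few lines. This is a toy emulation only; the class names, lease semantics, and fields are invented for illustration, not OGSA's actual WSDL-defined interfaces.

```python
# Toy sketch of factory-created transient service instances with
# lease-based lifetime management (names and semantics are illustrative).
import itertools

class ServiceInstance:
    def __init__(self, handle, lifetime):
        self.handle = handle          # globally unique name, usable for discovery
        self.remaining = lifetime     # ticks until the instance expires

    def keep_alive(self, extra):
        self.remaining += extra       # client renews its lease

class Factory:
    _ids = itertools.count()

    def __init__(self):
        self.registry = {}            # discovery: handle -> live instance

    def create(self, lifetime=10):
        inst = ServiceInstance(f"svc-{next(self._ids)}", lifetime)
        self.registry[inst.handle] = inst
        return inst.handle

    def tick(self):
        # Instances whose lease ran out are reclaimed automatically,
        # so abandoned state does not accumulate in the framework.
        for h in list(self.registry):
            self.registry[h].remaining -= 1
            if self.registry[h].remaining <= 0:
                del self.registry[h]

f = Factory()
h = f.create(lifetime=2)
f.tick()
f.registry[h].keep_alive(3)           # renew once...
f.tick(); f.tick(); f.tick(); f.tick()
print(h in f.registry)  # False: the lease ran out after renewals stopped
```

The point of the pattern is that a client that crashes or loses interest simply stops renewing, and the instance is garbage-collected by the framework.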
The programming model of ASSIST, an environment for parallel and distributed portable applications
, 2002
"... A software development system based uponin0/----E30 skeleton technology (ASSIST)i a proposal of a new programmi/ enviammi/ oriiam to the development of parallel and di0 tri0C/3 hiC/3C/0:/EEC90 applii0:/E accordi to auniCfl approach. The mai goals are: hi:0CCP--C programmabiQQ9 and softwareproductiC/ ..."
Abstract
-
Cited by 100 (20 self)
- Add to MetaCart
A software development system based upon integrated skeleton technology (ASSIST) is a proposal of a new programming environment oriented to the development of parallel and distributed high-performance applications according to a unified approach. The main goals are: high-level programmability and software productivity for complex multidisciplinary applications, including data-intensive and interactive software; performance portability across different platforms, in particular large-scale platforms and grids; effective reuse of parallel software; efficient evolution of applications through versions that scale according to the underlying technologies. The purpose of this paper is to show the principles of the proposed approach in terms of the programming model (successive papers will deal with the environment implementation and with performance evaluation). The features and the characteristics of the ASSIST programming model are described according to an operational semantics style, using examples to drive the presentation, to show the expressive power and to discuss the research issues. According to our previous experience in structured parallel programming, in ASSIST we wish to overcome some limitations of the classical skeleton approach, to improve generality and flexibility, expressive power and efficiency for irregular, dynamic and interactive applications, as well as for complex combinations of task and data parallelism. A new paradigm ...
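The "skeleton" idea underlying ASSIST-style structured parallel programming is that parallel structure is expressed by composing predefined patterns (farm, pipeline, ...) rather than by explicit message passing. The following is a minimal sequential/threaded emulation of two classic skeletons; it illustrates only the compositional style, not ASSIST itself.

```python
# Minimal sketch of two classic algorithmic skeletons; the thread-backed
# pool keeps the example portable, it is not ASSIST's runtime.
from multiprocessing.dummy import Pool

def farm(worker, inputs, workers=4):
    # "farm" skeleton: apply the same worker to independent inputs in parallel.
    with Pool(workers) as pool:
        return pool.map(worker, inputs)

def pipeline(stages):
    # "pipeline" skeleton: compose stages into a single stream function.
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

stream = pipeline([lambda x: x + 1, lambda x: x * x])
print(farm(stream, [1, 2, 3]))  # [4, 9, 16]
```

The programmer states *what* pattern is used; the implementation is free to map it to threads, processes, or cluster nodes, which is the source of the performance portability the abstract aims at.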
A Component Architecture for LAM/MPI
- In Proceedings, 10th European PVM/MPI Users’ Group Meeting, number 2840 in Lecture Notes in Computer Science
, 2003
"... Abstract. To better manage the ever increasing complexity of LAM/MPI, we have created a lightweight component architecture for it that is specifically designed for high-performance message passing. This paper describes the basic design of the component architecture, as well as some of the particular ..."
Abstract
-
Cited by 98 (12 self)
- Add to MetaCart
(Show Context)
Abstract. To better manage the ever increasing complexity of LAM/MPI, we have created a lightweight component architecture for it that is specifically designed for high-performance message passing. This paper describes the basic design of the component architecture, as well as some of the particular component instances that constitute the latest release of LAM/MPI. Performance comparisons against the previous, monolithic, version of LAM/MPI show no performance impact due to the new architecture—in fact, the newest version is slightly faster. The modular and extensible nature of this implementation is intended to make it significantly easier to add new functionality and to conduct new research using LAM/MPI as a development platform.
Uintah: A Massively Parallel Problem Solving Environment
, 2000
"... This paper describes Uintah, a component-based visual problem solving environment (PSE) that is designed to specifically address the unique problems of massively parallel computation on terascale computing platforms. Uintah supports the entire life cycle of scientic applications by allowing scientif ..."
Abstract
-
Cited by 81 (16 self)
- Add to MetaCart
This paper describes Uintah, a component-based visual problem solving environment (PSE) that is designed to specifically address the unique problems of massively parallel computation on terascale computing platforms. Uintah supports the entire life cycle of scientific applications by allowing scientific programmers to quickly and easily develop new techniques, debug new implementations, and apply known algorithms to solve novel problems. Uintah is built on three principles: 1) as much as possible, the complexities of parallel execution should be handled for the scientist; 2) software should be reusable at the component level; and 3) scientists should be able to dynamically steer and visualize their simulation results as the simulation executes. To provide this functionality, Uintah builds upon the best features of the SCIRun PSE and the DoE Common Component Architecture (CCA).
Event Services for High Performance Computing
- In Proceedings of High Performance Distributed Computing (HPDC
, 2000
"... The Internet and the Grid are changing the face of high performance computing. Rather than tightly-coupled SPMD-style components running in a single cluster, on a parallel machine, or even on the Internet programmed in MPI, applications are evolving into sets of collaborating elements scattered acro ..."
Abstract
-
Cited by 68 (41 self)
- Add to MetaCart
(Show Context)
The Internet and the Grid are changing the face of high performance computing. Rather than tightly-coupled SPMD-style components running in a single cluster, on a parallel machine, or even on the Internet programmed in MPI, applications are evolving into sets of collaborating elements scattered across diverse computational elements. These collaborating components may run on different operating systems and hardware platforms and may be written by different organizations in different languages. Complete "applications" are constructed by assembling these components in a plug-and-play fashion. This new vision for high performance computing demands features and characteristics not easily provided by traditional high-performance communications middleware. In response to these needs, we have developed ECho, a high-performance event-delivery middleware that meets the new demands of the Grid environment. ECho provides efficient binary transmission of event data with unique features that support data-type discovery and enterprise-scale application evolution. We present measurements detailing ECho's performance to show that ECho significantly outperforms other systems intended to provide this functionality and provides throughput and latency comparable to the most efficient middleware infrastructures available.
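The typed event-channel model this abstract relies on can be illustrated with a small publish/subscribe sketch. The class and method names here are hypothetical, not ECho's actual API; the point is how type-tagged events decouple publishers from subscribers and let components evolve independently.

```python
# Illustrative typed publish/subscribe channel (names are invented,
# not ECho's API).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    type_name: str      # self-describing type tag enables data-type discovery
    payload: dict

class EventChannel:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Event], None]]] = {}

    def subscribe(self, type_name: str, handler: Callable[[Event], None]) -> None:
        self._subscribers.setdefault(type_name, []).append(handler)

    def publish(self, event: Event) -> None:
        # Deliver only to handlers registered for this event type; extra
        # payload fields are simply ignored by older handlers, one way
        # loosely coupled components can evolve without lockstep upgrades.
        for handler in self._subscribers.get(event.type_name, []):
            handler(event)

received = []
channel = EventChannel()
channel.subscribe("temperature", lambda e: received.append(e.payload["celsius"]))
channel.publish(Event("temperature", {"celsius": 21.5, "sensor": "A3"}))
channel.publish(Event("pressure", {"kpa": 101.3}))   # no subscriber: dropped
print(received)  # [21.5]
```

A real system like ECho additionally moves the payloads as efficient binary data across the network; the sketch only shows the channel abstraction.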
ICENI: Optimisation of Component Applications within a Grid Environment
- Journal of Parallel Computing
"... Eective exploitation of Computational Grids can only be achieved when applica-tions are fully integrated with the Grid middleware and the underlying computa-tional resources. Fundamental to this exploitation is information. Information about the structure and behaviour of the application, the capabi ..."
Abstract
-
Cited by 67 (8 self)
- Add to MetaCart
(Show Context)
Effective exploitation of Computational Grids can only be achieved when applications are fully integrated with the Grid middleware and the underlying computational resources. Fundamental to this exploitation is information: information about the structure and behaviour of the application, the capability of the computational and networking resources, and the availability and access to these resources by an individual, a group or an organisation. In this paper we describe ICENI (Imperial College e-Science Networked Infrastructure), a Grid middleware framework developed within the London e-Science Centre. ICENI is a platform-independent framework that uses open and extensible XML-derived protocols, within a framework built using Java and Jini, to explore effective application execution upon distributed federated resources. We match a high-level application specification, defined as a network of components, to an optimal combination of the currently available component implementations within our Grid environment, by utilising a system of composite performance modelling. We demonstrate the effectiveness of this architecture through high-level specification and solution of a set of linear equations by automatic and optimal resource and implementation selection.
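The matching step the abstract describes, choosing among available component implementations by composite performance modelling, reduces in its simplest form to minimising a predicted cost over combinations. The component names and costs below are invented for illustration; ICENI's actual models are far richer.

```python
# Minimal sketch of implementation selection via a composite (summed)
# performance model; candidates and costs are hypothetical.
from itertools import product

# Candidate implementations per abstract component, each with a predicted
# cost (e.g. estimated runtime in seconds) on currently available resources.
candidates = {
    "solver":    [("lu_sequential", 12.0), ("lu_parallel", 4.0)],
    "assembler": [("dense", 3.0), ("sparse", 1.5)],
}

def best_combination(candidates):
    # Exhaustively evaluate every combination and keep the one whose
    # composite predicted cost is lowest.
    names = list(candidates)
    best, best_cost = None, float("inf")
    for combo in product(*candidates.values()):
        cost = sum(c for _, c in combo)
        if cost < best_cost:
            best = dict(zip(names, (impl for impl, _ in combo)))
            best_cost = cost
    return best, best_cost

choice, cost = best_combination(candidates)
print(choice, cost)  # {'solver': 'lu_parallel', 'assembler': 'sparse'} 5.5
```

Real schedulers must also model communication between components and resource contention, which is why ICENI composes per-component models rather than treating costs as independent constants.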
A Component Architecture for High-Performance Scientific Computing
- Intl. J. High-Performance Computing Applications
, 2004
"... The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaborat ..."
Abstract
-
Cited by 63 (20 self)
- Add to MetaCart
The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components
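The plug-and-play style the CCA abstract describes rests on the provides/uses port pattern: a component registers services it offers, and other components obtain them through the framework without knowing which implementation is behind the port. The sketch below is a toy, not the CCA's actual (SIDL/Babel-based) interfaces.

```python
# Toy provides/uses port framework (names are illustrative, not CCA APIs).
class Framework:
    def __init__(self):
        self._provided = {}

    def add_provides_port(self, port_name, implementation):
        # A component registers a service it offers under a named port.
        self._provided[port_name] = implementation

    def get_port(self, port_name):
        # A "uses" dependency: the caller sees only the port interface,
        # never the provider's internals.
        return self._provided[port_name]

class IntegratorComponent:
    """Provides a 'quadrature' port: midpoint-rule integration."""
    def integrate(self, f, a, b, n=1000):
        h = (b - a) / n
        return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

fw = Framework()
fw.add_provides_port("quadrature", IntegratorComponent())

# A driver component uses the port without knowing which implementation
# was plugged in, so implementations are swappable at assembly time.
quad = fw.get_port("quadrature")
result = quad.integrate(lambda x: x * x, 0.0, 1.0)
print(result)  # close to 1/3
```

Swapping `IntegratorComponent` for, say, a Gauss-quadrature component changes nothing in the driver, which is exactly the "minimal requirements on components" property the abstract ends on.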
Efficient Wire Formats for High Performance Computing
- IN PROCEEDINGS OF SUPERCOMPUTING 2000
, 2000
"... ... in non-traditional circumstances where it must interoperate with other applications. For example, online visualization is being used to monitor the progress of applications, and real-world sensors are used as inputs to simulations. Whenever these situations arise, there is a question of what com ..."
Abstract
-
Cited by 59 (26 self)
- Add to MetaCart
... in non-traditional circumstances where it must interoperate with other applications. For example, online visualization is being used to monitor the progress of applications, and real-world sensors are used as inputs to simulations. Whenever these situations arise, there is a question of what communications infrastructure should be used to link the different components. Traditional HPC-style communications systems such as MPI offer relatively high performance, but are poorly suited for developing these less tightly-coupled cooperating applications. Object-based systems and meta-data formats like XML offer substantial plug-and-play flexibility, but with substantially lower performance. We observe that the flexibility and baseline performance of all these systems is strongly determined by their 'wire format', or how they represent data for transmission in a heterogeneous environment. We examine the performance implications of different wire formats and present an alternative with significant advantages in terms of both performance and flexibility.
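The size side of the tradeoff the abstract describes is easy to see by encoding the same record two ways: as XML text and as a fixed-layout binary message. The record layout here is invented for the example; the paper's actual formats (and its self-describing binary alternative) are more sophisticated.

```python
# Same record as XML text vs. a fixed binary layout (layout is invented).
import struct

record = {"step": 1024, "time": 3.5, "energy": -1.25e6}

xml_wire = (
    "<sample><step>{step}</step><time>{time}</time>"
    "<energy>{energy}</energy></sample>".format(**record).encode("ascii")
)

# Network byte order: one 32-bit int followed by two 64-bit doubles.
binary_wire = struct.pack("!idd", record["step"], record["time"], record["energy"])

print(len(xml_wire), len(binary_wire))  # binary is 20 bytes; XML is several times larger
```

The binary form is a fraction of the XML size and needs no text parsing, at the cost of both endpoints agreeing on (or being able to discover) the layout, which is precisely the flexibility/performance tension the paper examines.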
The CCA core specification in a distributed memory SPMD framework. Concurrency and Computation: Practice and Experience
"... We present an overview of the CCA core specification and CCAFFEINE, a Sandia National Laboratories framework implementation compliant with the draft specification. CCAFFEINE stands for CCA Fast Framework Example In Need of Everything; that is, CCAFFEINE is fast, lightweight, and it aims to provide e ..."
Abstract
-
Cited by 59 (14 self)
- Add to MetaCart
(Show Context)
We present an overview of the CCA core specification and CCAFFEINE, a Sandia National Laboratories framework implementation compliant with the draft specification. CCAFFEINE stands for CCA Fast Framework Example In Need of Everything; that is, CCAFFEINE is fast, lightweight, and it aims to provide every "framework service" by using external, portable components instead of integrating all services into a single, heavy framework core. By fast, we mean that the CCAFFEINE glue does not get between components in a way that slows down their interactions. We present the CCAFFEINE solutions to several fundamental problems in the application of component software approaches to the construction of SPMD applications. We demonstrate the integration of components from three