Results 1 - 10 of 15
Orca: A language for parallel programming of distributed systems
- IEEE Transactions on Software Engineering, 1992
Cited by 332 (46 self)
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goals to provide a simple, easy-to-use language that is type-secure and provides clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications. The measurements show that significant speedups can be obtained for all three applications. Finally, the paper compares Orca with several related languages and systems.
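Orca's central abstraction, a shared object whose user-defined operations execute indivisibly, can be approximated in a single address space by guarding every operation of an abstract data type with a per-object lock. The sketch below is illustrative Python, not Orca syntax; the class and method names are invented for the example, and the lock merely stands in for the runtime's serialization of operations on a shared object.

```python
import threading

class SharedObject:
    """Rough analogue of an Orca data-object: a user-defined abstract
    data type whose operations execute atomically."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # stands in for the runtime's serialization

    def inc(self, amount=1):
        # A write operation, atomic with respect to all other operations.
        with self._lock:
            self._value += amount
            return self._value

    def read(self):
        # A read operation; in Orca this could be served by a local replica.
        with self._lock:
            return self._value

counter = SharedObject()
threads = [threading.Thread(target=counter.inc) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.read())  # 8
```

In the real system the processes live on different machines and the runtime, not a lock, guarantees atomicity, but the programming model seen by the application is the same.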
An Architectural Overview Of The Alpha Real-Time Distributed Kernel
- In Proceedings of the USENIX Workshop on Microkernels and Other Kernel Architectures, 1993
Cited by 51 (6 self)
Alpha is a non-proprietary experimental operating system kernel which extends the real-time domain to encompass distributed applications, such as for telecommunications, factory automation, and defense. Distributed real-time systems are inherently asynchronous, dynamic, and non-deterministic, and yet are nonetheless mission-critical. The increasing complexity and pace of these systems preclude the historical reliance solely on human operators for assuring system dependability under uncertainty. Traditional real-time OS technology is based on attempting to assert or impose determinism of not just the ends but also the means, for centralized low-level sampled-data monitoring and control, with an insufficiency of hardware resources. Conventional distributed OS technology is primarily based on two-party client/server hierarchies for explicit resource sharing in networks of autonomous users. These two technological paradigms are special cases which cannot be combined and scaled up...
BONITA: A set of tuple space primitives for distributed coordination
1997
Cited by 45 (6 self)
In the last few years the use of distributed structured shared memory paradigms for coordination between parallel processes has become common. One of the best-known implementations of this paradigm is the shared tuple space model (as used in Linda). In this paper we describe a new set of primitives for fully distributed coordination of processes and agents using tuple spaces, called the Bonita primitives. The Linda primitives provide synchronous access to tuple spaces, whereas the Bonita primitives provide asynchronous access to tuple spaces. The proposed primitives are able to mimic the Linda primitives, therefore providing the ease of use and expressibility of Linda together with a number of advantages for the coordination of agents or processes in distributed environments. The primitives allow user processes to perform computation concurrently with tuple space accesses, and provide new coordination constructs which lead to more efficient programs. In this paper we present the ...
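The synchronous/asynchronous contrast the abstract draws can be sketched in a toy tuple space: a Linda-style `in` blocks the caller until a match exists, while a Bonita-style access is split into a non-blocking request and a later collect, so the caller can compute while the match is resolved. The primitive names `dispatch` and `obtain` below are loosely modeled on the paper's primitives, but this in-process implementation is an assumption of the sketch, not the Bonita system.

```python
import queue
import threading
import uuid

class TupleSpace:
    """Toy tuple space contrasting Linda's blocking 'in' with a
    Bonita-style split into a non-blocking request and a later collect."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()
        self._pending = {}  # request id -> result slot

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def linda_in(self, pattern):
        # Linda semantics: the caller blocks until a matching tuple exists.
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._match(pattern, t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

    def dispatch(self, pattern):
        # Bonita-style request: return a handle at once; the match is
        # resolved in the background while the caller keeps computing.
        rid = uuid.uuid4().hex
        slot = queue.Queue(maxsize=1)
        self._pending[rid] = slot
        threading.Thread(
            target=lambda: slot.put(self.linda_in(pattern)),
            daemon=True,
        ).start()
        return rid

    def obtain(self, rid):
        # Block only at the point the result is actually needed.
        return self._pending.pop(rid).get()

    @staticmethod
    def _match(pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

ts = TupleSpace()
rid = ts.dispatch(("job", None))  # returns immediately; no tuple yet
ts.out(("job", 42))               # a producer later adds a match
got = ts.obtain(rid)
print(got)  # ('job', 42)
```

The point of the split is visible in the last four lines: `dispatch` returns before any matching tuple exists, whereas a Linda `in` at that point would have deadlocked the single caller.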
Experience with distributed programming in Orca
- In Proc. IEEE CS International Conference on Computer Languages, 1990
Cited by 43 (10 self)
Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, Orca lets programmers define abstract data types and create instances (objects) of these types, which may be shared among processes. All operations on shared objects are executed atomically. Orca's shared objects are implemented by replicating them in the local memories of the processors. Read operations use the local copies of the object, without doing any interprocess communication. Write operations update all copies using an efficient reliable broadcast protocol. In this paper, we briefly describe the language and its implementation and then report on our experiences in using Orca for three parallel applications: the Traveling Salesman Problem, the All-pairs Shortest Paths problem, and Successive Overrelaxation. These applications have different needs for shared data: TSP greatly benefits from the support for shared data; ASP benefits from the use of broadcast communication, even though it is hidden in the implementation; SOR merely requires point-to-point communication, but still can be implemented in the language by simulating message passing.
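The read-local, write-broadcast scheme described above can be mimicked in a few lines. The "broadcast" here is simulated by applying each write to every copy in the same order, standing in for the reliable, totally ordered broadcast protocol; the class and its interface are invented for illustration.

```python
class ReplicatedObject:
    """Sketch of the read-local, write-broadcast scheme: every processor
    holds a copy of the object, reads touch only the local copy, and a
    write is applied to all copies in the same order."""

    def __init__(self, n_processors, initial=0):
        self._copies = [initial] * n_processors  # one copy per local memory

    def read(self, processor):
        # Read operation: served from the local copy, no communication.
        return self._copies[processor]

    def write(self, update):
        # Write operation: in Orca this would go through a reliable,
        # totally ordered broadcast; here we just apply it everywhere.
        for p in range(len(self._copies)):
            self._copies[p] = update(self._copies[p])

obj = ReplicatedObject(n_processors=4)
obj.write(lambda v: v + 10)
snapshot = [obj.read(p) for p in range(4)]
print(snapshot)  # [10, 10, 10, 10]
```

This makes the performance trade-off in the abstract concrete: reads are free of communication, so read-mostly applications such as TSP's global bound benefit, while every write pays the cost of updating all copies.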
Orca: A Language For Distributed Programming
- ACM SIGPLAN Notices, 1990
Cited by 38 (0 self)
We present a simple model of shared data-objects, which extends the abstract data type model to support distributed programming. Our model essentially provides shared address space semantics, rather than message passing semantics, without requiring physical shared memory to be present in the target system. We also propose a new programming language, Orca, based on shared data-objects. A compiler and three different run time systems for Orca exist, which have been in use for over a year now.
Replication Techniques For Speeding Up Parallel Applications On Distributed Systems
1992
Cited by 34 (6 self)
This paper discusses the design choices involved in replicating objects and their effect on performance. Important issues are: how to maintain consistency among different copies of an object; how to implement changes to objects; and which strategy for object replication to use. We have implemented several options to determine which ones are most efficient.
A Comparative Study Of Five Parallel Programming Languages
- In Distributed Open Systems, 1991
Cited by 21 (2 self)
Many different paradigms for parallel programming exist, almost all of which are employed in dozens of languages. Several researchers have tried to compare these languages and paradigms by examining the expressivity and flexibility of their constructs. Few attempts have been made, however, at practical studies based on actual programming experience with multiple languages. Such a study is the topic of this paper. We will look at five parallel languages, all based on different paradigms. The languages are: SR (based on message passing), Emerald (concurrent objects), Parlog (parallel Horn clause logic), Linda (Tuple Space), and Orca (logically shared data). We have implemented the same parallel programs in each language, using real parallel machines. The paper reports on our experiences in implementing three frequently occurring communication patterns: message passing through a mailbox, one-to-many communication, and access to replicated shared data.
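The first of the three communication patterns, message passing through a mailbox, reduces in a shared-memory setting to a FIFO queue with a blocking receive. A minimal sketch (the names are illustrative, not taken from any of the five languages):

```python
import queue
import threading

# Pattern 1 of the study, message passing through a mailbox: senders
# deposit messages, one receiver consumes them in FIFO order.
mailbox = queue.Queue()
results = []

def receiver(n):
    for _ in range(n):
        results.append(mailbox.get())  # blocks until a message arrives

t = threading.Thread(target=receiver, args=(3,))
t.start()
for i in range(3):
    mailbox.put(i)  # send: non-blocking for an unbounded mailbox
t.join()
print(results)  # [0, 1, 2]
```

The study's other two patterns differ mainly in fan-out: one-to-many communication delivers each message to every receiver's mailbox, and replicated shared data replaces explicit messages altogether, as in Orca.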
A Model for Persistent Shared Memory Addressing in Distributed Systems
1992
Cited by 10 (1 self)
COOL v2 is an object-oriented persistent computing system for distributed programming. With COOL v2, C++ objects can be persistent and shared freely between applications and distributed across sites in a manner completely transparent to the programmer. To address the problems of maintaining distributed shared data coherency, data persistency and address allocation coherency, we developed the persistent context space model, which encapsulates distributed shared memory and persistent memory, and controls distributed shared memory address allocation. This paper outlines existing solutions for object addressing in persistent and distributed environments and contrasts these with the persistent context space model and its integration in an operating system architecture.
Garbage Collection in Open Distributed Tuple Space Systems
- In Proc. 15th Brazilian Computer Networks Symposium (SBRC'97), 1997
Cited by 8 (4 self)
This paper demonstrates the need for garbage collection in open distributed systems with multiple tuple spaces, of which Linda is the best-known example, and identifies problems involved in incorporating garbage collection into such systems. We concern ourselves with open implementations, as the existence of a garbage collector is essential in this environment. The extension of Linda to include multiple tuple spaces has introduced this new problem: processes are now able to create tuple spaces, spawn other processes into these tuple spaces, and store tuples (data) into these tuple spaces, but are unable to delete any of the objects (tuples, tuple spaces and processes) or even decide about their usefulness. In this paper we begin by showing that the main problem in introducing garbage collection into Linda is the lack of sufficient information about the usefulness of Linda objects. We then describe techniques for maintaining a structure to be used by a garbage collection algorithm of tuple spaces....
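The reachability view suggested by the abstract, that a tuple space is garbage once no process can name it, can be sketched as a mark-and-sweep over a graph of tuple-space handles. The `Universe` model below is an invented simplification for illustration, not the paper's algorithm; it ignores tuples and processes and keeps only the handle graph.

```python
class Universe:
    """Invented model of multiple tuple spaces for a mark-and-sweep
    collector: a space is live if reachable from a process root,
    directly or via a tuple-space handle stored in a live space."""

    def __init__(self):
        self.spaces = {}  # space name -> set of handles it stores

    def create(self, name):
        self.spaces[name] = set()

    def out_handle(self, space, handle):
        # A tuple containing a handle to another tuple space.
        self.spaces[space].add(handle)

    def collect(self, roots):
        # Mark: everything reachable from handles held by processes.
        live, frontier = set(), list(roots)
        while frontier:
            s = frontier.pop()
            if s in live or s not in self.spaces:
                continue
            live.add(s)
            frontier.extend(self.spaces[s])
        # Sweep: unreachable spaces, and the tuples in them, are garbage.
        garbage = set(self.spaces) - live
        for s in garbage:
            del self.spaces[s]
        return garbage

u = Universe()
for name in ("a", "b", "c"):
    u.create(name)
u.out_handle("a", "b")            # space "a" stores a handle to "b"
collected = u.collect(roots={"a"})
print(sorted(collected))  # ['c']
```

In an open system the hard part, as the paper argues, is that the roots are not known globally: any process may still hold a handle, which is exactly the "lack of sufficient information" the abstract refers to.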
Implementing Parallel Algorithms based on Prototype Evaluation and Transformation
- Department of Computer Science, University of Dortmund, 1997
Cited by 3 (1 self)
Combining parallel programming with prototyping is aimed at alleviating parallel programming by enabling the programmer to make practical experiments with ideas for parallel algorithms at a high level, neglecting low-level considerations of specific parallel architectures in the beginning of program development. Therefore, prototyping parallel algorithms is aimed at bridging the gap between conceptual design of parallel algorithms and practical implementation on specific parallel systems. The essential prototyping activities are programming, evaluation and transformation of prototypes. This paper gives a report on some experience with implementing parallel algorithms based on prototype evaluation and transformation employing the ProSet-Linda approach.