Results 1 - 10 of 12
Orca: A language for parallel programming of distributed systems
- IEEE Transactions on Software Engineering, 1992
Cited by 332 (46 self)
Abstract:
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goals to provide a simple, easy-to-use language that is type-secure and provides clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications. The measurements show that significant speedups can be obtained for all three applications. Finally, the paper compares Orca with several related languages and systems.
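The shared data-object model summarized above can be illustrated with a short sketch. This is a Python analogue rather than Orca syntax: a lock stands in for the runtime's guarantee that operations on a shared object execute indivisibly, and the class name and operations are hypothetical.

```python
import threading

class SharedCounter:
    """Illustrative analogue of an Orca shared data-object: a user-defined
    abstract data type whose operations are applied atomically. In Orca the
    runtime also replicates or migrates the object across machines
    transparently; here a single lock models only the atomicity."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def add(self, n):
        # One operation of the abstract type; executes indivisibly.
        with self._lock:
            self._value += n
            return self._value

    def read(self):
        with self._lock:
            return self._value

# Several workers (threads here, processes on different machines in Orca)
# share the object by reference; each operation is atomic.
obj = SharedCounter()
threads = [threading.Thread(target=obj.add, args=(1,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(obj.read())  # 8
```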
Distributed programming with shared data
- Computer Languages, 1988
Cited by 84 (16 self)
Abstract:
Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared variable paradigm without the presence of physical shared memory. In this paper we will look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further, and discuss how automatic replication (initiated by the run time system) can be used as a basis for a new model, called the shared data-object model, whose semantics are similar to the shared variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
Orca: A Language For Distributed Programming
- ACM SIGPLAN Notices, 1990
Cited by 38 (0 self)
Abstract:
We present a simple model of shared data-objects, which extends the abstract data type model to support distributed programming. Our model essentially provides shared address space semantics, rather than message passing semantics, without requiring physical shared memory to be present in the target system. We also propose a new programming language, Orca, based on shared data-objects. A compiler and three different run time systems for Orca exist, which have been in use for over a year now.
Replication Techniques For Speeding Up Parallel Applications On Distributed Systems, 1992
Cited by 34 (6 self)
Abstract:
This paper discusses the design choices involved in replicating objects and their effect on performance. Important issues are: how to maintain consistency among different copies of an object; how to implement changes to objects; and which strategy for object replication to use. We have implemented several options to determine which ones are most efficient.
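One of the design choices the abstract raises, how to propagate changes to the copies of a replicated object, can be sketched as follows. This is a hedged illustration of a write-update strategy (broadcast each change to all copies so reads stay local); the class and method names are assumptions, not taken from the paper, and the alternative write-invalidate strategy would instead discard stale copies.

```python
class Replica:
    """One node's local copy of a replicated object."""
    def __init__(self):
        self.value = 0

class WriteUpdateManager:
    """Illustrative write-update replication: every write is applied to
    all copies, so any node can read its local copy without
    communication. Consistency is maintained because all copies see the
    same sequence of updates."""

    def __init__(self, n_nodes):
        self.replicas = [Replica() for _ in range(n_nodes)]

    def write(self, value):
        # Broadcast the new state to every copy (the update strategy).
        for r in self.replicas:
            r.value = value

    def read(self, node):
        # Local read: no remote communication needed.
        return self.replicas[node].value

mgr = WriteUpdateManager(4)
mgr.write(42)
print(all(mgr.read(i) == 42 for i in range(4)))  # True
```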
A Decentralized Naming Facility, 1986
Cited by 5 (0 self)
Abstract:
A key component in distributed computer systems is the naming facility: the means by which high-level names are bound to objects and by which objects are located given only their names. We describe the design, implementation, and performance of a decentralized naming facility, in which the global name space and name mapping mechanism are implemented by a set of cooperating peers, with no central authority. Decentralization is shown to lend increased extensibility and reliability to the design. Efficiency in name mapping is achieved through specialized caching techniques.
Categories and Subject Descriptors: C.2.4 [Computer Systems Organization]: Distributed Systems; D.4.3 [Operating Systems]: File Systems Management - directory structures, distributed file systems; D.4.7 [Operating Systems]: Organization and Design
General Terms: Design, experimentation, measurement, performance, reliability
Additional Key Words and Phrases: Naming, distributed system, fault tolerance, cache
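The peer-based mapping with caching that the abstract describes can be sketched roughly as below. All names and identifiers here are illustrative assumptions: each peer is authoritative for part of the name space, forwards lookups for foreign names to its peers, and caches the answers to speed later lookups.

```python
class NamePeer:
    """Illustrative sketch of decentralized naming: no central authority,
    cooperating peers, and a per-peer cache of remote bindings."""

    def __init__(self, owned):
        self.owned = owned   # name -> object; this peer is authoritative
        self.cache = {}      # name -> object; cached from other peers
        self.peers = []      # cooperating peers, populated after setup

    def lookup(self, name):
        if name in self.owned:
            return self.owned[name]
        if name in self.cache:
            return self.cache[name]        # cache hit: no peer contacted
        for peer in self.peers:            # ask cooperating peers
            if name in peer.owned:
                self.cache[name] = peer.owned[name]
                return self.cache[name]
        raise KeyError(name)

# Hypothetical name space split across two peers.
a = NamePeer({"/printers/laser1": "host-a:9100"})
b = NamePeer({"/volumes/src": "host-b:nfs"})
a.peers, b.peers = [b], [a]

print(a.lookup("/volumes/src"))   # forwarded to b, then cached locally
print("/volumes/src" in a.cache)  # True
```

A second lookup of the same name is served from a's cache, which is the efficiency mechanism the abstract credits for fast name mapping.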
An Architecture Supporting Loose and Close Cooperation of Distributed Autonomous Systems
Cited by 1 (1 self)
Abstract:
There is a general trend in designing distributed control systems to give an increasing amount of autonomy to the individual nodes of such systems. Autonomous nodes interact only loosely. But also, close cooperation under hard real-time constraints is required in certain situations. This paper analyzes approaches for structuring distributed control systems and presents an architecture integrating object-oriented frameworks, publisher/subscriber communication, and hard and soft real-time communication. For supporting mobile robots, wireless communication is considered in particular.
Exporting a User Interface to Memory Management from a Communication-Oriented Operating System
The Design of Synchronization Mechanisms for Peer-to-Peer
Abstract:
Massively multiplayer online games (MMG) are natural applications for peer-to-peer overlays (P2P). However, the absence of synchronization among the peers significantly limits games' functionality and quality of service. Synchronization in P2P overlays faces problems including scalability, heterogeneous network latency and bandwidth, and non-collaborative participants. We believe these problems can be solved in the context of games, by relaxing the consistency requirements based on application semantics. This paper discusses two synchronization mechanisms that can potentially both achieve scalability in the P2P environment and observe real-time requirements in massively multiplayer games.
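The idea of relaxing consistency based on application semantics can be illustrated with a small sketch. The policy and all names here are assumptions, not the paper's mechanisms: game events are delivered in timestamp order, but an event arriving after a tolerance window is simply dropped instead of forcing a rollback, trading strict consistency for real-time delivery.

```python
import heapq

class RelaxedEventQueue:
    """Illustrative relaxed-consistency delivery for game events:
    in-order delivery within a tolerance window, late events discarded
    rather than reconciled, which suits games better than strict
    synchronization."""

    def __init__(self, tolerance):
        self.tolerance = tolerance     # how late an event may arrive
        self.heap = []                 # pending (timestamp, event) pairs
        self.delivered_until = 0       # latest delivery horizon

    def receive(self, timestamp, event):
        if timestamp < self.delivered_until - self.tolerance:
            return False               # too late: drop, no rollback
        heapq.heappush(self.heap, (timestamp, event))
        return True

    def deliver(self, now):
        out = []
        while self.heap and self.heap[0][0] <= now:
            _, ev = heapq.heappop(self.heap)
            out.append(ev)             # delivered in timestamp order
        self.delivered_until = max(self.delivered_until, now)
        return out

q = RelaxedEventQueue(tolerance=1)
q.receive(1, "move")
q.receive(3, "fire")
print(q.deliver(now=2))     # ['move']
print(q.receive(0, "stale"))  # False: beyond the tolerance window
```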
Distribution Unlimited, 1981
Abstract:
This report examines a number of advanced military information processing problems that entail computational tasks distributed over space. Communicational restrictions and other factors in these applications make it appropriate to consider networks of loosely coupled distributed artificial-intelligence (DAI) systems. Among the important conceptual difficulties of designing such networks is the problem of representing and using information about what one part of the network "believes" about another part. We consider in some detail various aspects of this problem and briefly describe some potential solutions.