Results 1 - 10 of 20
Orca: A language for parallel programming of distributed systems
- IEEE Transactions on Software Engineering
, 1992
"... Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data ..."
Abstract - Cited by 332 (46 self)
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goal of providing a simple, easy-to-use language that is type-secure and has clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications; they show that significant speedups can be obtained for all three. Finally, the paper compares Orca with several related languages and systems.
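The shared data-object idea is easiest to see in code. The sketch below is hypothetical Java, not Orca syntax, and the names are invented; it only shows what a user-defined data-object looks like from the programmer's side. Operations on the object appear indivisible, and whether the runtime replicates or migrates the object is invisible to the caller.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical Java rendering of an Orca-style data-object: a shared job queue.
// In Orca the operations would be declared on a user-defined abstract data type
// and executed indivisibly; the runtime may replicate or migrate the object.
public class SharedJobQueue {
    private final Deque<String> jobs = new ArrayDeque<>();

    // Each operation is atomic with respect to the others.
    public synchronized void addJob(String job) {
        jobs.addLast(job);
        notifyAll();                 // wake workers blocked in getJob()
    }

    public synchronized String getJob() throws InterruptedException {
        while (jobs.isEmpty()) {
            wait();                  // Orca would express this as a guarded operation
        }
        return jobs.removeFirst();
    }
}
```

Worker processes on different machines would simply call addJob and getJob; whether the object is replicated, migrated, or kept on one processor is an implementation decision, not application code.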
Analysis of Inheritance Anomaly in Object-Oriented Concurrent Programming Languages
, 1993
"... It has been pointed out that inheritance and synchronization constraints in concurrent object systems often conflict with each other, resulting in inheritance anomaly where re-definitions of inherited methods are necessary in order to maintain the integrity of concurrent objects. The anomaly is seri ..."
Abstract - Cited by 180 (2 self)
It has been pointed out that inheritance and synchronization constraints in concurrent object systems often conflict with each other, resulting in the inheritance anomaly, where re-definitions of inherited methods are necessary in order to maintain the integrity of concurrent objects. The anomaly is serious, as it could nullify the benefits of inheritance altogether. Several proposals have been made for resolving the anomaly; however, we argue that those proposals suffer from incompleteness that leaves room for counterexamples. We give an overview and analysis of the inheritance anomaly, and review several proposals for minimizing the unwanted effect of this phenomenon. In particular, we propose (partial) solutions using (1) computational reflection and (2) transactions in OOCP languages. 1 Introduction Inheritance is the prime language feature in sequential OO (Object-Oriented) languages, and is especially important for code re-use. Another important feature is concurrency; although...
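A minimal illustration of the anomaly, in hypothetical Java rather than any of the languages analysed in the paper (class and method names are invented): a subclass that only wants to add one method is forced to re-define the synchronization-related behaviour of the methods it inherits.

```java
// A bounded buffer whose synchronization is written with wait/notify.
public class Buffer {
    protected final Object[] slots;
    protected int count = 0, in = 0, out = 0;

    public Buffer(int size) { slots = new Object[size]; }

    public synchronized void put(Object x) throws InterruptedException {
        while (count == slots.length) wait();
        slots[in] = x; in = (in + 1) % slots.length; count++;
        notifyAll();
    }

    public synchronized Object get() throws InterruptedException {
        while (count == 0) wait();
        Object x = slots[out]; out = (out + 1) % slots.length; count--;
        notifyAll();
        return x;
    }
}

// The subclass adds gget(), which must not run immediately after a put().
// Its guard depends on history that the superclass does not record, so the
// inherited put() and get() must be overridden even though their functional
// behaviour is unchanged: this forced re-definition is the inheritance anomaly.
public class HistoryBuffer extends Buffer {
    private boolean afterPut = false;

    public HistoryBuffer(int size) { super(size); }

    @Override
    public synchronized void put(Object x) throws InterruptedException {
        super.put(x);
        afterPut = true;             // overridden only to maintain the new guard
    }

    @Override
    public synchronized Object get() throws InterruptedException {
        Object x = super.get();
        afterPut = false;
        return x;
    }

    public synchronized Object gget() throws InterruptedException {
        while (count == 0 || afterPut) wait();
        return get();
    }
}
```

History-sensitive constraints of this kind are one of the situations the paper identifies as provoking re-definitions of inherited code.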
Distributed programming with shared data
- Computer Languages
, 1988
"... Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed s ..."
Abstract - Cited by 84 (16 self)
Until recently, at least one thing was clear about parallel programming: tightly coupled (shared-memory) machines were programmed in a language based on shared variables, and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared-variable paradigm without the presence of physical shared memory. In this paper we look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further and discuss how automatic replication (initiated by the run-time system) can be used as the basis for a new model, called the shared data-object model, whose semantics are similar to the shared-variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
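The automatic-replication idea can be sketched as follows; this is hypothetical Java with invented names, deliberately collapsing a distributed system into one process. Reads are served from a local copy, writes are applied to all copies, and the global lock stands in for the reliable, totally ordered broadcast a real run-time system would use to keep the copies consistent.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Simplified write-update replication of a shared data-object.
public class ReplicatedCounter {
    // One local copy per "processor"; all copies register themselves here.
    private static final List<ReplicatedCounter> replicas = new CopyOnWriteArrayList<>();

    private long value = 0;

    public ReplicatedCounter() { replicas.add(this); }

    // Read operation: answered entirely from the local copy, no communication.
    public synchronized long read() { return value; }

    // Write operation: applied to every copy. The lock on the replica list
    // plays the role of a totally ordered broadcast, so all copies observe
    // the same sequence of updates.
    public static void add(long delta) {
        synchronized (replicas) {
            for (ReplicatedCounter r : replicas) {
                synchronized (r) { r.value += delta; }
            }
        }
    }
}
```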
pSather: Layered Extensions to an Object-Oriented Language for Efficient Parallel Computation
, 1993
"... pSather is a parallel extension of the existing object-oriented language Sather. It offers a shared-memory programming model which integrates both control- and dataparallel extensions. This integration increases the flexibility of the language to express different algorithms and data structures, esp ..."
Abstract - Cited by 33 (3 self)
pSather is a parallel extension of the existing object-oriented language Sather. It offers a shared-memory programming model which integrates both control- and data-parallel extensions. This integration increases the flexibility of the language to express different algorithms and data structures, especially on distributed-memory machines (e.g. CM-5). This report describes our design objectives and the programming language pSather in detail.
A comparison of two paradigms for distributed shared memory
- Software: Practice and Experience
, 1992
"... ..."
PANDA - Supporting Distributed Programming in C++
- Proc. of ECOOP’93, LNCS
, 1993
"... : PANDA is a run-time package based on a very small operating system kernel which supports distributed applications written in C++. It provides powerful abstractions such as very efficient user-level threads, a uniform global address space, object and thread mobility, garbage collection, and persist ..."
Abstract - Cited by 23 (1 self)
PANDA is a run-time package based on a very small operating system kernel which supports distributed applications written in C++. It provides powerful abstractions such as very efficient user-level threads, a uniform global address space, object and thread mobility, garbage collection, and persistent objects. The paper discusses the design rationales underlying the PANDA system. The fundamental features of PANDA are surveyed, and their implementation in the current prototype environment is outlined. 1. Introduction Systems for parallel and distributed object-oriented programming can be classified into two basic categories. Firstly, there is a variety of programming languages developed especially to serve experimental purposes. Different object models for parallel and distributed programming can be investigated by designing and working with such systems. Some examples of languages in this area are Emerald [Jul et al. 88], Pool [America and van der Linden 90], Sloop [Lucco 87], and Or...
PRELUDE: A System for Portable Parallel Software
, 1991
"... In this paper we describe PRELUDE, a programming language and accompanying system support for writing portable MIMD parallel programs. PRELUDE supports a methodology for designing and orga. nizing parallel programs that makes them easier to tune for particular architectures and to port to new archit ..."
Abstract - Cited by 21 (9 self)
In this paper we describe PRELUDE, a programming language and accompanying system support for writing portable MIMD parallel programs. PRELUDE supports a methodology for designing and organizing parallel programs that makes them easier to tune for particular architectures and to port to new architectures. It builds on earlier work on Emerald, Amber, and various Fortran extensions to allow the programmer to divide programs into architecture-dependent and architecture-independent parts, and then to change the architecture-dependent parts to port the program to a new machine or to tune its performance on a single machine. The architecture-dependent parts of a program are specified by annotations that describe the mapping of a program onto a machine. PRELUDE provides a variety of mapping mechanisms similar to those in other systems, including remote procedure call, object migration, and data replication and partitioning. In addition, PRELUDE includes novel migration mechanisms for computations based on a form of continuation passing. The implementation of object migration in PRELUDE uses a novel approach based on fixup blocks that is more efficient than previous approaches, and amortizes the cost of each migration so that the cost per migration drops as the frequency of migrations increases.
Language Features for Re-use and Extensibility in Concurrent Object-Oriented Programming Languages
, 1993
"... ..."
A system for building scalable parallel applications
, 1992
"... One of the major problems with programming scalable multicomputer systems is defining appropriate abstractions for the programmer. In order to allow applications to scale their resource consumption according to run-time conditions, we propose a view of a scalable system which treats memory as a coll ..."
Abstract - Cited by 3 (3 self)
One of the major problems with programming scalable multicomputer systems is defining appropriate abstractions for the programmer. In order to allow applications to scale their resource consumption according to run-time conditions, we propose a view of a scalable system which treats memory as a collection of programmer-defined and -extensible data structures. These structures may be transparently distributed across the nodes of the system while still presenting a single-entity abstraction to the programmer. The implementation of such structures, as a kernel for a programming environment, is presented, along with examples of their use.
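The single-entity abstraction over a distributed structure can be made concrete with a small sketch; this is hypothetical Java and not the kernel interface the paper describes. The caller sees one map, while the implementation hash-partitions the entries across per-node stores.

```java
import java.util.HashMap;
import java.util.Map;

// One logical map, physically split across per-node partitions.
public class PartitionedMap<K, V> {
    private final Map<K, V>[] partitions;

    @SuppressWarnings("unchecked")
    public PartitionedMap(int nodes) {
        partitions = new Map[nodes];
        for (int i = 0; i < nodes; i++) partitions[i] = new HashMap<>();
    }

    // Route each key to the partition that owns it; in a real system this
    // would be a message to a remote node rather than a local array index.
    private Map<K, V> home(K key) {
        return partitions[Math.floorMod(key.hashCode(), partitions.length)];
    }

    public V get(K key)         { return home(key).get(key); }
    public void put(K key, V v) { home(key).put(key, v); }
}
```

Scaling the structure then amounts to changing the number of partitions and the routing function, without touching the code that uses the map.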
Remote Method Calling and Object-Group Communication
- ECOOP Workshop on Object-based Distributed Programming
, 1993
"... Many of today's object-oriented distributed toolkits focus on transactions to synchronize distributed applications. Transaction mechanisms are well suited for synchronizing data-oriented applications. In contrast, process-oriented applications like various kinds of failure tolerant client-serve ..."
Abstract - Cited by 2 (1 self)
Many of today's object-oriented distributed toolkits focus on transactions to synchronize distributed applications. Transaction mechanisms are well suited for synchronizing data-oriented applications. In contrast, process-oriented applications, such as various kinds of failure-tolerant client-server applications involving object replication, require a much finer degree of synchronization, realizable by virtual synchrony, atomic broadcast, and message-passing mechanisms. Toolkits offering these capabilities, for example Isis or Horus, are low-level and therefore difficult to program. In this position paper we present object-oriented abstractions based on virtual synchrony and atomic broadcast primitives to ease the development of failure-resilient client-server applications. Examples of how such abstractions are realized in a prototype toolkit called Electra illustrate the approach. Keywords: Failure Tolerance, Replicated Objects, Object-Group Communication, Object-Oriented Distributed ...
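The object-group idea, reduced to its simplest form: a client issues one invocation and the group mechanism delivers it to every replica. The sketch below is hypothetical Java and does not show Electra's actual API; in a real toolkit the invocation would be marshalled and sent with an atomic (totally ordered) broadcast so that all replicas apply the same calls in the same order, even across failures.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A local stand-in for object-group invocation: one call, delivered to all members.
public class ObjectGroup<T> {
    private final List<T> members = new CopyOnWriteArrayList<>();

    public void join(T replica) { members.add(replica); }

    // Synchronizing the loop stands in for the total ordering that an atomic
    // broadcast protocol would provide in a distributed setting.
    public synchronized void invoke(Consumer<T> call) {
        for (T replica : members) call.accept(replica);
    }
}

// Usage with a hypothetical replicated object:
//   ObjectGroup<Account> group = new ObjectGroup<>();
//   group.join(replicaOnNodeA); group.join(replicaOnNodeB);
//   group.invoke(account -> account.deposit(10));
```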