Results 1 - 5 of 5
Orca: A language for parallel programming of distributed systems - IEEE Transactions on Software Engineering, 1992
Abstract - Cited by 332 (46 self)
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goals to provide a simple, easy to use language that is type-secure and provides clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications. The measurements show that significant speedups can be obtained for all three applications. Finally, the paper compares Orca with several related languages and systems.
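The shared data-object model the abstract describes can be illustrated with a minimal sketch (Python used purely as illustration; the `SharedInt` type and its lock-based atomicity are hypothetical stand-ins for Orca's language-level objects, whose replication and migration the Orca runtime handles transparently):

```python
import threading

class SharedInt:
    """A toy analogue of an Orca data-object: a user-defined abstract data
    type whose operations are applied indivisibly to the shared instance."""
    def __init__(self, value=0):
        self._value = value
        # In Orca, operation atomicity is guaranteed by the language runtime;
        # here an explicit lock simulates it.
        self._lock = threading.Lock()

    def inc(self):
        with self._lock:
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

# Several "processes" (threads here) share one object instance.
counter = SharedInt()
workers = [threading.Thread(target=lambda: [counter.inc() for _ in range(1000)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.value())  # 4000
```

The key difference from this sketch is that an Orca implementation may place replicas of the object on several machines and keep them consistent, so reads can be local while the programmer still sees a single shared object.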
Models and Languages for Parallel Computation - ACM COMPUTING SURVEYS, 1998
Abstract - Cited by 168 (4 self)
We survey parallel programming models and languages using 6 criteria: they should be easy to program, have a software development methodology, be architecture-independent, be easy to understand, guarantee performance, and provide information about the cost of programs. ... We consider programming models in 6 categories, depending on the level of abstraction they provide.
Persistent Foundations for Scalable Multi-Paradigmal Systems, 1992
Abstract - Cited by 5 (0 self)
Problems with the inconsistent behaviour of system construction components for building large and long-lived application systems are identified. They make the programmer's task harder and the user's world more confusing in the same way that the disharmonies between programming languages and databases did. Persistent programming languages overcame those disharmonies. This paper challenges researchers to design and build a common substrate to the construction components. The construction components would be re-built using the substrate to achieve consistent behaviour. Application systems would then use this new family of construction components. The substrate, called the Scalable Persistent Foundation promises several advantages: consistent application system behaviour even when under stress, accelerated application system building and maintenance, genuine longevity of application systems and improved operational efficiency. The search for a design and implementation of this foundation w...
Distributed Software Engineering - Invited State-of-the-Art Report
Abstract - Cited by 1 (0 self)
The term "Distributed Software Engineering" is ambiguous. It includes both the engineering of distributed software and the process of distributed development of software, such as cooperative work. This paper concentrates on the former, giving an indication of the special needs and rewards in distributed computing. In essence, we argue that the structure of these systems as interacting components is a blessing which forces software engineers towards compositional techniques which offer the best hope for constructing scalable and evolvable systems in an incremental manner. We offer some guidance and recommendations as to the approaches which seem most appropriate, particularly in languages for distributed programming, specification and analysis techniques for modelling and distributed paradigms for guiding design. 1. Introduction Distributed processing provides the most general, flexible and promising approach for the provision of computer processing. Interconnected workstations a...
Supporting State-Sensitive Computation in a Dataflow System
Abstract
One well-known construct for managing shared resources in a parallel system is Hoare's monitors [4], which encapsulate the shared data, operations on it, and synchronization between operations. This paper describes managers, a version of monitors for the declarative language Id [8]. Like monitors, managers provide encapsulation; however, managers make two key improvements critical to parallel execution. First, operations on a shared resource have internal concurrency. This concurrency allows difficult locking issues encountered in monitors, such as nested monitors, recursive monitors, and the precise semantics of wait and signal, to be resolved programmatically without losing abstraction. Second, the implementation uses low overhead, non-busy-waiting locks for mutual exclusion. This low overhead increases the availability of shared resources, and encourages composite manager structures which reduce bottlenecks. The construct is described in detail, and our experience with applications ...
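For reference, the classical monitor discipline that managers generalize can be sketched as a bounded buffer (Python's `threading.Condition` standing in for Hoare's wait/signal; the `BoundedBuffer` class and its parameters are illustrative, not taken from the paper):

```python
import threading
from collections import deque

class BoundedBuffer:
    """A Hoare-style monitor: the shared data, its operations, and the
    synchronization between them all live behind one abstraction boundary."""
    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        self._cond = threading.Condition()  # one implicit monitor lock

    def put(self, item):
        with self._cond:                     # enter the monitor
            while len(self._items) >= self._capacity:
                self._cond.wait()            # wait: release lock and sleep
            self._items.append(item)
            self._cond.notify()              # signal a waiting consumer

    def get(self):
        with self._cond:
            while not self._items:
                self._cond.wait()
            item = self._items.popleft()
            self._cond.notify()              # signal a waiting producer
            return item

buf = BoundedBuffer(capacity=2)
results = []
consumer = threading.Thread(
    target=lambda: [results.append(buf.get()) for _ in range(5)])
consumer.start()
for i in range(5):
    buf.put(i)
consumer.join()
print(results)  # [0, 1, 2, 3, 4]
```

Note that the entire body of `put` and `get` runs under one monitor lock; managers relax exactly this restriction by allowing internal concurrency within operations on the shared resource.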