Orca: A language for parallel programming of distributed systems (1992)
Download Links
- [www.cs.vu.nl]
- [ftp.lip6.fr]
- [www.blivdatalog.dk]
- [www.diku.dk]
- [dare.ubvu.vu.nl]
- DBLP
Venue: IEEE Transactions on Software Engineering
Citations: 332 (46 self)
Citations
1059 | Implementing remote procedure calls.
- Birrell, Nelson
- 1984
Citation Context: ...ow to implement this broadcast primitive on top of LANs that only support unreliable broadcast. We will briefly compare this system with another implementation of Orca that uses Remote Procedure Call [8] rather than broadcasting. In Section 5, we will give performance measurements for several applications. In Section 6, we will compare our approach with those of related languages and systems. Finally...
957 | Memory coherence in shared virtual memory systems. ACM Transactions on Computer Systems.
- Li, Hudak
- 1989
Citation Context: ...ssors on which they run do not have physical shared memory. The main novelty of our approach is the way access to shared data is expressed. Unlike shared physical memory (or distributed shared memory [6]), shared data in Orca are accessed through user-defined high-level operations, which, as we will see, has many important implications. Supporting shared data on a distributed system imposes some chal...
567 | The notion of consistency and predicate locks in database systems.
- Eswaran, Gray, et al.
- 1976
Citation Context: ...visibly. Conceptually, each operation locks the entire object it is applied to, does the work, and releases the lock only when it is finished. To be more precise, the model guarantees serializability [21] of operation invocations: if two operations are applied simultaneously to the same data-object, then the result is as if one of them is executed before the other; the order of invocation, however, is...
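The locking discipline this context describes can be sketched in a few lines. This is a hypothetical Python illustration of per-object serializability, not Orca's actual runtime; the class and method names are invented for the sketch.

```python
import threading

class SharedObject:
    """Sketch of an Orca-style data-object: each operation conceptually
    locks the whole object, does its work, and releases the lock, so
    concurrent invocations are serializable (as if executed one at a time)."""
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self._value = value

    def operation(self, fn):
        # The entire user-defined operation runs under the object lock.
        with self._lock:
            self._value = fn(self._value)
            return self._value

obj = SharedObject()
threads = [threading.Thread(
               target=lambda: [obj.operation(lambda v: v + 1)
                               for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(obj.operation(lambda v: v))  # 4000: increments never interleave
```

Without the lock, the 4 x 1000 increments could interleave and lose updates; with it, the result is always as if the operations ran in some serial order.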
566 | Monitors: An Operating System Structuring Concept.
- Hoare
- 1974
Citation Context: ... shared data-objects. By replicating objects, access control to shared objects is decentralized, which decreases access costs and increases parallelism. This is a major difference with, say, monitors [19], which centralize control to shared data. 2.4. Synchronization: An abstract data type in Orca can be used for creating shared as well as local objects. For objects that are shared among multiple proce...
546 | Reliable communication in the presence of failures.
- Birman, Joseph
- 1987
Citation Context: ...int packet from the sender to the sequencer and one broadcast packet from the sequencer to everyone. A comparison between our protocol and other well-known protocols (e.g., those of Birman and Joseph [28], Garcia-Molina and Spauster [29], and several others) is given in [30]. 4.4. Comparison with an RPC-based Protocol: Above, we have described one implementation of Orca, based on full replicatio...
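The sequencer scheme this context mentions (one point-to-point packet in, one broadcast packet out) can be sketched as a small simulation. This is a hypothetical illustration of the ordering idea only; the class and names are invented, and real network delivery is replaced by appending to per-receiver lists.

```python
# Sketch of sequencer-based totally ordered broadcast: a sender makes one
# point-to-point send to the sequencer, which stamps a global sequence
# number and broadcasts the stamped message to every receiver.

class Sequencer:
    def __init__(self, receivers):
        self.next_seq = 0
        self.receivers = receivers

    def handle(self, msg):
        # One point-to-point packet arrives here; one (simulated)
        # broadcast packet goes out, carrying the sequence number.
        stamped = (self.next_seq, msg)
        self.next_seq += 1
        for r in self.receivers:
            r.append(stamped)

receivers = [[] for _ in range(3)]
seq = Sequencer(receivers)
for m in ["write x", "write y", "write z"]:
    seq.handle(m)

# Every receiver sees every message in the same global order.
assert receivers[0] == receivers[1] == receivers[2]
print([s for s, _ in receivers[0]])  # [0, 1, 2]
```

Because all messages pass through one sequence-number source, receivers can detect gaps and agree on a single total order, which is what makes replicated write operations consistent.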
545 | Fine-grained mobility in the Emerald system.
- Jul, Levy, et al.
- 1988
Citation Context: .... The operations allowed on Tuple Space are low-level and built-in, which, as we will argue later, complicates programming and makes an efficient distributed implementation hard. The Emerald language [15] is related to the DSM class, in that it provides a shared name space for objects, together with a location-transparent invocation mechanism. Emerald does not use any of the replication techniques tha...
468 | Ethernet: Distributed packet switching for local computer networks.
- Metcalfe, Boggs
- 1976
Citation Context: ...ed with the broadcast protocol described earlier. The implementation runs on a distributed system containing 16 MC68030 CPUs (running at 16 MHz) connected to each other through a 10 Mbit/s Ethernet [31]. The implementation uses Ethernet multicast communication to broadcast a message to a group of processors. All processors are on one Ethernet and are connected to it by Lance chip interfaces. The per...
388 | Reliable Broadcast Protocols.
- Chang, Maxemchuk
- 1984
Citation Context: ...d because Method 1 steals fewer computing cycles from the Orca application to handle interrupts. In philosophy, the protocol described above somewhat resembles the one described by Chang and Maxemchuk [27], but they differ in some major aspects. With our protocol, messages can be delivered to the user as soon as one (special) node has acknowledged the message. In addition, fewer control messages are ne...
300 | Munin: Distributed Shared Memory Based on Type-Specific Memory Coherence.
- Bennett, Carter, et al.
- 1990
Citation Context: ... Orca programmers should not have to worry about consistency. (In the future, we may investigate whether a compiler is able to relax the consistency transparently, much as is done in the Munin system [39].) A second important difference between Orca and SVM is the granularity of the shared data. In SVM, the granularity is the page size, which is fixed (e.g. 4K). In Orca, the granularity is the object,...
269 | Distributed Programming in Argus.
- Liskov
- 1988
Citation Context: ...r in a fork statement are shared. All other objects are local and are treated as normal variables of an abstract data type. Most other languages use different mechanisms for these two purposes. Argus [16], for example, uses clusters for local data and guardians for shared data; clusters and guardians are completely different. SR [17] provides a single mechanism (resources), but the overhead of operati...
235 | Programming languages for distributed computing systems. - Bal, Steiner, et al. - 1989 |
231 | The Amber system: Parallel programming on a network of multiprocessors.
- Chase, Amador, et al.
- 1989
Citation Context: ...t-based languages), Linda's Tuple Space, and Shared Virtual Memory. Objects: Objects are used in many object-based languages for parallel or distributed programming, such as Emerald [15], Amber [34], and ALPS [35]. Objects in such languages typically have two parts: 1. Encapsulated data. 2. A manager process that controls access to the data. The data are accessed by sending a message to the mana...
229 | Experiences with the Amoeba distributed operating system.
- Tanenbaum, van Renesse, et al.
- 1990
Citation Context: ...become more and more attractive for running parallel applications. In the Amoeba system, for example, the cost of sending a short message between Sun workstations over an Ethernet is 1.1 milliseconds [1]. Although this is still slower than communication in most multicomputers (e.g., hypercubes and transputer grids), it is fast enough for many coarse-grained parallel applications. In return, distribut...
228 | A shared virtual memory system for parallel computing.
- Li
- 1988
Citation Context: ...ilable. Applications described in the literature include: a distributed speech recognition system [10]; linear equation solving, three-dimensional partial differential equations, and split-merge sort [11]; computer chess [12]; distributed system services (e.g., name service, time service), global scheduling, and replicated files [13]. So, the difficulty in providing (logically) shared data makes messa...
178 | Concepts and Notations for Concurrent Programming.
- Andrews, Schneider
- 1983
Citation Context: ... objects. For objects that are shared among multiple processes, the issue of synchronization arises. Two types of synchronization exist: mutual exclusion synchronization and condition synchronization [20]. We will look at them in turn. Mutual exclusion synchronization: Mutual exclusion in our model is done implicitly, by executing all operations on objects indivisibly. Conceptually, each operation lock...
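The two kinds of synchronization named in this context can be sketched together. This is a hypothetical Python illustration, not Orca's guard syntax: the lock-protected method gives mutual exclusion, and `wait_for` plays the role of a guard that suspends an operation until a boolean condition over the object's state holds.

```python
import threading

class Counter:
    """Sketch: mutual exclusion via the condition's lock, condition
    synchronization via a guard expression over the object's state."""
    def __init__(self):
        self._cond = threading.Condition()
        self.value = 0

    def incr(self):
        with self._cond:            # mutual exclusion synchronization
            self.value += 1
            self._cond.notify_all()

    def await_at_least(self, n):
        with self._cond:
            # condition synchronization: block until the guard holds
            self._cond.wait_for(lambda: self.value >= n)
            return self.value

c = Counter()
t = threading.Thread(target=lambda: [c.incr() for _ in range(5)])
t.start()
print(c.await_at_least(5) >= 5)  # True
t.join()
```

The caller of `await_at_least` never busy-waits: it sleeps on the condition variable and is woken only when an `incr` may have made the guard true.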
158 | Group communication in the Amoeba distributed operating system - Kaashoek, Tanenbaum - 1991
153 | An assessment of the programming language PASCAL.
- Wirth
- 1975
Citation Context: ... unique name for it, chosen by the run time system. The run time system also automatically allocates memory for the new node. In this sense, addnode is similar to the standard procedure new in Pascal [24]. As a crucial difference between the two primitives, however, the addnode construct specifies the data structure for which the new block of memory is intended. Unlike in Pascal, the run time system o...
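The difference between addnode and Pascal's new that this context draws can be sketched as follows. This is a hypothetical Python illustration; the `Graph` class and its method are invented for the sketch and are not Orca's actual runtime interface.

```python
# Sketch: unlike Pascal's new, which hands back an anonymous block of
# memory, addnode allocates a node *within* a named data structure, so
# the run time system knows which graph the new block belongs to.

class Graph:
    def __init__(self):
        self._nodes = {}
        self._next = 0

    def addnode(self):
        name = self._next           # unique node name chosen by the RTS
        self._next += 1
        self._nodes[name] = {}      # memory allocated for the new node
        return name

g = Graph()
a, b = g.addnode(), g.addnode()
print(a != b)  # True: each node gets a distinct run-time-chosen name
```

Because allocation is tied to a specific graph, a runtime could (for example) place or replicate all of that graph's nodes together, which an untyped allocator cannot do.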
149 | An efficient Reliable Broadcast Protocol.
- Kaashoek, Tanenbaum, et al.
- 1989
Citation Context: ... to two processors is 2.6 msec. With 16 receivers, a multicast takes 2.7 msec. This high performance is due to the fact that... (Footnote 4: In an earlier implementation of the protocol [32] the delay was 1.4 msec. The difference is entirely due to a new routing protocol on which the group communication protocol is implemented. (The Amoeba kernel can now deal with different kinds of netw...)
93 | An overview of the SR language and implementation. - Andrews, Olsson, et al. - 1988 |
84 | Distributed programming with shared data.
- Bal, Tanenbaum
- 1988
Citation Context: ... a sequential base language have many disadvantages, making them complicated for applications programmers to use. Since then, we have developed a new language for distributed programming, called Orca [3, 4, 5]. Orca is intended for distributed applications programming rather than systems programming, and is therefore designed to be a simple, expressive, and efficient language with clean semantics. Below, w...
79 | Linda and friends.
- Ahuja, Carriero, et al.
- 1986
Citation Context: ...o be replicated. SVM provides a clean, simple model, but unfortunately there are many problems in implementing it efficiently. A few existing programming languages also fall into the DSM class. Linda [14] supports a globally shared Tuple Space, which processes can access using a form of associative addressing. On distributed systems, Tuple Space can be replicated or partitioned, much as pages in SVM a...
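Linda's Tuple Space model, which this context contrasts with Orca, can be sketched with its two basic built-in operations. This is a hypothetical single-process illustration; a real Tuple Space is shared across processes and its `in` blocks until a match exists.

```python
# Sketch of Linda-style Tuple Space: out inserts a tuple, in_ removes a
# tuple by associative matching (None fields act as wildcards).

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tup)

    def in_(self, pattern):
        # Associative addressing: match field-by-field, wildcards allowed.
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                self.tuples.remove(tup)
                return tup
        return None  # a real 'in' would block here instead

ts = TupleSpace()
ts.out(("count", 42))
print(ts.in_(("count", None)))  # ('count', 42)
```

The sketch shows why the paper calls these operations low-level: each `in`/`out` touches one tuple indivisibly, but there is no way to compose several of them into one larger indivisible operation, which Orca's user-defined operations provide.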
76 | Slow Memory: Weakening Consistency to Enhance Concurrency on Distributed Shared Memories.
- Hutto, Ahamad
- 1990
Citation Context: ...wever, invalidating copies will be far less efficient than updating them. Several researchers have tried to solve this performance problem by relaxing the consistency constraints of the memory (e.g., [37, 38]). Although these weakly consistent memory models may have better performance, we fear that they also ruin the ease of programming for which DSM was designed in the first place. Since Orca is intended...
65 | Programming Language Concepts.
- Ghezzi, Jazayeri
- 1998
Citation Context: ...age features or adding compiler optimizations. In general, we prefer the latter option. We will discuss several examples of this design principle in the paper. Finally, the principle of orthogonality [7] is used with care, but it is not a design goal by itself. Another issue we have taken into account is that of debugging. As debugging of distributed programs is difficult, one needs all the hel...
43 | Experience with Distributed Programming in Orca.
- Bal, Kaashoek, et al.
- 1990
Citation Context: ... a sequential base language have many disadvantages, making them complicated for applications programmers to use. Since then, we have developed a new language for distributed programming, called Orca [3, 4, 5]. Orca is intended for distributed applications programming rather than systems programming, and is therefore designed to be a simple, expressive, and efficient language with clean semantics. Below, w...
36 | Low cost management of replicated data in fault-tolerant distributed systems.
- Joseph, Birman
- 1986
Citation Context: ... all workers. ...based on replication and reliable broadcasting. We will briefly discuss a second implementation in Section 4.4. Replication of data is used in several fault-tolerant systems (e.g., ISIS [25]) to increase the availability of data in the presence of processor failures. Orca, in contrast, is not intended for fault-tolerant applications. In our implementation, replication is used to decrease...
36 | Message ordering in a multicast environment.
- Garcia-Molina, Spauster
- 1989
Citation Context: ... sequencer and one broadcast packet from the sequencer to everyone. A comparison between our protocol and other well-known protocols (e.g., those of Birman and Joseph [28], Garcia-Molina and Spauster [29], and several others) is given in [30]. 4.4. Comparison with an RPC-based Protocol: Above, we have described one implementation of Orca, based on full replication of objects and on a distributed...
34 | Replication Techniques for Speeding up Parallel Applications on Distributed Systems. Concurrency Practice and Experience - Bal, Kaashoek, et al. - 1992 |
27 | Reducing host load, network load, and latency in a distributed shared memory.
- Minnich, Farber
- 1990
Citation Context: ...wever, invalidating copies will be far less efficient than updating them. Several researchers have tried to solve this performance problem by relaxing the consistency constraints of the memory (e.g., [37, 38]). Although these weakly consistent memory models may have better performance, we fear that they also ruin the ease of programming for which DSM was designed in the first place. Since Orca is intended...
27 | A Comparison of Two Paradigms for Distributed Shared Memory - Levelt, Kaashoek, et al. - 1992
22 | Programming Languages for Distributed Computing Systems.
- Bal, Steiner, et al.
- 1989
Citation Context: ...by a discussion of hierarchically used objects. Finally, we will look at Orca's data structures. 2.1. Distributed Shared Memory: Most languages for distributed programming are based on message passing [9]. This choice seems obvious, since the underlying hardware already supports message passing. Still, there are many cases in which message passing is not the appropriate programming model. Message pass...
20 | Parallel Programming in a Virtual Object Space.
- Lucco
- 1987
Citation Context: ...nd flexible: single operations are indivisible; sequences of operations are not. The model does not provide mutual exclusion at a granularity lower than the object level. Other languages (e.g., Sloop [22]) give programmers more accurate control over mutual exclusion synchronization. Our model does not support indivisible operations on a collection of objects. Operations on multiple objects require a d...
20 | All Pairs Shortest Paths on a Hypercube Multiprocessor.
- Jenq, Sahni
- 1987
Citation Context: ...problem (ASP). In this problem it is desired to find the length of the shortest path from any node i to any other node j in a given graph. The parallel algorithm we use is similar to the one given in [33], which is a parallel version of Floyd's algorithm. The distances between the nodes are represented in a matrix. Each processor computes part of the result matrix. The algorithm requires a nontrivial ...
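The sequential core of the ASP algorithm this context describes is Floyd's all-pairs shortest paths. The sketch below is the sequential version; the comments note where the parallel variant would partition rows over processors and broadcast the pivot row, as a hypothetical reading of the parallelization, not the paper's exact code.

```python
# Floyd's algorithm on a distance matrix; in the parallel ASP version each
# processor owns a band of rows, and the pivot row k is broadcast to all
# processors at the start of iteration k.

INF = float("inf")

def floyd(dist):
    n = len(dist)
    for k in range(n):               # pivot row: broadcast in parallel ASP
        row_k = dist[k]
        for i in range(n):           # rows: partitioned over processors
            dik = dist[i][k]
            for j in range(n):
                if dik + row_k[j] < dist[i][j]:
                    dist[i][j] = dik + row_k[j]
    return dist

graph = [[0, 3, INF],
         [3, 0, 1],
         [INF, 1, 0]]
print(floyd(graph)[0][2])  # 4: path 0 -> 1 -> 2
```

Only the pivot row is shared between processors each iteration, which is why the algorithm maps naturally onto a shared data-object holding the current pivot row.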
19 | Implementing Distributed Algorithms Using Remote Procedure Calls.
- Bal, van Renesse, et al.
- 1987
Citation Context: ...tarted out by implementing several coarse-grained parallel applications on top of the Amoeba system, using an existing sequential language extended with message passing for interprocess communication [2]. We felt that, for parallel applications, both the use of message passing and a sequential base language have many disadvantages, making them complicated for applications programmers to use. Since th...
15 | Replication Techniques for Speeding up Parallel Applications.
- Bal, Kaashoek, et al.
- 1989
Citation Context: ...here to replicate objects, how to synchronize write operations to replicated objects, and whether to update or invalidate copies after a write operation. We have looked at many alternative strategies [26]. The RTS described in this paper uses full replication of objects, updates replicas by applying write operations to all replicas, and implements mutual exclusion synchronization through a distributed...
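The full-replication, write-update strategy this context describes can be sketched in miniature. This is a hypothetical illustration of the policy only (local reads, writes applied to every replica); the class is invented for the sketch and omits the ordering protocol that keeps real replicas consistent.

```python
# Sketch: full replication with a write-update (not invalidate) policy.
# Reads go to the local copy; a write operation is applied to all replicas.

class ReplicatedObject:
    def __init__(self, n_replicas, value=0):
        self.replicas = [value] * n_replicas

    def read(self, cpu):
        return self.replicas[cpu]        # local read, no communication

    def write(self, fn):
        # update protocol: apply the write operation at every replica
        for i in range(len(self.replicas)):
            self.replicas[i] = fn(self.replicas[i])

obj = ReplicatedObject(4)
obj.write(lambda v: v + 10)
print(obj.read(2))  # 10: every replica saw the same update
```

Updating (rather than invalidating) keeps reads cheap everywhere, at the cost of sending each write operation to all replicas, which is exactly the trade-off the broadcast-based RTS is built around.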
15 | Experience with the Distributed Data Structure Paradigm.
- Kaashoek, Bal, et al.
- 1989
Citation Context: ...the programmer. In essence, Tuple Space supports a fixed number of built-in operations that are executed indivisibly, but its support for building more complex indivisible operations is too low-level [36]. In Orca, on the other hand, programmers can define operations of arbitrary complexity on shared data structures; all these operations are executed indivisibly, so mutual exclusion synchronization is...
13 | Architectural Support for Multilanguage Parallel Programming on Heterogeneous Systems.
- Bisiani, Forin
- 1987
Citation Context: ...thms that would greatly benefit from support for shared data, even if no physical shared memory is available. Applications described in the literature include: a distributed speech recognition system [10]; linear equation solving, three-dimensional partial differential equations, and split-merge sort [11]; computer chess [12]; distributed system services (e.g., name service, time service), global sche...
12 | A Highly Parallel Chess Program.
- Felten, Otto
- 1988
Citation Context: ...described in the literature include: a distributed speech recognition system [10]; linear equation solving, three-dimensional partial differential equations, and split-merge sort [11]; computer chess [12]; distributed system services (e.g., name service, time service), global scheduling, and replicated files [13]. So, the difficulty in providing (logically) shared data makes message passing a poor mat...
12 | Preliminary thoughts on problem-oriented shared memory: A decentralized approach to distributed systems. Operating Systems Review.
- Cheriton
- 1985
Citation Context: ...ree-dimensional partial differential equations, and split-merge sort [11]; computer chess [12]; distributed system services (e.g., name service, time service), global scheduling, and replicated files [13]. So, the difficulty in providing (logically) shared data makes message passing a poor match for many applications. Several researchers have therefore worked on communication models based on logically...
7 | Programming Distributed Systems, Silicon Press.
- Bal
- 1990
Citation Context: ... a sequential base language have many disadvantages, making them complicated for applications programmers to use. Since then, we have developed a new language for distributed programming, called Orca [3, 4, 5]. Orca is intended for distributed applications programming rather than systems programming, and is therefore designed to be a simple, expressive, and efficient language with clean semantics. Below, w...
5 | Preserving Abstraction in Concurrent Programming.
- Cooper, Hamilton
- 1988
Citation Context: ...oking at the implementation of an operation to see how it may be used. Cooper and Hamilton have observed similar conflicts between parallel programming and data abstraction in the context of monitors [23]. They propose extending operation specifications with information about their implementation, such as whether or not the operation suspends or has any side effects. We feel it is not very elegant to ...
2 | Synchronization and scheduling in ALPS objects - Vishnubhotla - 1988
1 | An Evaluation of the SR Language Design, report IR-219, Vrije Universiteit.
- Bal
- 1990
Citation Context: ...rs and guardians are completely different. SR [17] provides a single mechanism (resources), but the overhead of operations on resources is far too high to be useful for sequential abstract data types [18]. The fact that shared data are accessed through user-defined operations is an important distinction between our model and other models. Shared virtual memory, for example, simulates physical shared m...
1 | Synchronization and Scheduling in ALPS Objects.
- Vishnubhotla
- 1988
Citation Context: ...es), Linda's Tuple Space, and Shared Virtual Memory. Objects: Objects are used in many object-based languages for parallel or distributed programming, such as Emerald [15], Amber [34], and ALPS [35]. Objects in such languages typically have two parts: 1. Encapsulated data. 2. A manager process that controls access to the data. The data are accessed by sending a message to the manager process, as...