Results 1 - 10 of 114
Object Structure in the Emerald System
- OOPSLA '86 Conference Proceedings, 29 September 1986
"... Emerald is an object.based language for the construction of distributed applications. The principal features of Emerald lnehtde a uniform object model appropriate for programming both private local objects and shared remote objects, and a type system that permits multiple user.defined and compiler-d ..."
Cited by 167 (12 self)
Emerald is an object-based language for the construction of distributed applications. The principal features of Emerald include a uniform object model appropriate for programming both private local objects and shared remote objects, and a type system that permits multiple user-defined and compiler-defined implementations. Emerald objects are fully mobile and can move from node to node within the network, even during an invocation. This paper discusses the structure, programming, and implementation of Emerald objects, and Emerald's use of abstract types.
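As a rough illustration of what the abstract's uniform object model means for the caller, the Java interfaces below sketch location-independent invocation and explicit mobility. Emerald is its own language, so Node, Mailbox, MobilityControl, and moveTo are hypothetical names for this sketch, not Emerald's API.

// Hypothetical sketch (not Emerald syntax): with a uniform object model the
// caller writes the same invocation whether the mailbox lives on this node or
// a remote one, and the runtime may migrate the object between nodes.
interface Node { String name(); }

interface Mailbox {
    void deposit(String message);   // may be a local call or a remote invocation
    String withdraw();
}

interface MobilityControl {
    void moveTo(Node destination);  // object state migrates; references stay valid
}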
A semantics for concurrent separation logic
- Theoretical Computer Science, 2004
"... Abstract. We present a denotational semantics based on action traces, for parallel programs which share mutable data and synchronize using resources and conditional critical regions. We introduce a resource-sensitive logic for partial correctness, adapting separation logic to the concurrent setting, ..."
Cited by 108 (1 self)
We present a denotational semantics based on action traces for parallel programs which share mutable data and synchronize using resources and conditional critical regions. We introduce a resource-sensitive logic for partial correctness, adapting separation logic to the concurrent setting, as proposed by O’Hearn. The logic allows program proofs in which “ownership” of a piece of state is deemed to transfer dynamically between processes and resources. We prove soundness of this logic, using a novel “local” interpretation of traces, and we show that every provable program is race-free.
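The "ownership" reading the abstract mentions is easiest to see in the parallel composition rule of concurrent separation logic. The rule below is the standard form due to O'Hearn, written here in LaTeX; it is not a formula quoted from this paper, whose resource-sensitive rules are more general.

% Parallel composition rule of concurrent separation logic (O'Hearn's form);
% the paper proves soundness for a logic of this kind via action-trace semantics.
\[
\frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2\; \{Q_1 * Q_2\}}
\]
% Side condition: no variable free in P_i or Q_i is modified by C_j for j != i.
% The separating conjunction * says the two threads own disjoint portions of state,
% which is what makes the per-thread proofs compose without interference.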
Guava: A Dialect of Java without Data Races
- In Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 2000
"... We introduce Guava, a dialect of Java whose rules statically guarantee that parallel threads access shared data only through synchronized methods. Our dialect distinguishes three categories of classes: (1) monitors, which may be referenced from multiple threads, but whose methods are accessed serial ..."
Cited by 64 (4 self)
We introduce Guava, a dialect of Java whose rules statically guarantee that parallel threads access shared data only through synchronized methods. Our dialect distinguishes three categories of classes: (1) monitors, which may be referenced from multiple threads, but whose methods are accessed serially; (2) values, which cannot be referenced and therefore are never shared; and (3) objects, which can have multiple references but only from within one thread, and therefore do not need to be synchronized. Guava circumvents the problems associated with today's Java memory model, which must define behavior when concurrent threads access shared memory without synchronization.
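The three categories can be sketched in plain Java. This is only an illustration of the distinction: Guava enforces the corresponding rules statically as part of its dialect, whereas here they are mere conventions, and the class names are invented for the example.

// Plain-Java sketch of Guava's three categories (illustrative only).

// (1) A monitor: may be referenced from many threads; every method is
//     synchronized, so its state is only ever accessed serially.
class Counter {
    private int count;
    synchronized void increment() { count++; }
    synchronized int get() { return count; }
}

// (2) A value: never shared by reference; approximated here by an
//     immutable class that is copied rather than aliased across threads.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

// (3) An object: freely referenced, but only from within one thread,
//     so no synchronization is needed.
class Buffer {
    private final StringBuilder data = new StringBuilder();
    void append(String s) { data.append(s); }
}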
Distributed Processes: A Concurrent Programming Concept
- Communications of the ACM, 1978
"... A language concept for concurrent processes without common variables is introduced. These processes communicate and synchronize by means of procedure calls and guarded regions. This concept is proposed for real-time applications controlled by microcomputer networks with distributed storage. The pap ..."
Cited by 63 (0 self)
A language concept for concurrent processes without common variables is introduced. These processes communicate and synchronize by means of procedure calls and guarded regions. This concept is proposed for real-time applications controlled by microcomputer networks with distributed storage. The paper gives several examples of distributed processes and shows that they include procedures, coroutines, classes, monitors, processes, semaphores, buffers, path expressions, and input/output as special cases.
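The combination of procedure calls and guarded regions can be approximated in Java with a monitor-style class. Distributed Processes has its own notation, so the one-slot buffer below is only an analogy, with wait/notify standing in for guarded regions that delay a caller until a condition holds.

// Illustrative Java approximation of a DP-style process: external "procedure
// calls" enter the process one at a time, and a guarded region delays the
// caller until its guard (condition on the process's private state) holds.
class BufferProcess {
    private String slot;   // one-slot buffer, the process's private variable

    // proc send(msg)  when slot is empty: slot := msg
    synchronized void send(String msg) throws InterruptedException {
        while (slot != null) wait();      // guarded region: when slot is empty
        slot = msg;
        notifyAll();
    }

    // proc receive()  when slot is full: return slot; slot := empty
    synchronized String receive() throws InterruptedException {
        while (slot == null) wait();      // guarded region: when slot is full
        String msg = slot;
        slot = null;
        notifyAll();
        return msg;
    }
}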
Java’s insecure parallelism
- ACM SIGPLAN Notices, 1999
"... Abstract: The author examines the synchronization features of Java and finds that they are insecure variants of his earliest ideas in parallel programming published in 1972-73. The claim that Java supports monitors is shown to be false. The author concludes that Java ignores the last twenty-five yea ..."
Cited by 51 (0 self)
Abstract: The author examines the synchronization features of Java and finds that they are insecure variants of his earliest ideas in parallel programming published in 1972-73. The claim that Java supports monitors is shown to be false. The author concludes that Java ignores the last twenty-five years of research in parallel programming languages.
Keywords: programming languages; parallel programming; monitors; security; Java.
We must expect posterity to view with some asperity the marvels and the wonders we're passing on to it; but it should change its attitude to one of heartfelt gratitude when thinking of the blunders we didn't quite commit. (Piet Hein, 1966)
1. PLATFORM-INDEPENDENT PARALLEL PROGRAMMING
Java has resurrected the well-known idea of platform-independent parallel programming. In this paper I examine the synchronization features of Java to discover their origin and determine if they live up to the standards set by the invention of monitors and Concurrent Pascal a quarter of a century ago. In the 1970s my students and I demonstrated that it is possible to write nontrivial parallel programs exclusively in a secure language that supports monitors. The milestones of this work were:
• The idea of associating explicit queues with monitors [Brinch Hansen 1972].
• A class notation for monitors [Brinch Hansen 1973].
• A monitor language, Concurrent Pascal [Brinch Hansen 1975a].
• A portable compiler that generated platform-independent parallel code [Hartmann 1975].
• A portable interpreter that ran platform-independent parallel code on a wide variety of computers [Brinch Hansen 1975b].
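The monitor discipline these milestones describe, with explicit condition queues, can be contrasted with Java's single implicit wait set per object. The sketch below approximates explicit queues using java.util.concurrent.locks, a facility added to Java well after this 1999 paper was written, so it illustrates the idea rather than anything the paper itself proposes.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a monitor with one explicit queue per condition, in the spirit of
// Brinch Hansen's 1972 proposal, approximated with java.util.concurrent.locks.
class OneSlotMonitor {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();  // explicit queue
    private final Condition notEmpty = lock.newCondition();  // explicit queue
    private Object slot;

    void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (slot != null) notFull.await();   // wait on a named queue
            slot = x;
            notEmpty.signal();                      // wake exactly the waiters that care
        } finally {
            lock.unlock();
        }
    }

    Object take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) notEmpty.await();
            Object x = slot;
            slot = null;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}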
Experience with distributed programming in Orca
- In Proc. IEEE CS International Conference on Computer Languages, 1990
"... Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, data types and create instances (objects) of these types, which may be shared among processes. All op ..."
Cited by 43 (10 self)
Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, Orca lets programmers define abstract data types and create instances (objects) of these types, which may be shared among processes. All operations on shared objects are executed atomically. Orca’s shared objects are implemented by replicating them in the local memories of the processors. Read operations use the local copies of the object, without doing any interprocess communication. Write operations update all copies using an efficient reliable broadcast protocol. In this paper, we briefly describe the language and its implementation and then report on our experiences in using Orca for three parallel applications: the Traveling Salesman Problem, the All-pairs Shortest Paths problem, and Successive Overrelaxation. These applications have different needs for shared data: TSP greatly benefits from the support for shared data; ASP benefits from the use of broadcast communication, even though it is hidden in the implementation; SOR merely requires point-to-point communication, but still can be implemented in the language by simulating message passing.
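The read-local/write-broadcast strategy described above can be sketched as follows. The ReplicatedObject class and its fields are illustrative stand-ins, and the loop over replicas abstracts away the reliable, ordered broadcast protocol the real implementation uses.

import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

// Sketch of the replication strategy in the abstract: each processor keeps a
// replica of the shared object; reads use only the local replica, writes are
// applied to every replica (in the real system, via reliable ordered broadcast,
// so all copies apply writes in the same order and stay identical).
class ReplicatedObject<S> {
    private final S localReplica;
    private final List<S> allReplicas;   // assumed to include the local replica

    ReplicatedObject(S localReplica, List<S> allReplicas) {
        this.localReplica = localReplica;
        this.allReplicas = allReplicas;
    }

    // Read operation: no interprocess communication at all.
    <R> R read(Function<S, R> op) {
        return op.apply(localReplica);
    }

    // Write operation: delivered to every replica.
    void write(Consumer<S> op) {
        for (S replica : allReplicas) {
            op.accept(replica);
        }
    }
}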
Location Consistency - a New Memory Model and Cache Consistency Protocol
- IEEE Transactions on Computers, 1998
"... Existing memory models and cache consistency protocols assume the memory coherence property which requires that all processors observe the same ordering of write operations to the same location. In this paper, we address the problem of defining a memory model that does not rely on the memory cohere ..."
Cited by 39 (5 self)
Existing memory models and cache consistency protocols assume the memory coherence property which requires that all processors observe the same ordering of write operations to the same location. In this paper, we address the problem of defining a memory model that does not rely on the memory coherence assumption, and also the problem of designing a cache consistency protocol based on such a memory model. We define a new memory consistency model, called Location Consistency (LC), in which the state of a memory location is modeled as a partially ordered multiset (pomset) of write and synchronization operations. We prove that LC is strictly weaker than existing memory models, but is still equivalent to stronger models for parallel programs that have no data races. We also introduce a new multiprocessor cache consistency protocol based on the LC memory model. We prove that this LC protocol obeys the LC memory model. The LC protocol does not need to enforce single write ownership of memory...
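The claim that LC coincides with stronger models for race-free programs can be made concrete with a small Java fragment: only unsynchronized conflicting accesses like the racy pair below are the kind on which weak and strong models can disagree. This is an illustration of data-race freedom in general, not of the LC protocol itself.

// Illustrative fragment: under a weak model, processors need not agree on the
// order of unsynchronized writes to the same location, so the racy accesses
// have no single guaranteed outcome; the lock-protected accesses make the
// program data-race free, where LC and stronger models give the same results.
class RacyVsSynchronized {
    static int x;                               // shared location, racy access
    static int y;                               // shared location, lock-protected
    static final Object lock = new Object();

    static void racyWriter(int v) { x = v; }                     // data race
    static int  racyReader()      { return x; }                  // data race

    static void safeWriter(int v) { synchronized (lock) { y = v; } }
    static int  safeReader()      { synchronized (lock) { return y; } }
}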
Orca: A Language For Distributed Programming
- ACM SIGPLAN Notices, 1990
"... We present a simple model of shared data-objects, which extends the abstract data type model to support distributed programming. Our model essentially provides shared address space semantics, rather than message passing semantics, without requiring physical shared memory to be present in the tar ..."
Cited by 38 (0 self)
We present a simple model of shared data-objects, which extends the abstract data type model to support distributed programming. Our model essentially provides shared address space semantics, rather than message passing semantics, without requiring physical shared memory to be present in the target system. We also propose a new programming language, Orca, based on shared data-objects. A compiler and three different run time systems for Orca exist, which have been in use for over a year now.
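In Java terms, a shared data-object is an abstract data type whose operations execute atomically and whose single logical instance is held by several processes. The JobQueue and Worker classes below are an illustrative analogy under that reading, not Orca syntax.

import java.util.ArrayDeque;

// An abstract data type playing the role of a shared data-object: its
// operations are atomic with respect to each other, and every worker
// process holds a reference to the same logical instance.
class JobQueue {
    private final ArrayDeque<String> jobs = new ArrayDeque<>();

    synchronized void addJob(String job) { jobs.add(job); }       // atomic operation
    synchronized String getJob()         { return jobs.poll(); }  // atomic operation
}

class Worker extends Thread {
    private final JobQueue queue;   // shared object: same instance in every worker
    Worker(JobQueue queue) { this.queue = queue; }

    public void run() {
        String job;
        while ((job = queue.getJob()) != null) {
            // process the job ...
        }
    }
}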