Results 1–10 of 12
Embedding as a tool for Language Comparison
, 1994
Abstract

Cited by 32 (5 self)
This paper addresses the problem of defining a formal tool to compare the expressive power of different concurrent constraint languages. We refine the notion of embedding by adding some "reasonable" conditions suitable for concurrent frameworks. The new notion, called modular embedding, is used to define a preorder among these languages, representing different degrees of expressiveness. We show that this preorder is not trivial (i.e. it does not collapse into one equivalence class) by proving that Flat CP cannot be embedded into Flat GHC, and that Flat GHC cannot be embedded into a language without communication primitives in the guards, while the converses hold.
Towards a Hierarchy of Negative Test Operators for Generative Communication
, 1998
Abstract

Cited by 10 (3 self)
Generative communication is a coordination paradigm that permits interprocess communication via the introduction and consumption of data to and from a shared common data space. We call negative test operators those coordination primitives able to test the absence of data in the common data space. In this paper we investigate the expressive power of this family of operators. To this aim, we concentrate on three possible primitives differing in their ability to instantaneously produce new data after the test: tfa(a) tests the absence of data of kind a; t&e(a) instantaneously produces a new occurrence of datum a after having tested that no other occurrences are available; t&p(a, b) atomically tests the absence of data a and produces one instance of datum b. We prove the existence of a strict hierarchy of expressiveness among these operators.

1 Introduction

Many coordination languages allow interprocess communication via a shared data space, sometimes called a Tuple Space as in Linda [12], C...
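The three negative test operators summarised in this abstract can be sketched over a multiset-based dataspace. This is a minimal single-threaded illustration only; the class and method names (`DataSpace`, `out`, `t_and_e`, `t_and_p`) are assumptions for the sketch, not notation from the paper, and real implementations must make the test-and-produce steps atomic under concurrency.

```python
# Minimal sketch of a shared dataspace (a multiset of data) with the three
# negative test operators: tfa, t&e, t&p. Single-threaded illustration only.
from collections import Counter

class DataSpace:
    def __init__(self):
        self.data = Counter()  # multiset: datum -> number of occurrences

    def out(self, a):
        """Introduce one occurrence of datum a (Linda-style output)."""
        self.data[a] += 1

    def tfa(self, a):
        """Test for absence: succeeds iff no occurrence of a is present."""
        return self.data[a] == 0

    def t_and_e(self, a):
        """If no occurrence of a is present, produce one occurrence of a
        in the same step and succeed; otherwise fail."""
        if self.data[a] == 0:
            self.data[a] += 1
            return True
        return False

    def t_and_p(self, a, b):
        """If no occurrence of a is present, produce one occurrence of b
        in the same step and succeed; otherwise fail."""
        if self.data[a] == 0:
            self.data[b] += 1
            return True
        return False

ds = DataSpace()
assert ds.tfa("a")           # 'a' is absent
assert ds.t_and_e("a")       # tests absence of 'a', then emits 'a'
assert not ds.tfa("a")       # 'a' is now present
assert ds.t_and_p("b", "c")  # 'b' absent, so produce 'c'
assert not ds.tfa("c")
```

The paper's hierarchy result says these are not interchangeable: for instance, t&e(a) cannot in general be simulated by a non-atomic sequence of tfa(a) followed by an output of a, since another process may interleave between the two steps.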
From Concurrent Logic Programming to Concurrent Constraint Programming
 Programming, in: Advances in Logic Programming Theory
, 1993
Abstract

Cited by 9 (0 self)
The endeavor to extend logic programming to a language suitable for concurrent systems has stimulated intensive research over the last decade, resulting in a large variety of proposals. A common feature of the various approaches is the attempt to define mechanisms for concurrency within the logical paradigm, the driving ideal being a balance between expressiveness and declarative reading. In this survey we present the motivations, the principal lines along which the field has developed, the various paradigms which have been proposed, and the main approaches to the semantic foundations.

1 Introduction

Among the various reasons which have contributed to the popularity of logic programming, one is the opinion that it is an inherently parallel language, and therefore suitable for parallel and distributed architectures. The pure language can already be regarded as a model for parallel computation: in the so-called process interpretation (van Emden and de Lucena 1982; Shapiro 1983), the goal...
Symmetric and Asymmetric Asynchronous Interaction
 ICE 2008
, 2008
Abstract

Cited by 3 (0 self)
We investigate classes of systems based on different interaction patterns with the aim of achieving distributability. As our system model we use Petri nets. In Petri nets, an inherent concept of simultaneity is built in, since when a transition has more than one preplace, it can be crucial that tokens are removed instantaneously. When modelling a system which is intended to be implemented in a distributed way by a Petri net, this built-in concept of synchronous interaction may be problematic. To investigate the problem we assume that removing tokens from places can no longer be considered instantaneous. We model this by inserting silent (unobservable) transitions between transitions and their preplaces. We investigate three different patterns for modelling this type of asynchronous interaction. Full asynchrony assumes that every removal of a token from a place is time-consuming. For symmetric asynchrony, tokens are only removed slowly in the case of backward-branched transitions, hence where the concept of simultaneous removal actually occurs. Finally, we consider a more intricate pattern by allowing tokens to be removed from the preplaces of backward-branched transitions asynchronously in sequence (asymmetric asynchrony). We investigate the effect of these different transformations of instantaneous interaction into asynchronous interaction patterns by comparing the behaviours of nets before and after insertion of the silent transitions. We exhibit for which classes of Petri nets we obtain equivalent behaviour with respect to failures equivalence. It turns out that the resulting hierarchy of Petri net classes can be described by semi-structural properties. In the case of full asynchrony and symmetric asynchrony, we obtain precise characterisations; for asymmetric asynchrony we obtain lower and upper bounds. We briefly comment on possible applications of our results to Message Sequence Charts.
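The full-asynchrony transformation described in this abstract can be sketched as a net rewrite: every place-to-transition arc gets a fresh buffer place and a silent (tau) transition, so token removal is no longer instantaneous. The net representation below (pre-sets as a dict) and all names are ad hoc assumptions for illustration, not the paper's formal construction.

```python
# Sketch of the "full asynchrony" transformation on a Petri net: for every
# arc from a place p to a transition t, insert a buffer place and a silent
# (tau) transition, so that removing the token from p and firing t become
# two separate steps.
def fully_asynchronous(places, transitions, pre):
    """pre maps each transition to its set of preplaces.
    Returns (places, transitions, pre) of the transformed net."""
    new_places = set(places)
    new_transitions = set(transitions)
    new_pre = {}
    for t in transitions:
        new_pre[t] = set()
        for p in pre[t]:
            buf = f"buf_{p}_{t}"   # fresh buffer place
            tau = f"tau_{p}_{t}"   # fresh silent transition
            new_places.add(buf)
            new_transitions.add(tau)
            new_pre[tau] = {p}     # tau slowly removes the token from p ...
            new_pre[t].add(buf)    # ... and t later consumes it from buf
    return new_places, new_transitions, new_pre

# A transition with two preplaces: its simultaneous token removal is
# replaced by two independent silent removals.
P, T, R = fully_asynchronous({"p1", "p2"}, {"t"}, {"t": {"p1", "p2"}})
assert R["t"] == {"buf_p1_t", "buf_p2_t"}
assert R["tau_p1_t"] == {"p1"}
```

Under symmetric asynchrony, one would apply the insertion only to transitions with more than one preplace (the backward-branched ones), since that is where simultaneous removal actually matters.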
G.: A process algebraic view of shared dataspace coordination
 J. Log. Algebr. Program
, 2008
Abstract

Cited by 1 (0 self)
Coordination languages were introduced in the early 80’s as programming notations to manage the interaction among concurrent collaborating software entities. Process algebras have been successfully exploited for the formal definition of the semantics of these languages and as a framework for the comparison of their expressive power. We provide an incremental and uniform presentation of a collection of process calculi featuring coordination primitives for the shared dataspace coordination model (inspired by Linda, JavaSpaces, TSpaces, and the like). On the one hand, the incremental presentation of the various calculi permits reasoning about specific linguistic constructs of coordination languages. On the other hand, the uniform presentation of a family of related calculi allows us to obtain an overview of the main results achieved in the literature on different (and unrelated) calculi.

Key words: process calculi, coordination models and languages, tuple spaces, event notification, transactions
On the Structural Simplicity of Machines and Languages
, 1993
Abstract
We employ an algebraic method for comparing the structural simplicity of families of abstract machines. We associate with each family of machines a first-order structure that includes machines as objects, composition operations (which construct larger machines from smaller ones) as functions, and a set of semantic relations. We then compare families of machines by studying the existence of homomorphisms between the associated structures. Given families of machines L, L′ with associated structures S, S′, we say that L is simpler than L′ if there is a homomorphism of S into S′, but not vice versa. We show that across several abstract machine models (finite automata, Turing machines, and logic programs) deterministic machines are simpler than nondeterministic machines, and nondeterministic machines are simpler than alternating machines. Our results cross computational complexity boundaries. We show that for Turing machines, finite automata and logic programs every non...
ON CHARACTERISING DISTRIBUTABILITY ∗
, 2012
Vol. 9(3:17), 2013, pp. 1–34, www.lmcsonline.org
On Synchronous and Asynchronous Interaction in Distributed Systems
Abstract
When considering distributed systems, a central issue is how to deal with interactions between components. In this paper, we investigate the paradigms of synchronous and asynchronous interaction in the context of distributed systems. We investigate to what extent, or under which conditions, synchronous interaction is a valid concept for the specification and implementation of such systems. We choose Petri nets as our system model and consider different notions of distribution by associating locations to elements of nets. First, we investigate the concept of simultaneity which is inherent in the semantics of Petri nets when transitions have multiple input places. We assume that tokens may only be taken instantaneously by transitions on the same location. We exhibit a hierarchy of ‘asynchronous’ Petri net classes by different assumptions on possible distributions. Alternatively, we assume that the synchronisations specified in a Petri net are crucial system properties. Hence transitions and their preplaces may no longer be placed on separate locations. We then answer the question of which systems may be implemented in a distributed way without restricting concurrency, assuming that locations are inherently sequential. It turns out that in both settings we find semi-structural properties of Petri nets describing exactly the problematic situations for interactions in distributed systems.
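The second setting in this abstract, where a transition may not be separated from its preplaces, amounts to a simple check on a location assignment. The following sketch makes that check concrete; the representation (pre-sets as a dict, locations as a dict) and the function name are illustrative assumptions, not the paper's formalism.

```python
# Sketch of the co-location constraint: given a Petri net's pre-sets and an
# assignment of locations to places and transitions, check that every
# transition sits on the same location as each of its preplaces, so that
# token removal can be instantaneous.
def respects_colocation(pre, location):
    """pre: transition -> set of preplaces; location: net element -> location id."""
    return all(location[t] == location[p]
               for t, preplaces in pre.items()
               for p in preplaces)

pre = {"t1": {"p1", "p2"}}
ok  = {"t1": 0, "p1": 0, "p2": 0}   # everything on one location
bad = {"t1": 0, "p1": 0, "p2": 1}   # p2 separated from t1
assert respects_colocation(pre, ok)
assert not respects_colocation(pre, bad)
```

A transition with preplaces on two different locations is exactly the problematic interaction pattern: distributing it forces either a synchronisation across locations or a restriction of concurrency.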