Results 11–20 of 43
Branching vs. linear time – semantical perspective
 In Proc. 5th Int’l Symp. on ATVA, LNCS 4762
Abstract

Cited by 12 (2 self)
Abstract. The discussion in the computer-science literature of the relative merits of linear versus branching-time frameworks goes back to the early 1980s. One of the beliefs dominating this discussion has been that the linear-time framework is not expressive enough semantically, leaving linear-time logics lacking in expressiveness. In this work we examine the branching-versus-linear issue from the perspective of process equivalence, which is one of the most fundamental notions in concurrency theory, as defining a notion of process equivalence essentially amounts to defining a semantics for processes. Over the last three decades numerous notions of process equivalence have been proposed. Researchers in this area no longer try to identify the “right” notion of equivalence. Rather, focus has shifted to providing taxonomic frameworks, such as “the linear-branching spectrum”, for the many proposed notions, and to determining their suitability for different applications. We revisit this issue here from a fresh perspective. We postulate three principles that we view as fundamental to any discussion of process equivalence. First, we borrow from research in denotational semantics and take contextual equivalence as the primary notion of equivalence. This eliminates many testing scenarios as either too strong or too weak. Second, we require the description of a process to fully specify all relevant behavioral aspects of the process. Finally, we require observable process behavior to be reflected in its input/output behavior. Under these postulates the distinctions between the linear and branching semantics tend to evaporate. As an example, we apply these principles to the framework of transducers, a classical notion of state-based processes that dates back to the 1950s and is well suited to hardware modeling. We show that our postulates result in a unique notion of process equivalence, which is trace-based rather than tree-based.
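The abstract's claim that, under the stated postulates, transducer equivalence collapses to trace equivalence can be illustrated with a toy check. This is a hypothetical sketch: the two machines and the bounded-depth comparison are invented for illustration and are far simpler than the paper's formal setting.

```python
from itertools import product

def run(delta, start, inputs):
    """Return the output trace of a deterministic transducer on an input word."""
    state, outs = start, []
    for a in inputs:
        state, o = delta[(state, a)]
        outs.append(o)
    return tuple(outs)

# Transducer A: a single state that echoes its input.
delta_a = {("s", 0): ("s", 0), ("s", 1): ("s", 1)}

# Transducer B: two states, a different internal shape, but the same
# input/output behaviour (it also echoes its input).
delta_b = {("p", 0): ("q", 0), ("p", 1): ("q", 1),
           ("q", 0): ("p", 0), ("q", 1): ("p", 1)}

def trace_equivalent(d1, s1, d2, s2, alphabet, depth):
    """Compare the input/output traces of two transducers on all words up to `depth`."""
    return all(run(d1, s1, w) == run(d2, s2, w)
               for k in range(depth + 1)
               for w in product(alphabet, repeat=k))

print(trace_equivalent(delta_a, "s", delta_b, "p", [0, 1], 6))  # True
```

Despite their different state spaces, the two machines are indistinguishable by any input/output experiment, which is exactly the trace-based notion of equivalence the abstract describes.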
Modeling Indirect Interaction in Open Computational Systems
 Proc. Theory and Practice of Open Computational Systems (TAPOCS
, 2003
Abstract

Cited by 12 (4 self)
Open systems are part of a paradigm shift from algorithmic to interactive computation. Multi-agent systems in nature that exhibit emergent behavior and stigmergy offer inspiration for research in open systems and enabling technologies for collaboration. This contribution distinguishes two types of interaction: direct, via messages, and indirect, via persistent observable state changes. Models of collaboration are incomplete if they fail to explicitly represent indirect interaction; a richer set of system behaviors is possible when computational entities interact indirectly, including via analog media such as the real world, than when interaction is exclusively direct. Indirect interaction is therefore a precondition for certain emergent behaviors.
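The distinction the abstract draws can be sketched minimally (a hypothetical example, not from the paper): direct interaction passes a message to a recipient, while indirect interaction deposits marks in a persistent, observable medium that later agents read, possibly long after the depositor is gone.

```python
class Medium:
    """Persistent observable state; deposited marks remain visible to later observers."""
    def __init__(self):
        self.marks = []
    def deposit(self, mark):
        self.marks.append(mark)
    def observe(self):
        return list(self.marks)

def agent_direct(message):
    # Direct interaction: behaviour depends only on the message received.
    return f"reply:{message}"

def agent_indirect(medium):
    # Indirect interaction: behaviour depends on state left behind by
    # earlier agents, in the spirit of stigmergy.
    return len(medium.observe())

m = Medium()
m.deposit("pheromone")
m.deposit("pheromone")
print(agent_direct("hello"))  # reply:hello
print(agent_indirect(m))      # 2
```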
Notes on P-Algebra (2): Group Presentation of Process Structure
, 1992
Abstract

Cited by 5 (5 self)
We give yet another presentation of the process structures introduced in [1] and prove its equivalence to the original one. The presentation enjoys several convenient features not found in the original presentation, though in compensation for a certain loss of simplicity. Among others, it dispenses with the use of a choice function for quotient construction. The new presentation offers a different viewpoint on the theory of process structure, and may provide a useful tool for some purposes.

1 Introduction
The present notes show that the process structures introduced in [1] can be given yet another presentation. Since a process structure is intended to provide a basic stratum of general process theory by giving a mathematical distillation of a basic element of concurrent processes (cf. [3]), its alternative presentation is interesting, because it offers a new way to look at the theory, and especially because different presentations are useful for different purposes. The presenta...
Axioms for Definability and Full Completeness
 in Proof, Language and Interaction: Essays in Honour of Robin
, 2000
Abstract

Cited by 5 (1 self)
…the full abstraction problem for PCF (see [BCL86, Cur93, Ong95] for surveys). The importance of full abstraction for the semantics of programming languages is that it is one of the few quality filters we have. Specifically, it provides a clear criterion for assessing how definitive a semantic analysis of some language is. It must be admitted that to date the quest for fully abstract models has not yielded many obvious applications; but it has generated much of the deepest work in semantics. Perhaps it is early days yet. Recently, game semantics has been used to give the first syntax-independent constructions of fully abstract models for a number of programming languages, including PCF [AJM96, HO96, Nic94], richer functional languages [AM95, McC96b, McC96a, HY97], and languages with non-functional features such as reference types and non-local control constructs [AM97c, AM97b, AM97a, Lai97]. A noteworthy feature is that the key definability results for the richer languages are proved by a reduction to...
Total correctness refinement for sequential reactive systems
 In proceedings of TPHOLs 2000. (13th International Conference on Theorem Proving in Higher Order Logics), number 1869 in LNCS
, 2000
Abstract

Cited by 3 (0 self)
Abstract. We introduce a coinductively defined refinement relation on sequential nondeterministic reactive systems that guarantees total correctness. It allows the more refined system both to have less nondeterminism in its outputs and to accept more inputs than the less refined system. Data reification in VDM is a special case of this refinement. Systems are considered at what we have called fine and medium levels of granularity. At the fine-grain level, a system’s internal computational steps are described. The fine-grain level abstracts to a medium-grain level where only input/output and termination behaviour is described. The refinement relation applies to medium-grain systems. The main technical result of the paper is the proof that refinement is respected by contexts constructed from fine-grain systems. In other words, we show that refinement is a precongruence. The development has been mechanized in PVS to support its use in case studies.
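The refinement ordering described above can be sketched in a one-step, finite form. This is an assumed simplification: here a "system" is just a map from its accepted inputs to the set of outputs it may produce, whereas the paper defines the relation coinductively over full reactive behaviours.

```python
def refines(concrete, abstract):
    """concrete refines abstract iff it accepts every input the abstract
    system accepts, and on those inputs produces only outputs the abstract
    system allows: less output nondeterminism, more accepted inputs."""
    return all(inp in concrete and concrete[inp] <= abstract[inp]
               for inp in abstract)

abstract = {"a": {0, 1}, "b": {2}}
concrete = {"a": {0}, "b": {2}, "c": {7}}  # fewer outputs on "a", extra input "c"
too_loose = {"a": {0, 1, 9}}               # invents output 9 and drops input "b"

print(refines(concrete, abstract))   # True
print(refines(too_loose, abstract))  # False
```

The `concrete` system is a valid refinement because every behaviour it exhibits on the abstract system's inputs was already permitted; `too_loose` fails on both counts.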
Session-Based Compilation Framework for Multicore Programming
Abstract

Cited by 2 (2 self)
This paper outlines a general picture of our ongoing work under the EU Mobius and Sensoria projects on a type-based compilation and execution framework for a class of multicore CPUs. Our focus is to harness the power of concurrency and asynchrony in one of the major forms of multicore CPUs, based on distributed, non-coherent memory, through the use of type-directed compilation. The key idea is to regard explicit asynchronous data transfer among local caches as typed communication among processes. By typing imperative processes with a variant of session types, we obtain both type-safe and efficient compilation into processes distributed over multiple cores with local memories.
Structure and Behaviour in Hardware Verification
 Higher Order Logic Theorem Proving and its applications, 6th International Workshop, HUG ’93, Vancouver, B.C. Canada, number 780 in Lecture
, 1993
Abstract

Cited by 1 (0 self)
In this paper we review how hardware has been described in the formal hardware verification community. Recent developments in hardware description are evaluated against the background of the use of hardware description languages, and also in relation to programming languages. The notions of structure and behaviour are crucial to this discussion.

1 Introduction
Hardware has long been described using hardware description languages (hdls). More recently, in the field of hardware verification, logic-based notations have been used. In this paper we explore how the relationship between the structure and behaviour of circuits has been perceived over time in the formal verification field. The structure of this paper is as follows: we give our view of hdls and simulation prior to the advent of formal methods, then we comment on formal logic methods used to describe and reason about hardware. Connections with conventional programming languages are also explored. Hardware Description Languages an...
Processes and Games
, 2003
Abstract

Cited by 1 (0 self)
A general theory of computing is important if we wish to have a common mathematical footing on the basis of which diverse scientific and engineering efforts in computing are uniformly understood and integrated. A quest for such a general theory may take different paths. As a case for one of the possible paths towards a general theory, this paper establishes a precise connection between a game-based model of sequential functions by Hyland and Ong on the one hand, and a typed version of the π-calculus on the other. This connection has been instrumental in our recent efforts to use the π-calculus as a basic mathematical tool for representing diverse classes of behaviours, even though the exact form of the correspondence has not been presented in published form. By redeeming this correspondence we try to make explicit a convergence of ideas and structures between two distinct threads of Theoretical Computer Science. This convergence indicates a methodology for organising our understanding of computation, and that methodology, we argue, suggests one of the promising paths to a general theory.
Designing and understanding the behaviour of systems
, 2007
Abstract

Cited by 1 (0 self)
Robin Milner observed in 1973 that the primary task of computers appeared to be interacting with their environment, yet the theory of programs and programming at that time seemed to ignore this fact completely [36, 37]. As a consequence, he set out working on his seminal book [38, 40] in which he developed CCS, the Calculus of Communicating Systems. At the same time two other main process algebras were developed, namely ACP (Algebra of Communicating Processes, [5]) and CSP (Communicating Sequential Processes, [27, 28]). Interesting as they were, these process algebras were too bare to be used for the description of actual systems, mainly because they lacked a proper integration of data. In order to solve this, process-algebraic specification languages were designed (most notably LOTOS [29] and PSF [35]) which contained both data and processes. A problem with these languages was that they were too complex to act as a basic carrier for the development of behavioural analysis techniques. We designed an intermediate language, namely mCRL2 (and its direct predecessor µCRL [21, 19]), as a stripped-down process specification language, or an extended process algebra. It contains exactly those ingredients needed for a complete behavioural specification, and its (relative) simplicity allows one to concentrate on proof and analysis techniques for process behaviour. Throughout the years many of these techniques have been developed. To mention a few: the
Traces for Coalgebraic Components
 MATH. STRUCT. IN COMP. SCIENCE
, 2010
Abstract
This paper contributes a feedback operator, in the form of a monoidal trace, to the theory of coalgebraic, state-based modelling of components. The feedback operator on components is shown to satisfy the trace axioms of Joyal, Street and Verity. We employ McCurdy’s tube diagrams, an extension of standard string diagrams for monoidal categories, for representing and manipulating component diagrams. The microcosm principle then yields a canonical “inner” traced monoidal structure on the category of resumptions (elements of final coalgebras / components). This generalises an observation by Abramsky, Haghverdi and Scott.
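A feedback operator of this flavour can be sketched concretely. This is a hypothetical illustration: a state-based component whose fed-back wire is closed through a one-step delay register so the loop is well-defined; the component's transition function is invented for the demo, and the paper's monoidal trace is the categorical counterpart of this kind of construction.

```python
def component(state, a, x):
    """A Mealy-style component: (state, inputs (a, x)) -> (state', outputs (b, x'))."""
    return state + a + x, (state + a, x + 1)  # invented transition for the demo

def trace(comp, init_state, init_wire):
    """Close the x-wire of comp back onto itself through a delay register:
    the x-output produced at step n is fed in as the x-input at step n+1."""
    state, wire = init_state, init_wire
    def step(a):
        nonlocal state, wire
        state, (b, new_wire) = comp(state, a, wire)
        wire = new_wire  # held in the register until the next step
        return b
    return step

f = trace(component, init_state=0, init_wire=0)
print([f(a) for a in [1, 2, 3]])  # [1, 3, 7]
```

After tracing out the x-wire, the component presents only its a-input and b-output to the environment, which is the "hidden internal wire" picture that string (and tube) diagrams make geometric.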