Results 1–10 of 11
Proving congruence of bisimulation in functional programming languages
 Information and Computation, 1996
Abstract

Cited by 109 (1 self)
Email: howe@research.att.com. We give a method for proving congruence of bisimulation-like equivalences in functional programming languages. The method applies to languages that can be presented as a set of expressions together with an evaluation relation. We use this method to show that some generalizations of Abramsky's applicative bisimulation are congruences whenever evaluation can be specified by a certain natural form of structured operational semantics. One of the generalizations handles nondeterminism and diverging computations. © 1996 Academic Press, Inc.
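The abstract leaves Abramsky's applicative bisimulation unstated; for orientation, a standard formulation for closed terms of a call-by-name λ-calculus (our summary, not quoted from the paper) is:

```latex
% Applicative similarity on closed terms \Lambda^0, with big-step
% evaluation s \Downarrow \lambda x.\,s' (s evaluates to an abstraction):
s \lesssim t \;\Longleftrightarrow\;
  \forall s'.\; \bigl( s \Downarrow \lambda x.\,s'
    \;\Rightarrow\; \exists t'.\; t \Downarrow \lambda x.\,t'
    \;\wedge\; \forall a \in \Lambda^0.\; s'[a/x] \lesssim t'[a/x] \bigr)
% Applicative bisimilarity is the symmetrization:
s \approx t \;\Longleftrightarrow\; s \lesssim t \;\wedge\; t \lesssim s
```

Congruence then means that s ≈ t implies C[s] ≈ C[t] for every program context C; Howe's method proves this by closing ≲ under the term constructors (the "precongruence candidate") and showing that this closure coincides with ≲ itself.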
Bisimulation for higher-order process calculi
 Information and Computation, 1996
Abstract

Cited by 62 (5 self)
A higher-order process calculus is a calculus for communicating systems which contains higher-order constructs like communication of terms. We analyse the notion of bisimulation in these calculi. We argue that both the standard definition of bisimulation (i.e., the one for CCS and related calculi), as well as higher-order bisimulation [E. Astesiano,
A Logic for Probabilities in Semantics
2003
Abstract

Cited by 9 (1 self)
Probabilistic computation has proven to be a challenging and interesting area of research, both from the theoretical perspective of denotational semantics and the practical perspective of reasoning about probabilistic algorithms. On the theoretical side, the probabilistic powerdomain of Jones and Plotkin represents a significant advance. Further work, especially by Alvarez-Manilla, has greatly improved our understanding of the probabilistic powerdomain, and has helped clarify its relation to classical measure and integration theory. On the practical side, many researchers such as Kozen, Segala, Desharnais, and Kwiatkowska, among others, study problems of verification for probabilistic computation by defining various suitable logics for the classes of processes under study. The work reported here begins to bridge the gap between the domain-theoretic and verification (model checking) perspectives on probabilistic computation by exhibiting sound and complete logics for probabilistic powerdomains that arise directly from given logics for the underlying domains. The category in which the construction is carried out generalizes Scott’s Information Systems by taking account of full classical sequents. Via Stone duality, following Abramsky’s Domain Theory in Logical Form, all known interesting categories of domains are embedded as subcategories. So the results reported here properly generalize similar constructions on specific categories of domains. The category offers a promising universe of semantic domains characterized by a very rich structure and good preservation properties of standard constructions. Furthermore, because the logical constructions make use of full classical sequents, the morphisms have a natural nondeterministic interpretation. Thus the category is a natural one in which to investigate the relationship between probabilistic and nondeterministic computation.
We discuss the problem of integrating probabilistic and nondeterministic computation after presenting the construction of logics for probabilistic powerdomains.
A Full Formalisation of π-Calculus Theory in the Calculus of Constructions
1997
Abstract

Cited by 8 (0 self)
A formalisation of the π-calculus in the Coq system is presented. Based on a de Bruijn notation for names, our...
Coinductive Characterizations of Applicative Structures
 Math. Structures in Comp. Sci. 9(4):403–435, 1998
Abstract

Cited by 3 (0 self)
We discuss new ways of characterizing, as maximal fixed points of monotone operators, observational congruences on terms and, more generally, equivalences on applicative structures. These characterizations naturally induce new forms of coinduction principles for reasoning on program equivalences, which are not based on Abramsky's applicative bisimulation. We discuss in particular what we call the cartesian coinduction principle, which arises when we exploit the elementary observation that functional behaviours can be expressed as cartesian graphs. Using the paradigm of final semantics, the soundness of this principle over an applicative structure can be expressed easily by saying that the applicative structure can be construed as a strongly extensional coalgebra for the functor (P(Θ)) Φ (P(Θ)). In this paper, we present two general methods for showing the soundness of this principle. The first applies to approximable applicative structures. Many c.p.o. models in...
Making a Productive Use of Failure to Generate Witnesses for Coinduction from Divergent Proof Attempts
 RR-0004 in the Informatics Report Series, 2000
Abstract

Cited by 2 (1 self)
The choice of a witness relation is a fundamental step in the process of proof by coinduction. These techniques are based on middle-out reasoning (delaying the choice of witness for as long as possible by using meta-variables and higher-order unification) and proof critics (exploiting information from failed proof attempts to modify witnesses). Coinduction is the dual of induction and is used to deal naturally with infinite processes. It was first investigated seriously in the field of concurrency [25], where looping communication networks are commonplace. It is also used in so-called "lazy" functional languages, where the evaluation procedure only evaluates functions when they are required and may not fully evaluate them. In this way a potentially infinite process may be present in a program without forcing the entire program to be non-terminating. The semantics of lazy languages are generally expressed in an operational style. This work concentrates on the use of coinduction with the operational semantics of a lazy functional language. Coinduction has also been proposed for use with object-oriented languages [20], cryptographic protocols [1] and the calculus of mobile ambients [21]. Tools have been provided for coinduction in several theorem-proving environments. One of these, the Edinburgh Concurrency Workbench [12], is fully automated; it deals with problems described in process algebras. In other areas, such as functional languages, automation has not been attempted. The choice of the bisimulation needed by a proof is equivalent to the choice of induction scheme in inductive proofs [15]. Like the choice of induction scheme, the choice of bisimulation is a hard step in coinductive proof. This work presents an auto...
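The point about lazy evaluation above, that a potentially infinite process can live inside a terminating program, is easy to see with generators; a minimal Python sketch (the function names are ours, for illustration only):

```python
from itertools import islice

def ones():
    """The infinite stream 1, 1, 1, ... - never fully evaluated."""
    while True:
        yield 1

def stream_map(f, xs):
    """Pointwise map over a (possibly infinite) stream, produced on demand."""
    for x in xs:
        yield f(x)

# Only the demanded prefix is ever computed; the program still terminates.
prefix = list(islice(stream_map(lambda n: n + 1, ones()), 5))
print(prefix)  # [2, 2, 2, 2, 2]
```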
Communication Errors in the π-Calculus are Undecidable
Abstract

Cited by 1 (0 self)
We present an undecidability proof of the notion of communication errors in the polyadic π-calculus. The demonstration follows a general pattern of undecidability proofs: reducing a well-known undecidable problem to the problem in question. We make use of an encoding of the λ-calculus into the π-calculus to show that the decidability of communication errors would solve the problem of deciding whether a lambda term has a normal form.
An Output-Based Semantics of Λµ with Explicit Substitution in the π-Calculus
 IFIP-TCS’12, LNCS 7604, 2012
Abstract

Cited by 1 (1 self)
We study the Λµ-calculus, extended with explicit substitution, and define a compositional output-based translation into a variant of the π-calculus with pairing. We show that this translation preserves single-step explicit head reduction with respect to contextual equivalence. We use this result to show operational soundness for head reduction, adequacy, and operational completeness. Using a notion of implicative type-context assignment for the π-calculus, we also show that assignable types are preserved by the translation. We finish by showing that termination is preserved.
Witnesses for Coinduction from Divergent Proof Attempts
Abstract
Coinduction is a proof rule, dual to induction. It allows reasoning about non-well-founded structures such as lazy lists or streams and is of particular use for reasoning about equivalences. A central difficulty in the automation of coinductive proof is the choice of a relation (called a bisimulation). We present an automation of coinductive theorem proving based on the idea of proof planning. Proof planning constructs the higher-level steps in a proof, using knowledge of the general structure of a family of proofs and exploiting this knowledge to control the proof search. Part of proof planning involves the use of failure information to modify the plan by means of a proof critic, which exploits the information gained from the failed proof attempt. Our approach was to develop a strategy that makes an initial simple guess at a bisimulation and then uses generalisation techniques, motivated by a critic, to refine this guess, so that a larger class of coinductive problems can be automatically verified. The implementation of this strategy has focused on the use of coinduction to prove the equivalence of programs in a small lazy functional language similar to Haskell. We have developed a proof plan for coinduction and an associated critic. These have been implemented in CoCLAM, an extended version of CLAM, with encouraging results. The planner has been successfully tested on a number of theorems.
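To make the "initial simple guess at a bisimulation" concrete: a coinductive law such as map f (iterate f x) = iterate f (f x) relates two infinite streams, and any finite prefix check is only an approximation of the bisimulation a real proof would supply. A hedged Python sketch (our own illustration, unrelated to CoCLAM's internals):

```python
from itertools import islice

def iterate(f, x):
    """The infinite stream x, f(x), f(f(x)), ..."""
    while True:
        yield x
        x = f(x)

def smap(f, xs):
    """Pointwise map over a (possibly infinite) stream."""
    for x in xs:
        yield f(x)

def prefix_eq(xs, ys, n):
    """Finite approximation of stream equality: compare the first n elements.
    A genuine coinductive proof replaces n by a bisimulation relation."""
    return list(islice(xs, n)) == list(islice(ys, n))

# Candidate law: map f (iterate f x) = iterate f (f x)
f = lambda n: 2 * n
print(prefix_eq(smap(f, iterate(f, 1)), iterate(f, f(1)), 10))  # True
```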
Explaining the lazy Krivine machine using explicit substitution and addresses
 Higher Order and Symbolic Computation, 2007
Abstract
In a previous paper, Benaissa, Lescanne, and Rose extended the weak lambda-calculus of explicit substitution λσ_w with addresses, so that it gives an account of the sharing implemented by lazy functional language interpreters. We show in this paper that their calculus, called λσ^a_w, fits well with the lazy Krivine machine, which describes the core of a lazy (call-by-need) functional programming language implementation. The lazy Krivine machine implements sharing of term evaluation, which is essential for the efficiency of such languages. The originality of our proof is that it gives a very detailed account of the implemented strategy.
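The core of the (call-by-name) Krivine machine is small enough to sketch; the call-by-need variant the paper studies additionally threads a heap of addresses so that each argument closure is evaluated at most once. A minimal Python rendering (our own sketch, not the λσ^a_w calculus):

```python
# Terms in de Bruijn notation:
#   ('var', i) | ('lam', body) | ('app', fun, arg)
def krivine(term):
    """Run the call-by-name Krivine machine to weak head normal form.
    States are (term, env, stack); env holds closures, stack holds
    pending argument closures. The lazy variant would store argument
    closures in a heap of addresses and update them after evaluation."""
    env, stack = [], []
    while True:
        tag = term[0]
        if tag == 'app':               # push the argument as a closure, enter fun
            stack.append((term[2], env))
            term = term[1]
        elif tag == 'lam':
            if not stack:              # weak head normal form reached
                return term, env
            env = [stack.pop()] + env  # bind top of stack to index 0
            term = term[1]
        else:                          # 'var': enter the closure bound to index i
            term, env = env[term[1]]

# (\x. x) (\y. y)  evaluates to  \y. y
I = ('lam', ('var', 0))
whnf, _ = krivine(('app', I, I))
print(whnf)  # ('lam', ('var', 0))
```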