Results 1–8 of 8
Bisimilarity for a First-Order Calculus of Objects with Subtyping
 In Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Programming Languages
, 1996
Abstract

Cited by 43 (2 self)
Bisimilarity (also known as `applicative bisimulation') has attracted a good deal of attention as an operational equivalence for calculi. It approximates or even equals Morris-style contextual equivalence and admits proofs of program equivalence via coinduction. It has an elementary construction from the operational definition of a language. We consider bisimilarity for one of the typed object calculi of Abadi and Cardelli. By defining a labelled transition system for the calculus in the style of Crole and Gordon and using a variation of Howe's method, we establish two central results: that bisimilarity is a congruence, and that it equals contextual equivalence. So two objects are bisimilar iff no amount of programming can tell them apart. Our third contribution is to show that bisimilarity soundly models the equational theory of Abadi and Cardelli. This is the first study of contextual equivalence for an object calculus and the first application of Howe's method to subtyping. By the...
A Congruence Theorem for Structured Operational Semantics of Higher-Order Languages
, 1997
Abstract

Cited by 36 (0 self)
In this paper we describe the promoted tyft/tyxt rule format for defining higher-order languages. The rule format is a generalization of Groote and Vaandrager's tyft/tyxt format in which terms are allowed as labels on transitions in rules. We prove that bisimulation is a congruence for any language defined in promoted tyft/tyxt format and demonstrate the usefulness of the rule format by presenting promoted tyft/tyxt definitions for the lazy λ-calculus, CHOCS and the π-calculus.

1 Introduction

For a programming language definition that uses bisimulation as the notion of equivalence, it is desirable for the bisimulation relation to be compatible with the language constructs, i.e., that bisimulation be a congruence. Several rule formats have been defined, so that as long as a definition satisfies certain syntactic constraints, the defined bisimulation relation is guaranteed to be a congruence. However, these rule formats have not been widely used for defining languages with higher...
A Tutorial on Coinduction and Functional Programming
 In Glasgow Functional Programming Workshop
, 1994
Abstract

Cited by 27 (1 self)
Coinduction is an important tool for reasoning about unbounded structures. This tutorial explains the foundations of coinduction, and shows how it justifies intuitive arguments about lazy streams, of central importance to lazy functional programmers. We explain from first principles a theory based on a new formulation of bisimilarity for functional programs, which coincides exactly with Morris-style contextual equivalence. We show how to prove properties of lazy streams by coinduction and derive Bird and Wadler's Take Lemma, a well-known proof technique for lazy streams.
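The Take Lemma referred to above reduces a coinductive equality between two streams to equality of all their finite prefixes: two streams are equal iff `take n` of each agrees for every n. A minimal sketch of that idea, using Python generators in place of lazy streams (the helper names `iterate`, `smap`, and `take` are ours, not from the tutorial):

```python
from itertools import islice

def iterate(f, x):
    # The infinite stream x, f(x), f(f(x)), ...
    while True:
        yield x
        x = f(x)

def smap(f, xs):
    # Map f over a (possibly infinite) stream, lazily.
    for x in xs:
        yield f(x)

def take(n, xs):
    # Finite prefix of a stream -- the observation used by the Take Lemma.
    return list(islice(xs, n))

# Take-Lemma-style check of a classic coinductive equality:
#   map f (iterate f x)  ==  iterate f (f x)
# We cannot compare infinite streams directly, but we can compare
# arbitrary finite prefixes.
f = lambda n: n + 1
lhs = take(5, smap(f, iterate(f, 0)))
rhs = take(5, iterate(f, f(0)))
assert lhs == rhs == [1, 2, 3, 4, 5]
```

A finite prefix check is of course only evidence, not a proof; the point of the tutorial's coinduction principle is that exhibiting a bisimulation relating the two streams establishes the equality for all prefixes at once.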
FUNDIO: A Lambda-Calculus with a letrec, case, Constructors, and an IO-Interface: Approaching a Theory of unsafePerformIO
, 2003
Abstract

Cited by 7 (0 self)
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language “FUNDIO”, which is also a call-by-need lambda calculus, is investigated. The syntax of “FUNDIO” has case, letrec, constructors and an IO-interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences is defined, and a context lemma is proved. This enables us to study a semantics of “FUNDIO” and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show that a considerable set of program transformations is correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and shown to be correct w.r.t. contextual equivalence. Correctness of strictness optimizations also justifies correctness of parallel evaluation.
Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output, and also to provide a useful semantics for this combination.
It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO.
The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
Themes in Final Semantics
 Dipartimento di Informatica, Università di
, 1998
Abstract

Cited by 5 (2 self)
Once upon a time there was a king sitting on a sofa, who said to the queen: tell me a story. The queen began: "Once upon a time there was a king sitting on a sofa
Denotational Semantics Using an Operationally-Based Term Model
 In Proc. 24th ACM Symposium on Principles of Programming Languages
, 1997
Abstract

Cited by 1 (0 self)
We introduce a method for proving the correctness of transformations of programs in languages like Scheme and ML. The method consists of giving the programs a denotational semantics in an operationally-based term model in which interaction is the basic observable, and showing that the transformation is meaning-preserving. This allows us to consider correctness for programs that interact with their environment without terminating, and also for transformations that change the internal store behavior of the program. We illustrate the technique on one of the Meyer-Sieber examples, and we use it to prove the correctness of assignment elimination for Scheme. The latter is an important but subtle step for Scheme compilers; we believe ours is the first proof of its correctness.

1 Introduction

Compilers for higher-order languages typically perform elaborate program transformations in order to improve performance. Such transformations often change the storage behavior of the program. For concre...
Elementary Order Theory
Abstract
We review ordered sets. An order, or “comparison”, can be used to generate equivalences. We discuss inductively and coinductively defined sets. Such sets arise naturally when defining, and reasoning about, programs and processes. We define proof principles for such sets. We use the principle of coinduction to validate equivalences, and discuss when this is possible.
Scalable Detection of Similar Code: Techniques and Applications
, 2009
Abstract
Similar code, also known as cloned code, commonly exists in large software. Studies show that code duplication can incur higher software maintenance cost and more software defects. Thus, detecting similar code and tracking its migration have many important applications, including program understanding, refactoring, optimization, and bug detection. This dissertation presents novel, general techniques for detecting and analyzing both syntactic and semantic code clones. The techniques can scalably and accurately detect clones based on various similarity definitions, including trees, graphs, and functional behavior. They also have the general capability to help reduce software defects and advance code reuse. Specifically, this dissertation makes the following main contributions: First, it presents Deckard, a tree-based clone detection technique and tool. The key insight is that we accurately represent syntax trees and dependency graphs of a program as characteristic vectors in the Euclidean space and apply hashing algorithms to cluster similar vectors efficiently. Experiments show that Deckard scales to millions of lines of code with few false positives. In addition, Deckard is language-agnostic and easily parallelizable,
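The characteristic-vector idea behind Deckard can be sketched in a few lines of Python: count the node types of each fragment's syntax tree and bucket fragments with matching vectors as clone candidates. This is a deliberate simplification, not the tool's actual algorithm; real Deckard embeds vectors over parse-tree node kinds in Euclidean space and clusters near-neighbors with locality-sensitive hashing, whereas the exact-match bucketing below only finds structurally identical fragments.

```python
import ast
from collections import Counter, defaultdict

def characteristic_vector(src):
    # Count AST node types in a code fragment -- a crude stand-in for
    # Deckard's characteristic vectors over parse-tree node kinds.
    tree = ast.parse(src)
    return Counter(type(n).__name__ for n in ast.walk(tree))

snippets = {
    "a": "x = 1\ny = x + 2",
    "b": "u = 3\nv = u + 4",            # structural clone of "a"
    "c": "for i in range(3):\n    print(i)",
}

# Bucket fragments by their (hashable) vector; fragments sharing a
# bucket are clone candidates.
buckets = defaultdict(list)
for name, src in snippets.items():
    key = tuple(sorted(characteristic_vector(src).items()))
    buckets[key].append(name)

clones = [group for group in buckets.values() if len(group) > 1]
assert clones == [["a", "b"]]
```

Replacing exact bucketing with a distance threshold on the vectors (and a hashing scheme that lands nearby vectors in the same bucket) is what lets the real technique tolerate small edits between clones.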