Results 1–10 of 34
Efficient Type Inference for Higher-Order Binding-Time Analysis
In Functional Programming and Computer Architecture, 1991
Cited by 98 (4 self)

Abstract:
Binding-time analysis determines when variables and expressions in a program can be bound to their values, distinguishing between early (compile-time) and late (run-time) binding. Binding-time information can be used by compilers to produce more efficient target programs by partially evaluating programs at compile time. Binding-time analysis has been formulated in abstract interpretation contexts and more recently in a type-theoretic setting. In a type-theoretic setting binding-time analysis is a type inference problem: the problem of inferring a completion of a λ-term e with binding-time annotations such that e satisfies the typing rules. Nielson and Nielson and Schmidt have shown that every simply typed λ-term has a unique completion ê that minimizes late binding in TML, a monomorphic type system with explicit binding-time annotations, and they present exponential-time algorithms for computing such minimal completions. Gomard proves the same results for a variant of his two-level λ-calculus without a so-called “lifting” rule. He presents another algorithm for inferring completions in this somewhat restricted type system and states that it can be implemented in time O(n³). He conjectures that the completions computed are minimal.
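The compile-time/run-time split that binding-time analysis computes is exactly what a partial evaluator exploits. As a toy illustration (not the algorithm of the paper), assume a power function whose exponent is static (known at specialisation time) and whose base is dynamic; a specialiser can then emit a residual program with the recursion unrolled. The function name `specialise_power` and the string-based residual representation are invented for this sketch:

```python
# Toy binding-time illustration: the exponent n is static, the base x is
# dynamic.  Specialisation evaluates everything that depends only on n and
# emits a residual program mentioning only the dynamic input x.

def specialise_power(n):
    """Return the source of a residual power-to-the-n function (n static)."""
    expr = "1"
    for _ in range(n):
        expr = f"x * ({expr})"   # one unrolled multiplication per static step
    return f"lambda x: {expr}"

# The residual program runs with no trace of the static exponent:
power3 = eval(specialise_power(3))  # lambda x: x * (x * (x * (1)))
```

Here `specialise_power(3)` yields the text `lambda x: x * (x * (x * (1)))`: all control flow that depended only on the static input has been evaluated away, which is the efficiency gain the abstract refers to.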
Checking NFA equivalence with bisimulations up to congruence
Cited by 33 (6 self)

Abstract:
Abstract—We introduce bisimulation up to congruence as a technique for proving language equivalence of nondeterministic finite automata. Exploiting this technique, we devise an optimisation of the classical algorithm by Hopcroft and Karp [12] that, instead of computing the whole determinised automaton, explores only a small portion of it. Although the optimised algorithm remains exponential in the worst case (the problem is PSPACE-complete), experimental results show improvements of several orders of magnitude over the standard algorithm.
Type inference and semi-unification
In Proceedings of the ACM Conference on LISP and Functional Programming (LFP), Snowbird, 1988
Cited by 32 (7 self)

Abstract:
In the last ten years declaration-free programming languages with a polymorphic typing discipline (ML, B) have been developed to approximate the flexibility and conciseness of dynamically typed languages (LISP, SETL) while retaining the safety and execution efficiency of conventional statically typed languages (Algol 68, Pascal). These polymorphic languages can be type checked at compile time, yet allow functions whose arguments range over a variety of types. We investigate several polymorphic type systems, the most powerful of which, termed the Milner-Mycroft Calculus, extends the so-called let-polymorphism found in, e.g., ML with a polymorphic typing rule for recursive definitions. We show that semi-unification, the problem of solving inequalities over first-order terms, characterizes type checking in the Milner-Mycroft Calculus up to polynomial time, even in the restricted case where nested definitions are disallowed. This permits us to extend some infeasibility results for related combinatorial problems to type inference and to correct several claims and statements in the literature. We prove the existence of unique most general solutions of term inequalities, called most general semi-unifiers, and present an algorithm for computing them that terminates for all known inputs due to a novel “extended occurs check”. We conjecture this algorithm to be …
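Semi-unification solves inequalities between first-order terms; ordinary unification, which solves equalities, is the natural starting point. The following is a minimal sketch of first-order unification with the classic occurs check (the paper's “extended occurs check” generalises this test to the inequality setting; the tuple encoding of terms is invented for illustration):

```python
# Terms: ('var', name) for variables, (functor, arg1, ..., argk) otherwise.

def walk(t, s):
    """Follow variable bindings in substitution s to a representative."""
    while t[0] == 'var' and t[1] in s:
        t = s[t[1]]
    return t

def occurs(v, t, s):
    """Does variable v occur in t under substitution s?"""
    t = walk(t, s)
    if t[0] == 'var':
        return t[1] == v
    return any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a most general unifier as a dict, or None on failure."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if t1[0] == 'var':
        if occurs(t1[1], t2, s):
            return None        # occurs check: no finite solution exists
        s[t1[1]] = t2
        return s
    if t2[0] == 'var':
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None            # functor or arity clash
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s
```

For example, unifying f(x, b) with f(a, y) yields the substitution {x ↦ a, y ↦ b}, while unifying x with f(x) fails on the occurs check.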
A Study of Semantics, Types, and Languages for Databases and Object-Oriented Programming
1989
Cited by 7 (0 self)

Abstract:
The purpose of this thesis is to investigate a type system for databases and object-oriented programming and to design a statically typed programming language for these applications. Such a language should ideally have a static type system that supports:
• polymorphism and static type inference,
• rich data structures and operations to represent various data models for databases, including the relational model and more recent complex object models,
• central features of object-oriented programming, including user-definable class hierarchies, multiple inheritance, and data abstraction,
• the notion of extents and object-identities for object-oriented databases.
Without a proper formalism, it is not obvious that the construction of such a type system is possible. This thesis attempts to construct one such formalism and proposes a programming language that uniformly integrates all of the above features. The specific contributions of this thesis include:
• a simple semantics for ML polymorphism and an axiomatization of the equational theory of ML,
• a uniform generalization of the relational model to arbitrary complex database objects that …
A coalgebraic decision procedure for NetKAT
2014
Cited by 7 (4 self)

Abstract:
Program equivalence is a fundamental problem with practical applications across a variety of areas of computing, including compilation, optimization, software synthesis, formal verification, and many others. Equivalence is undecidable in general, but in certain settings it is possible to develop domain-specific languages that are expressive enough to be practical and yet sufficiently restricted so that equivalence remains decidable. In previous work we introduced NetKAT, a domain-specific language for specifying and verifying network packet-processing functions. NetKAT provides familiar constructs such as tests, assignments, union, sequential composition, and iteration, as well as custom primitives for modifying packet headers and encoding network topologies. Semantically, NetKAT is based on Kleene algebra with tests (KAT) and comes equipped with a sound and complete equational theory. Although NetKAT equivalence is decidable, the best known algorithm is hardly practical—it uses Savitch’s theorem to determinize a PSPACE algorithm and requires quadratic space. This paper presents a new algorithm for deciding NetKAT equivalence. This algorithm is based on finding bisimulations between finite automata constructed from NetKAT programs. We investigate the coalgebraic theory of NetKAT, generalize the notion of Brzozowski derivatives to NetKAT, develop efficient representations of NetKAT automata in terms of spines and sparse matrices, and discuss the highlights of our prototype implementation.
Antimirov and Mosses’s Rewrite System Revisited
2008
Cited by 6 (2 self)

Abstract:
Antimirov and Mosses proposed a rewrite system for deciding the equivalence of two (extended) regular expressions. In this paper we present a functional approach to that method, prove its correctness, and give some experimental comparative results. Besides an improved version of Antimirov and Mosses’s algorithm, we present a version using partial derivatives. Our preliminary results lead to the conclusion that, indeed, these methods are feasible and, generally, faster than the classical methods.
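The flavour of such a derivative-based decision procedure can be conveyed in a few lines. The sketch below uses plain Brzozowski derivatives with light simplification rather than the partial derivatives discussed in the paper, and the tuple encoding of regular expressions is invented for illustration; a complete implementation must additionally normalise alternatives modulo associativity, commutativity, and idempotence to guarantee termination:

```python
# Regexes: ('empty',) ∅, ('eps',) ε, ('sym', c), ('alt', r, s),
# ('cat', r, s), ('star', r).

def nullable(r):
    """Does r's language contain the empty word?"""
    tag = r[0]
    if tag in ('eps', 'star'):
        return True
    if tag in ('empty', 'sym'):
        return False
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])     # 'cat'

def alt(r, s):
    # smart constructors do light simplification to keep derivatives small
    if r == ('empty',):
        return s
    if s == ('empty',) or r == s:
        return r
    return ('alt', r, s)

def cat(r, s):
    if r == ('empty',) or s == ('empty',):
        return ('empty',)
    if r == ('eps',):
        return s
    if s == ('eps',):
        return r
    return ('cat', r, s)

def deriv(r, c):
    """Brzozowski derivative of r with respect to symbol c."""
    tag = r[0]
    if tag in ('eps', 'empty'):
        return ('empty',)
    if tag == 'sym':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return alt(deriv(r[1], c), deriv(r[2], c))
    if tag == 'star':
        return cat(deriv(r[1], c), r)
    d = cat(deriv(r[1], c), r[2])                # 'cat'
    return alt(d, deriv(r[2], c)) if nullable(r[1]) else d

def equiv(r, s, alphabet):
    """Languages differ iff some reachable derivative pair disagrees
    on nullability."""
    todo, seen = [(r, s)], set()
    while todo:
        pair = todo.pop()
        if pair in seen:
            continue
        a, b = pair
        if nullable(a) != nullable(b):
            return False
        seen.add(pair)
        for c in alphabet:
            todo.append((deriv(a, c), deriv(b, c)))
    return True
```

For instance, a* and ε | a·a* are recognised as equivalent over the alphabet {a}, since their derivatives by a coincide after simplification.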
Symbolic Algorithms for Language Equivalence and Kleene Algebra with Tests
Cited by 5 (0 self)

Abstract:
We first propose algorithms for checking language equivalence of finite automata over a large alphabet. We use symbolic automata, where the transition function is compactly represented using (multi-terminal) binary decision diagrams (BDDs). The key idea consists in computing a bisimulation by exploring reachable pairs symbolically, so as to avoid redundancies. This idea can be combined with already existing optimisations, and we show in particular a nice integration with the disjoint-sets forest data structure from Hopcroft and Karp’s standard algorithm. Then we consider Kleene algebra with tests (KAT), an algebraic theory that can be used for verification in various domains ranging from compiler optimisation to network programming analysis. This theory is decidable by reduction to language equivalence of automata on guarded strings, a particular kind of automata with exponentially large alphabets. We propose several methods for constructing symbolic automata out of KAT expressions, based either on Brzozowski’s derivatives or on standard automata constructions. All in all, this results in efficient algorithms for deciding equivalence of KAT expressions.
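The disjoint-sets forest mentioned above is the standard union-find structure: once two states have been merged into the same class, any later pair relating them can be skipped. A minimal sketch follows (illustrative, not the paper's implementation):

```python
class DisjointSets:
    """Union-find with path halving, as used to prune pairs in
    Hopcroft and Karp's equivalence-checking algorithm."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        """Return the representative of x's class."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Merge the classes of x and y; report whether they were distinct."""
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False   # already known equivalent: this pair can be skipped
        self.parent[rx] = ry
        return True
```

During the equivalence check, only pairs for which `union` returns True trigger further exploration, so the number of pairs that do real work is bounded by the number of states rather than by the number of reachable pairs.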
Automatic Sequences and Zip-Specifications
In Proc. Symp. on Logic in Computer Science (LICS 2012), IEEE Computer Society, 2013
Enhanced Coalgebraic Bisimulation
2013
Cited by 4 (3 self)

Abstract:
We present a systematic study of bisimulation-up-to techniques for coalgebras. This enhances the bisimulation proof method for a large class of state-based systems, including labelled transition systems but also stream systems and weighted automata. Our approach allows for compositional reasoning about the soundness of enhancements. Applications include the soundness of bisimulation up to bisimilarity, up to equivalence, and up to congruence. All in all, this gives a powerful and modular framework for simplified coinductive proofs of equivalence.
Expressibility in the Lambda Calculus with Letrec
2012
Cited by 3 (2 self)

Abstract:
We investigate the relationship between finite terms in λletrec, the lambda calculus with letrec, and the infinite lambda terms they express. As there are easy examples of infinite λ-terms that, intuitively, are not unfoldings of terms in λletrec, we consider the question: how can those infinite lambda terms be characterised that are λletrec-expressible in the sense that they can be obtained as infinite unfoldings of terms in λletrec? For ‘observing’ infinite λ-terms through repeated ‘experiments’ carried out at the head of the term, we introduce two rewrite systems (with rewrite relations) →reg and →reg+ that decompose the term structure and produce ‘generated subterms’ in two notions. Thereby the sort of the step can be observed as well as its target, a generated subterm. In both systems there are four sorts of decomposition steps: →λ-steps (decomposing a λ-abstraction), …
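The unfolding underlying λletrec-expressibility can be illustrated with a toy one-step unfold of an outermost letrec, in which each occurrence of the bound name in the body is replaced by its re-wrapped definition; the tuple encoding and helper names here are invented for illustration:

```python
# Terms: ('var', name), ('app', fun, arg), ('lam', name, body),
# ('letrec', name, definition, body).

def subst(t, name, repl):
    """Capture-naive substitution of repl for free occurrences of name."""
    tag = t[0]
    if tag == 'var':
        return repl if t[1] == name else t
    if tag == 'app':
        return ('app', subst(t[1], name, repl), subst(t[2], name, repl))
    if tag == 'lam':
        return t if t[1] == name else ('lam', t[1], subst(t[2], name, repl))
    if t[1] == name:           # a letrec rebinding name shadows it
        return t
    return ('letrec', t[1], subst(t[2], name, repl), subst(t[3], name, repl))

def unfold(t):
    """One unfolding step: letrec f = D in B  ->  B[f := letrec f = D in D]."""
    _, f, d, b = t
    return subst(b, f, ('letrec', f, d, d))

# letrec f = λx. f x in f  unfolds to  letrec f = λx. f x in λx. f x
# (with the recursion re-wrapped around the copied definition):
loop = ('letrec', 'f',
        ('lam', 'x', ('app', ('var', 'f'), ('var', 'x'))),
        ('var', 'f'))
```

Iterating `unfold` approximates the infinite λ-term the λletrec term expresses, which is the unfolding relation the characterisation question is about.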