Results 1–10 of 10
Principal Typings for Java-like Languages
In ACM Symp. on Principles of Programming Languages, 2004
"... The contribution of the paper is twofold. First, we define a general notion of type system equipped with an entailment relation between type environments; this generalisation serves as a pattern for instantiating type systems able to support separate compilation and interchecking of Javalike langua ..."
Abstract

Cited by 21 (13 self)
The contribution of the paper is twofold. First, we define a general notion of type system equipped with an entailment relation between type environments; this generalisation serves as a pattern for instantiating type systems able to support separate compilation and inter-checking of Java-like languages, and allows a formal definition of soundness and completeness of inter-checking w.r.t. global compilation. These properties are important in practice since they allow selective recompilation. In particular, we show that they are guaranteed when the type system has principal typings and provides a sound and complete entailment relation between type environments.
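As an orientation to the principal-typings idea the abstract relies on (a standard ML-style sketch, not taken from the paper): a typing pairs a type environment with a result type, and a principal typing is one that entails every other typing of the same term.

```latex
% Illustrative only. For the open term x\,y a principal typing is
\{\, x : \alpha \to \beta,\ y : \alpha \,\} \ \vdash\ x\,y : \beta
% and any other typing of x\,y, e.g.
\{\, x : \mathrm{int} \to \mathrm{int},\ y : \mathrm{int} \,\} \ \vdash\ x\,y : \mathrm{int}
% is entailed by it (here, by substituting for \alpha and \beta).
```

This "most general environment plus most general type" property is what lets an inferred typing stand in for an identifier's code during selective recompilation.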
Expansion: the Crucial Mechanism for Type Inference with Intersection Types: Survey and Explanation
In ITRS ’04, 2005
"... The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying ..."
Abstract

Cited by 17 (7 self)
The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying out compositional type inference. The fundamental idea of expansion is to be able to calculate the effect, on the final judgement of a typing derivation, of inserting a use of the intersection-introduction typing rule at some (possibly deeply nested) position, without actually needing to build the new derivation.
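A hedged sketch of the setting (textbook material, not the paper's own notation): intersection introduction lets different occurrences of a variable be used at different components of an intersection.

```latex
% Self-application, untypable with simple types, receives e.g.
\vdash\ \lambda x.\,x\,x \ :\ \big((\alpha \to \beta) \wedge \alpha\big) \to \beta
% Expansion computes, directly on such a final judgement, the effect of
% inserting an extra intersection introduction deep in the derivation,
% e.g. splitting a nested use of \alpha into \alpha_1 \wedge \alpha_2,
% without rebuilding the derivation itself.
```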
Type Inference with Expansion Variables and Intersection Types in System E and an Exact Correspondence with β-Reduction
In Proc. 6th Int’l Conf. on Principles & Practice of Declarative Programming
"... System E is a recently designed type system for the # calculus with intersection types and expansion variables. During automatic type inference, expansion variables allow postponing decisions about which nonsyntaxdriven typing rules to use until the right information is available and allow imple ..."
Abstract

Cited by 11 (4 self)
System E is a recently designed type system for the λ-calculus with intersection types and expansion variables. During automatic type inference, expansion variables allow postponing decisions about which non-syntax-driven typing rules to use until the right information is available, and allow implementing the choices via substitution.
Types, potency, and idempotency: why nonlinearity and amnesia make a type system work
In ICFP ’04: Proceedings of the Ninth ACM SIGPLAN International Conference on Functional Programming, pages 138–149, ACM, 2004
"... Useful type inference must be faster than normalization. Otherwise, you could check safety conditions by running the program. We analyze the relationship between bounds on normalization and type inference. We show how the success of type inference is fundamentally related to the amnesia of the type ..."
Abstract

Cited by 8 (1 self)
Useful type inference must be faster than normalization. Otherwise, you could check safety conditions by running the program. We analyze the relationship between bounds on normalization and type inference. We show how the success of type inference is fundamentally related to the amnesia of the type system: the nonlinearity by which all instances of a variable are constrained to have the same type. Recent work on intersection types has advocated their usefulness for static analysis and modular compilation. We analyze System I (and some instances of its descendant, System E), an intersection type system with a type inference algorithm. Because System I lacks idempotency, each occurrence of a variable requires a distinct type. Consequently, type inference is equivalent to normalization in every single case, and time bounds on type inference and normalization are identical. Similar relationships hold for other intersection type systems without idempotency. The analysis is founded on an investigation of the relationship between linear logic and intersection types. We show a lockstep correspondence between normalization and type inference. This correspondence shows the promise of intersection types to facilitate static analyses of varied granularity, but also poses an immense challenge: to add amnesia to such analyses without losing all of their benefits.
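A standard illustration of the "amnesia" being described (sketched here for orientation, not taken from the paper):

```latex
% Simple types force both occurrences of g to one type, so this is untypable:
\lambda g.\ \langle\, g\ 1,\ g\ \mathrm{true} \,\rangle
% Intersection types remember a separate type per occurrence:
\{\, g : (\mathrm{int} \to \mathrm{int}) \wedge (\mathrm{bool} \to \mathrm{bool}) \,\}
  \ \vdash\ \langle\, g\ 1,\ g\ \mathrm{true} \,\rangle \ :\ \mathrm{int} \times \mathrm{bool}
% Without idempotency (\tau \wedge \tau \neq \tau) this per-occurrence
% bookkeeping is exactly what ties inference cost to normalization cost.
```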
Rank 2 Intersection Types for Modules
, 2003
"... We propose a rank 2 intersection type system for a language of modules built on a core MLlike language. The principal typing property of the rank 2 intersection type system for the core language plays a crucial role in the design of the type system for the module language. We first consider a "plai ..."
Abstract

Cited by 7 (1 self)
We propose a rank 2 intersection type system for a language of modules built on a core ML-like language. The principal typing property of the rank 2 intersection type system for the core language plays a crucial role in the design of the type system for the module language. We first consider a "plain" notion of module, where a module is just a set of mutually recursive top-level definitions, and illustrate the notions of: module intra-checking (each module is typechecked in isolation and its interface, which is the set of typings of the defined identifiers, is inferred); interface inter-checking (when linking modules, typechecking is done just by looking at the interfaces); interface specialization (interface inter-checking may require specializing the typings listed in the interfaces); principal interfaces (the principal typing property for the type system of modules); and separate typechecking (looking at the code of the modules does not provide more type information than looking at their interfaces). Then we illustrate some limitations of the "plain" framework and extend the module language and the type system in order to overcome these limitations. The decidability of the system is shown by providing algorithms for the fundamental operations involved in module intra-checking and interface inter-checking.
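A sketch of what an inferred interface looks like (hypothetical module, ML-style types; the notation is illustrative, not the paper's):

```latex
% A "plain" module: mutually recursive top-level definitions.
M \;=\; \{\ \mathit{even} = \lambda n.\ \ldots\ \mathit{odd}\ \ldots,\qquad
           \mathit{odd} = \lambda n.\ \ldots\ \mathit{even}\ \ldots\ \}
% Intra-checking M in isolation infers its interface, the set of typings
% of the identifiers it defines:
\mathcal{I}(M) \;=\; \{\ \mathit{even} : \mathrm{int} \to \mathrm{bool},\quad
                        \mathit{odd} : \mathrm{int} \to \mathrm{bool}\ \}
% Linking then inter-checks interfaces only, never re-reading M's code.
```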
Strict Intersection Types for the Lambda Calculus
, 2010
"... This paper will show the usefulness and elegance of strict intersection types for the Lambda Calculus; these are strict in the sense that they are the representatives of equivalence classes of types in the BCDsystem [15]. We will focus on the essential intersection type assignment; this system is a ..."
Abstract

Cited by 6 (5 self)
This paper will show the usefulness and elegance of strict intersection types for the Lambda Calculus; these are strict in the sense that they are the representatives of equivalence classes of types in the BCD system [15]. We will focus on the essential intersection type assignment; this system is almost syntax directed, and we will show that all major properties known to hold for other intersection systems hold for it as well: the approximation theorem, the characterisation of (head/strong) normalisation, completeness of type assignment using filter semantics, strong normalisation for cut-elimination, and the principal pair property. In part, the proofs for these properties are new; we will briefly compare the essential system with other existing systems.
Rank-2 Intersection and Polymorphic Recursion
In TLCA ’05, volume 2841 of LNCS, 2005
"... Let # be a rank2 intersection type system. We say that a term is #simple (or just simple when the system # is clear from the context) if system # can prove that it has a simple type. In this paper we propose new typing rules and algorithms that are able to type recursive definitions that are n ..."
Abstract

Cited by 3 (0 self)
Let ⊢ be a rank-2 intersection type system. We say that a term is ⊢-simple (or just simple when the system ⊢ is clear from the context) if system ⊢ can prove that it has a simple type. In this paper we propose new typing rules and algorithms that are able to type recursive definitions that are not simple. To the best of our knowledge, previous algorithms for typing recursive definitions in the presence of rank-2 intersection types allow only simple recursive definitions to be typed. The proposed rules are also able to type interesting examples of polymorphic recursion (i.e., recursive definitions rec {x = e} where different occurrences of x in e are used with different types). Moreover, the underlying techniques do not depend on particulars of rank-2 intersection, so they can be applied to other type systems.
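The classic example of a recursive definition needing polymorphic recursion (standard material, sketched here for orientation): length over a nested datatype, where the recursive call occurs at a different instance of the defined identifier's type.

```latex
% Nested lists: \mathrm{Nested}\,\alpha \;=\; \epsilon \mid \alpha \times \mathrm{Nested}\,[\alpha]
\mathbf{rec}\ \{\ \mathit{len} = \lambda l.\ \mathbf{case}\ l\ \mathbf{of}\
      \epsilon \Rightarrow 0 \mid (x, t) \Rightarrow 1 + \mathit{len}\ t\ \}
% The definition has type \mathrm{Nested}\,\alpha \to \mathrm{int}, but the
% inner occurrence of len is used at \mathrm{Nested}\,[\alpha] \to \mathrm{int}:
% the two occurrences need different types, so the definition is not simple.
```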
Rank 2 Intersection Types for Modules (Version with conclusions and appendices of [11])
"... We propose a rank 2 intersection type system for a language of modules built on a core MLlike language. The principal typing property of the rank 2 intersection type system for the core language plays a crucial role in the design of the type system for the module language. We first consider a "plai ..."
Abstract

Cited by 3 (1 self)
We propose a rank 2 intersection type system for a language of modules built on a core ML-like language. The principal typing property of the rank 2 intersection type system for the core language plays a crucial role in the design of the type system for the module language. We first consider a "plain" notion of module, where a module is just a set of mutually recursive top-level definitions, and illustrate the notions of: module intra-checking (each module is typechecked in isolation and its interface, which is the set of typings of the defined identifiers, is inferred); interface inter-checking (when linking modules, typechecking is done just by looking at the interfaces); interface specialization (interface inter-checking may require specializing the typings listed in the interfaces); principal interfaces (the principal typing property for the type system of modules); and separate typechecking (looking at the code of the modules does not provide more type information than looking at their interfaces). Then we illustrate some limitations of the "plain" framework and extend the module language and the type system in order to overcome these limitations. The decidability of the system is shown by providing algorithms for the fundamental operations involved in module intra-checking and interface inter-checking.
Contents, 2005
"... A summary of the motivation and theory behind abstract interpretation, including the accumulating semantics, Galois connections and widening. A complete demonstration of the use of abstract interpretation to define a safe and optimal sign analysis in the context of a simple imperative language is pr ..."
Abstract

Cited by 1 (0 self)
A summary of the motivation and theory behind abstract interpretation, including the accumulating semantics, Galois connections, and widening. A complete demonstration of the use of abstract interpretation to define a safe and optimal sign analysis in the context of a simple imperative language is presented. In addition, an example of widening is described to improve the termination properties of an interval analysis of the same language. Keywords: semantics; program analysis
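As a concrete illustration of the sign-analysis idea this abstract summarises (a minimal sketch with made-up names; the survey's own development covers a full imperative language via Galois connections):

```python
# A minimal sign-analysis sketch: abstract values form the lattice
# BOT <= {NEG, ZERO, POS} <= TOP, with a sound abstract multiplication.
# All names here are illustrative, not taken from the survey itself.

BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    if n < 0:
        return NEG
    if n == 0:
        return ZERO
    return POS

def join(a, b):
    """Least upper bound in the sign lattice."""
    if a == BOT:
        return b
    if b == BOT:
        return a
    return a if a == b else TOP

def abs_mul(a, b):
    """Sound abstract multiplication: alpha(m*n) is approximated by
    abs_mul(alpha(m), alpha(n))."""
    if BOT in (a, b):
        return BOT
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Soundness spot-check against the concrete semantics:
for m in (-3, 0, 2):
    for n in (-1, 0, 7):
        assert abs_mul(alpha(m), alpha(n)) in (alpha(m * n), TOP)
```

For multiplication the sign abstraction loses no precision at all, which is one way to read the abstract's "safe and optimal": the analysis result is exactly the abstraction of the concrete result.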
Project funded by the European Community under the ‘Information Society Technologies’
"... this paper we show how they can be recast into algorithmic inference rules by applying a combination of standard techniques, among which constraint handling plays a central role. We then analyse the generated constraint sets and show that they share a common structure which can be simplified with a ..."
Abstract
In this paper we show how they can be recast into algorithmic inference rules by applying a combination of standard techniques, among which constraint handling plays a central role. We then analyse the generated constraint sets and show that they share a common structure which can be simplified with a specialised algorithm we present. As the constraint simplification procedure is mainly based on unification, we implemented the whole inference algorithm in a Prolog program, which we will comment upon.
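To make the unification step concrete (an illustrative Python sketch; the paper's actual implementation is a Prolog program, which gets unification built in):

```python
# Tiny first-order unification, the core of constraint simplification.
# Terms are either variables (plain strings like "?x") or (functor, args)
# tuples; constants are nullary functors like ("a", []). The representation
# and names are illustrative. No occurs check, as in plain Prolog.

def walk(t, subst):
    """Chase a variable through the substitution to its current binding."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a most general unifier extending subst, or None on clash."""
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if isinstance(a, str):          # unbound variable: bind it
            subst[a] = b
        elif isinstance(b, str):
            subst[b] = a
        elif a[0] == b[0] and len(a[1]) == len(b[1]):
            stack.extend(zip(a[1], b[1]))   # decompose matching functors
        else:                               # functor clash: no unifier
            return None
    return subst

# Unify f(?x, g(?y)) with f(a, g(b)):
s = unify(("f", ["?x", ("g", ["?y"])]),
          ("f", [("a", []), ("g", [("b", [])])]))
assert s["?x"] == ("a", []) and s["?y"] == ("b", [])
```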