Results 1–10 of 17
Type Inference with Polymorphic Recursion
 Transactions on Programming Languages and Systems, 1991
Abstract

Cited by 135 (0 self)
The Damas-Milner Calculus is the typed λ-calculus underlying the type system for ML and several other strongly typed polymorphic functional languages such as Miranda and Haskell. Mycroft has extended its problematic monomorphic typing rule for recursive definitions with a polymorphic typing rule. He proved the resulting type system, which we call the Milner-Mycroft Calculus, sound with respect to Milner's semantics, and showed that it preserves the principal typing property of the Damas-Milner Calculus. The extension is of practical significance in typed logic programming languages and, more generally, in any language with (mutually) recursive definitions. In this paper we show that the type inference problem for the Milner-Mycroft Calculus is log-space equivalent to semi-unification, the problem of solving subsumption inequations between first-order terms. This result has been proved independently by Kfoury et al. In connection with the recently established undecidability of semi-unification this implies that typability in the Milner-Mycroft Calculus is undecidable. We present some reasons why type inference with polymorphic recursion appears to be practical despite its undecidability. This also sheds some light on the observed practicality of ML.
Efficient Type Inference for Higher-Order Binding-Time Analysis
 In Functional Programming and Computer Architecture, 1991
Abstract

Cited by 91 (4 self)
Binding-time analysis determines when variables and expressions in a program can be bound to their values, distinguishing between early (compile-time) and late (run-time) binding. Binding-time information can be used by compilers to produce more efficient target programs by partially evaluating programs at compile-time. Binding-time analysis has been formulated in abstract interpretation contexts and more recently in a type-theoretic setting. In a type-theoretic setting binding-time analysis is a type inference problem: the problem of inferring a completion of a λ-term e with binding-time annotations such that e satisfies the typing rules. Nielson and Nielson and Schmidt have shown that every simply typed λ-term has a unique completion ê that minimizes late binding in TML, a monomorphic type system with explicit binding-time annotations, and they present exponential-time algorithms for computing such minimal completions. Gomard proves the same results for a variant of his two-level λ-calculus without a so-called "lifting" rule. He presents another algorithm for inferring completions in this somewhat restricted type system and states that it can be implemented in time O(n³). He conjectures that the completions computed are minimal.
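The lattice view behind minimal completions can be sketched in a few lines of Python. This is a toy expression language with illustrative constructors, not TML itself: binding times form a two-point order static < dynamic, and the least completion gives each expression the join of its operands' binding times.

```python
# Binding times as a two-point lattice: 0 = static (early), 1 = dynamic (late).
STATIC, DYNAMIC = 0, 1

def bt(expr, env):
    """Least binding time of a tiny expression language:
    ('lit', n) | ('var', x) | ('add', e1, e2). Names are illustrative."""
    kind = expr[0]
    if kind == "lit":
        return STATIC               # literals are always known early
    if kind == "var":
        return env[expr[1]]         # variables carry their declared binding time
    if kind == "add":
        # an operation is late-bound iff either operand is
        return max(bt(expr[1], env), bt(expr[2], env))
    raise ValueError(kind)

env = {"n": DYNAMIC}                                # 'n' is a run-time input
print(bt(("add", ("lit", 1), ("var", "n")), env))   # 1 (dynamic)
print(bt(("add", ("lit", 1), ("lit", 2)), env))     # 0 (static)
```

Taking the maximum (join) at each operator is what makes the completion least: nothing is marked dynamic unless a dynamic input forces it.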
A direct algorithm for type inference in the rank-2 fragment of the second-order λ-calculus
, 1993
Abstract

Cited by 78 (14 self)
We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k ≥ 3 of this stratification. While it was already known that typability is decidable at rank 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show how to use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is a simple solution procedure for acyclic semi-unification.
Polymorphic Recursion and Subtype Qualifications: Polymorphic Binding-Time Analysis in Polynomial Time
 Static Analysis, Second International Symposium, number 983 in Lecture
Abstract

Cited by 24 (1 self)
The combination of parametric polymorphism, subtyping extended to qualified and polymorphic types, and polymorphic recursion is useful in standard type inference and gives expressive type-based program analyses, but raises difficult algorithmic problems. In a program analysis context we show how Mycroft's iterative method of computing principal types for a type system with polymorphic recursion can be generalized and adapted to work in a setting with subtyping. This not only yields a proof of existence of principal types (most general properties), but also an algorithm for computing them. The punchline of the development is that a very simple modification of the basic algorithm reduces its computational complexity from exponential time to polynomial time relative to the size of the given, explicitly typed program. This solves the open problem of finding an inference algorithm for polymorphic binding-time analysis [7].
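Mycroft's iterative method, viewed very abstractly, is a fixed-point iteration: re-analyze the recursive definitions under their currently assumed properties until nothing changes. The Python sketch below shows only that shape, on an invented toy instance (the names and rules are ours, not the paper's analysis); termination assumes a finite lattice.

```python
def fixpoint(refine, start):
    """Iterate refine until a fixed point is reached (finite lattice assumed)."""
    cur = start
    while True:
        nxt = refine(cur)
        if nxt == cur:
            return cur
        cur = nxt

# Toy instance: binding times (0 = static, 1 = dynamic) of two mutually
# recursive definitions f and g, where f's result is dynamic if g's is,
# and g depends on a dynamic input.
def refine(env):
    return {"f": max(env["f"], env["g"]), "g": max(env["g"], 1)}

print(fixpoint(refine, {"f": 0, "g": 0}))  # {'f': 1, 'g': 1}
```

Starting from the bottom of the lattice and refining monotonically is what yields least (most general) solutions; the paper's contribution is making this iteration polynomial-time in the presence of subtyping.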
Sound, Complete and Scalable Path-Sensitive Analysis
Abstract

Cited by 17 (3 self)
We present a new, precise technique for fully path- and context-sensitive program analysis. Our technique exploits two observations: First, using quantified, recursive formulas, path- and context-sensitive conditions for many program properties can be expressed exactly. To compute a closed-form solution to such recursive constraints, we differentiate between observable and unobservable variables, the latter of which are existentially quantified in our approach. Using the insight that unobservable variables can be eliminated outside a certain scope, our technique computes satisfiability- and validity-preserving closed-form solutions to the original recursive constraints. We prove the solution is as precise as the original system for answering may and must queries as well as being small in practice, allowing our technique to scale to the entire Linux kernel, a program with over 6 million lines of code.
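The core move of existentially quantifying away unobservable variables can be illustrated on plain propositional conditions. This is a hedged sketch: the variable names and the Shannon-expansion encoding are ours, not the paper's constraint language.

```python
# Eliminate an unobservable variable v from a condition f by existential
# quantification: (exists v. f) == f[v=True] or f[v=False].
# Conditions are modeled as Python predicates over an assignment dict.
def eliminate(v, f):
    return lambda env: f({**env, v: True}) or f({**env, v: False})

# A path condition over an observable 'ret' and an unobservable 'tmp'
# (hypothetical names for illustration):
cond = lambda env: env["ret"] == 0 and env["tmp"]

obs = eliminate("tmp", cond)   # closed form over observables only
print(obs({"ret": 0}))  # True  -- satisfiable for some value of tmp
print(obs({"ret": 1}))  # False -- unsatisfiable for every value of tmp
```

The disjunction of the two cofactors preserves satisfiability over the observable variables, which is the property the paper needs for answering may/must queries after elimination.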
Fast left-linear semi-unification
 In Proc. Int'l. Conf. on Computing and Information, 1990
Abstract

Cited by 6 (2 self)
Semi-unification is a generalization of both unification and matching with applications in proof theory, term rewriting systems, polymorphic type inference, and natural language processing. It is the problem of solving a set of term inequalities M1 ≤ N1, ..., Mk ≤ Nk, where ≤ is interpreted as the subsumption preordering on (first-order) terms. Whereas the general problem has recently been shown to be undecidable, several special cases are decidable. Kfoury, Tiuryn, and Urzyczyn proved that left-linear semi-unification (LLSU) is decidable by giving an exponential-time decision procedure. We improve their result as follows.
1. We present a generic polynomial-time algorithm L1 for LLSU, which shows that LLSU is in P.
2. We show that L1 can be implemented in time O(n³) by using a fast dynamic transitive closure algorithm.
3. We prove that LLSU is P-complete under log-space reductions, thus giving evidence that there are no fast (NC-class) parallel algorithms for LLSU.
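The subsumption preorder ≤ can be made concrete with a small matcher: M ≤ N holds iff some substitution s satisfies s(M) = N. Deciding a single such inequation is matching; semi-unification additionally asks for one substitution σ under which every instantiated inequation σ(Mi) ≤ σ(Ni) holds. The Python below is a sketch with an ad-hoc term encoding (a variable is a string, an application is a tuple (symbol, arg, ...)), not the paper's algorithm L1.

```python
def match(m, n, subst=None):
    """Return a substitution s with s(m) = n, or None if m <= n fails."""
    s = dict(subst or {})
    if isinstance(m, str):                 # m is a variable
        if m in s:
            return s if s[m] == n else None   # must agree with earlier binding
        s[m] = n
        return s
    # m is an application: n must be one too, with same symbol and arity
    if isinstance(n, str) or m[0] != n[0] or len(m) != len(n):
        return None
    for a, b in zip(m[1:], n[1:]):
        s = match(a, b, s)
        if s is None:
            return None
    return s

# f(x, x) <= f(a, a): succeeds with x bound to the constant a
print(match(("f", "x", "x"), ("f", ("a",), ("a",))))  # {'x': ('a',)}
# f(x, x) <= f(a, b): fails, x cannot be both a and b
print(match(("f", "x", "x"), ("f", ("a",), ("b",))))  # None
```

Left-linearity (each variable occurring at most once on the left-hand sides) is exactly what removes the "must agree with earlier binding" interactions across inequations that make the general problem undecidable.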
A GPU Implementation of Inclusion-based Points-to Analysis
Abstract

Cited by 5 (3 self)
Graphics Processing Units (GPUs) have emerged as powerful accelerators for many regular algorithms that operate on dense arrays and matrices. In contrast, we know relatively little about using GPUs to accelerate highly irregular algorithms that operate on pointer-based data structures such as graphs. For the most part, research has focused on GPU implementations of graph analysis algorithms that do not modify the structure of the graph, such as algorithms for breadth-first search and strongly connected components. In this paper, we describe a high-performance GPU implementation of an important graph algorithm used in compilers such as gcc and LLVM: Andersen-style inclusion-based points-to analysis. This algorithm is challenging to parallelize effectively on GPUs because it makes extensive modifications to the structure of the underlying graph and performs relatively little computation. In spite of this, our program, when executed on a 14 Streaming Multiprocessor GPU, achieves an average speedup of 7x compared to a sequential CPU implementation and outperforms a parallel implementation of the same algorithm running on 16 CPU cores. Our implementation provides general insights into how to produce high-performance GPU implementations of graph algorithms, and it highlights key differences between optimizing parallel programs for multicore CPUs and for GPUs.
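For contrast with the GPU version, a naive sequential Andersen-style solver over just address-of and copy constraints fits in a few lines of Python. This is a reference sketch only: a full inclusion-based analysis (including the one in the paper) also handles load and store constraints, which add edges during solving.

```python
from collections import defaultdict

def andersen(addr_of, copy):
    """Solve inclusion constraints to a fixed point.
    addr_of: list of (p, x) for p = &x  (x is in pts(p))
    copy:    list of (p, q) for p = q   (pts(p) includes pts(q))"""
    pts = defaultdict(set)
    for p, x in addr_of:
        pts[p].add(x)
    changed = True
    while changed:                  # iterate until no set grows
        changed = False
        for p, q in copy:
            new = pts[q] - pts[p]
            if new:
                pts[p] |= new
                changed = True
    return dict(pts)

# p = &x; q = p; r = q  ==>  p, q, r may all point to x
pts = andersen([("p", "x")], [("q", "p"), ("r", "q")])
print(pts["r"])  # {'x'}
```

The irregularity the paper wrestles with is visible even here: the work per iteration is set unions over a mutating graph of inclusion edges, with little arithmetic to amortize memory traffic.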
Polymorphic Type Checking by Interpretation of Code
, 1992
Abstract

Cited by 1 (0 self)
The type system of most modern functional programming languages is based on Milner's polymorphism. A compiler or interpreter usually checks (or infers) the types of functions and other values by directly inspecting the source code of a program. Here, another approach is taken: the program is first translated into code for a stack machine, and then a non-standard interpreter applied to this code checks (or infers) the types of the corresponding values. This can be seen as an abstract interpretation of the object code of the program.
1 Introduction
In the early days of Functional Programming in the 1960s, functional programming languages did not have any proper concept of type; they (Lisp, ISWIM [10]) were typeless. Classical, monomorphic type systems are restrictive in the sense that they only support the solution of concrete problems, but not problem schemes. But it is characteristic for the style of Functional Programming to solve problems in an abstract way, and thus a monomor...
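The idea of type checking as a non-standard interpretation of stack code can be sketched as follows; the instruction set and type language here are illustrative, not the machine used in the paper.

```python
# A non-standard interpreter that runs stack code over *types* instead of
# values: each instruction checks and transforms a stack of type names.
def type_check(code):
    """Return the final type stack, or None on a type error."""
    stack = []
    for op, *args in code:
        if op == "push_int":
            stack.append("int")
        elif op == "push_bool":
            stack.append("bool")
        elif op == "add":            # add : int * int -> int
            if stack[-2:] != ["int", "int"]:
                return None
            del stack[-2:]
            stack.append("int")
        elif op == "not":            # not : bool -> bool
            if stack[-1:] != ["bool"]:
                return None
        else:
            return None              # unknown instruction
    return stack

print(type_check([("push_int", 1), ("push_int", 2), ("add",)]))     # ['int']
print(type_check([("push_int", 1), ("push_bool", True), ("add",)])) # None
```

Running the same machine over a type domain instead of a value domain is exactly the abstract-interpretation reading: the concrete stack of values is abstracted to a stack of types.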
Weak Subsumption Constraints for Type Diagnosis: An Incremental Algorithm (Extended Abstract)
 In Joint COMPULOGNET/ELSNET/EAGLES Workshop on Computational Logic for Natural Language Processing, 1995
Abstract

Cited by 1 (1 self)
Martin Müller, Joachim Niehren. German Research Center for Artificial Intelligence (DFKI), Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany. {mmueller,niehren}@dfki.uni-sb.de. March 15, 1995. We introduce constraints necessary for type checking a higher-order concurrent constraint language, and solve them with an incremental algorithm. Our constraint system extends rational unification by constraints x ≤ y saying that "x has at least the structure of y", modelled by a weak instance relation between trees. This notion of instance has been carefully chosen to be weaker than the usual one, which renders semi-unification undecidable. Semi-unification has more than once served to link unification problems arising from type inference and those considered in computational linguistics. Just as polymorphic recursion corresponds to subsumption through the semi-unification problem, our type constraint problem corresponds to weak subsumption of feature graphs in linguistics. The decida...
Documentation for polyrec_sml: An Extension of SML With Typechecking For Polymorphic Recursion
, 1995
Abstract

Cited by 1 (1 self)
this documentation. The fault for remaining errors remains with the author. The implementation described here was created with the support of DFG project 'Semiunifikation' Le 788/12.