Results 1 – 10 of 14
The complexity of type inference for higher-order typed lambda calculi
In Proc. 18th ACM Symposium on the Principles of Programming Languages, 1991
Abstract

Cited by 31 (12 self)
We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, ..., Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is nonelementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk. The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges. These results are the first nontrivial lower bounds on type inference for the Girard/Reynolds ...
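The abstract's lower bounds rest on a unification simulation, and unification is also what separates simple typing from F2 typability. As an illustrative aside (a minimal Python sketch with a hypothetical term encoding, not the paper's construction): typing the self-application λx. x x forces the constraint a = a → b, which first-order unification rejects via the occurs check, while F2 types the term by giving x the polymorphic type ∀a. a → a.

```python
def walk(t, subst):
    # Follow variable bindings until reaching an unbound term.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # True if variable v occurs inside term t (under the substitution).
    t = walk(t, subst)
    if t == v:
        return True
    return not isinstance(t, str) and (occurs(v, t[1], subst) or occurs(v, t[2], subst))

def unify(t1, t2, subst):
    # Types are strings (variables) or ('->', dom, cod) tuples.
    # Returns an extended substitution, or None on failure.
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    s = unify(t1[1], t2[1], subst)
    return None if s is None else unify(t1[2], t2[2], s)

# Typing (x x) with x : a requires a = a -> b: the occurs check fails,
# so lambda x. x x has no simple type.
print(unify('a', ('->', 'a', 'b'), {}))   # None
# An ordinary constraint such as a = b -> c unifies fine.
print(unify('a', ('->', 'b', 'c'), {}))   # {'a': ('->', 'b', 'c')}
```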
Relating Typability and Expressiveness in Finite-Rank Intersection Type Systems (Extended Abstract)
In Proc. 1999 Int’l Conf. Functional Programming, 1999
Abstract

Cited by 22 (9 self)
We investigate finite-rank intersection type systems, analyzing the complexity of their type inference problems and their relation to the problem of recognizing semantically equivalent terms. Intersection types allow something of type T1 ∧ T2 to be used in some places at type T1 and in other places at type T2. A finite-rank intersection type system bounds how deeply the ∧ can appear in type expressions. Such type systems enjoy strong normalization, subject reduction, and computable type inference, and they support a pragmatics for implementing parametric polymorphism. As a consequence, they provide a conceptually simple and tractable alternative to the impredicative polymorphism of System F and its extensions, while typing many more programs than the Hindley-Milner type system found in ML and Haskell. While type inference is computable at every rank, we show that its complexity grows exponentially as rank increases. Let K(0, n) = n and K(t + 1, n) = 2^K(t, n); we prove that recognizing the pure lambda terms of size n that are typable at rank k is complete for DTIME[K(k − 1, n)]. We then consider the problem of deciding whether two lambda terms typable at rank k have the same normal form, generalizing a well-known result of Statman from simple types to finite-rank intersection types. ...
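The tower function governing the rank-indexed bounds is easy to compute directly. A short Python sketch of K as defined in the abstract, with its values at low ranks (the completeness result places rank-k typability in DTIME[K(k − 1, n)]):

```python
def K(t, n):
    """K(0, n) = n; K(t + 1, n) = 2 ** K(t, n): a height-t tower of 2s topped by n."""
    return n if t == 0 else 2 ** K(t - 1, n)

print(K(0, 5))   # 5
print(K(1, 5))   # 2**5 = 32
print(K(2, 5))   # 2**32 = 4294967296
print(K(3, 2))   # 2**(2**(2**2)) = 65536
```

Even at rank 3 on inputs of size 2 the bound is already 65536, which is the sense in which inference complexity "grows exponentially as rank increases".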
COMPLEXITY HIERARCHIES BEYOND ELEMENTARY
Abstract

Cited by 12 (4 self)
We introduce a hierarchy of fast-growing complexity classes and show its suitability for completeness statements of many nonelementary problems. This hierarchy allows the classification of many decision problems with a nonelementary complexity, which occur naturally in logic, combinatorics, formal languages, verification, etc., with complexities ranging from simple towers of exponentials to Ackermannian and beyond.
Upper Bounds for Standardizations and an Application
The Journal of Symbolic Logic, 1996
Abstract

Cited by 9 (1 self)
We first present a new proof for the standardization theorem, a fundamental theorem in the λ-calculus. Since our proof is largely built upon structural induction on lambda terms, we can extract some bounds for the number of β-reduction steps in the standard β-reduction sequences obtained from transforming any given β-reduction sequences. This result sharpens the standardization theorem and establishes a link between lazy and eager evaluation orders in the context of computational complexity. As an application, we establish a super-exponential bound for the number of β-reduction steps in β-reduction sequences from any given simply typed terms.
1 Introduction
The standardization theorem of Curry and Feys [CF58] is a very useful result, stating that if u reduces to v for terms u and v, then there is a standard β-reduction from u to v. Using this theorem, we can readily prove the normalization theorem, i.e., a term has a normal form if and only if the leftmost β-reduction sequence ...
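The leftmost reduction sequence mentioned above can be illustrated concretely. Below is a hedged Python sketch of a normal-order (leftmost-outermost) β-reducer that counts reduction steps; the tuple encoding of terms is an assumption of this sketch, and substitution is deliberately naive (no capture avoidance), which is safe for the distinctly named closed example shown.

```python
def subst(t, x, s):
    # Naive substitution t[x := s]; assumes no variable capture can occur.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                    # outermost redex comes first
            return subst(f[2], f[1], a)
        r = step(f)                          # then leftmost inside the function
        if r is not None:
            return ('app', r, a)
        r = step(a)                          # then inside the argument
        return ('app', f, r) if r is not None else None
    if tag == 'lam':
        r = step(t[2])
        return ('lam', t[1], r) if r is not None else None
    return None

def normalize(t, limit=1000):
    # Iterate leftmost steps to normal form, counting beta steps.
    steps = 0
    while steps < limit:
        r = step(t)
        if r is None:
            return t, steps
        t, steps = r, steps + 1
    raise RuntimeError('step limit reached')

I = ('lam', 'x', ('var', 'x'))
term = ('app', ('app', I, I), ('var', 'y'))  # (I I) y
print(normalize(term))                       # (('var', 'y'), 2)
```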
Perpetual Reductions in λ-Calculus
1999
Abstract

Cited by 8 (0 self)
This paper surveys a part of the theory of β-reduction in λ-calculus which might aptly be called perpetual reductions. The theory is concerned with perpetual reduction strategies, i.e., reduction strategies that compute infinite reduction paths from λ-terms (when possible), and with perpetual redexes, i.e., redexes whose contraction in λ-terms preserves the possibility (when present) of infinite reduction paths. The survey not only recasts classical theorems in a unified setting, but also offers new results, proofs, and techniques, as well as a number of applications to problems in λ-calculus and type theory.
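The perpetual-redex idea can be seen on a standard example (a Python sketch with an assumed tuple encoding, not drawn from the survey): in (λx.y) Ω with Ω = (λx.x x)(λx.x x), contracting the head redex erases Ω and terminates in one step, while contracting the redex inside the argument is a perpetual choice, since Ω reduces to itself and so the infinite path is preserved.

```python
def subst(t, x, s):
    # Naive substitution t[x := s]; safe here since the terms are closed
    # and distinctly named, so no variable capture can occur.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

D = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
OMEGA = ('app', D, D)                        # (lambda x. x x)(lambda x. x x)
term = ('app', ('lam', 'x', ('var', 'y')), OMEGA)

# Contracting the head redex erases OMEGA: the term normalizes to y.
head = term[1]
print(subst(head[2], head[1], term[2]))      # ('var', 'y')

# The perpetual choice contracts inside the argument instead; OMEGA steps
# to itself, so this strategy reproduces the term forever.
print(subst(D[2], D[1], D) == OMEGA)         # True
```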
Extracting Herbrand Disjunctions by Functional Interpretation
Abstract

Cited by 4 (2 self)
Carrying out a suggestion by Kreisel, we adapt Gödel’s functional interpretation to ordinary first-order predicate logic (PL) and thus devise an algorithm to extract Herbrand terms from PL-proofs. The extraction is carried out in an extension of PL to higher types. The algorithm consists of two main steps: first we extract a functional realizer, next we compute the β-normal form of the realizer, from which the Herbrand terms can be read off. Even though the extraction is carried out in the extended language, the terms are ordinary PL-terms. In contrast to approaches to Herbrand’s theorem based on cut elimination or ε-elimination, this extraction technique is, except for the normalization step, of low polynomial complexity, fully modular, and furthermore allows an analysis of the structure of the Herbrand terms, in the spirit of Kreisel ([13]), already prior to the normalization step. It is expected that the implementation of functional interpretation in Schwichtenberg’s MINLOG system can be adapted to yield an efficient Herbrand-term extraction tool.
P = NP, up to sharing
Abstract

Cited by 1 (0 self)
We prove that we may compute the normal form of each term of the simply typed λ-calculus in a polynomial number of sharable reductions (where the notion of sharing is Lévy's "optimal" one). As a simple corollary, we get that P = NP "up to sharing", i.e. up to the computational overhead due to sharing.
1 Introduction
We prove that we may compute the normal form of each term of the simply typed λ-calculus (up to η-equivalence) in a polynomial number of sharable reductions. The notion of sharing is Lévy's one [Le78], commonly known as "optimal" sharing. The general idea (see Section 1.1 for the formal definition) is to formalize duplication of redexes as residuals modulo permutations. In particular, a redex u with history σ (notation σu) is a copy of a redex v with history ρ iff ρv ≤ σu (i.e., there exists τ such that σ = ρτ up to permutation equivalence, and u is a residual of v after τ). The family relation is then the symmetric and transitive closure of the copy relation. ...
Filter models: non-idempotent intersection types, orthogonality and polymorphism
Abstract

Cited by 1 (1 self)
This paper revisits models of typed λ-calculus based on filters of intersection types: by using non-idempotent intersections, we simplify a methodology that produces modular proofs of strong normalisation based on filter models. Non-idempotent intersections provide a decreasing measure proving a key termination property, simpler than the reducibility techniques used with idempotent intersections. Such filter models are shown to be captured by orthogonality techniques: we formalise an abstract notion of orthogonality model inspired by classical realisability, and express a filter model as one of its instances, along with two term models (one of which captures a now common technique for strong normalisation). Applying the above range of model constructions to Curry-style System F describes at different levels of detail how the infinite polymorphism of System F can systematically be reduced to the finite polymorphism of intersection types.
Least Upper Bounds on the Size of Confluence and Church-Rosser Diagrams in Term Rewriting and λ-Calculus
Abstract
We study confluence and the Church-Rosser property in term rewriting and λ-calculus with explicit bounds on term sizes and reduction lengths. Given a system R, we are interested in the lengths of the reductions in the smallest valleys t →* s' *← t' expressed as a function: for confluence, a function vsR(m, n) where the valleys are for peaks t *← s →* t' with s of size at most m and the reductions of maximum length n; and for the Church-Rosser property, a function cvsR(m, n) where the valleys are for conversions t ↔* t' with t and t' of size at most m and the conversion of maximum length n. For confluent term rewriting systems (TRSs), we prove that vsR is a total computable function, and for linear such systems that cvsR is a total computable function. Conversely, we show that every total computable function is a lower bound on the functions vsR(m, n) and cvsR(m, n) for some TRS R: in particular, we show that for every total computable function φ: N → N there is a TRS R with a single term s such that vsR(s, n) ≥ φ(n) and cvsR(n, n) ≥ φ(n) for all n. For orthogonal TRSs R we prove that there is a constant k such that (a) vsR(m, n) is bounded from above by a function exponential in k and (b) cvsR(m, n) is bounded from above by a function in the fourth level of the Grzegorczyk hierarchy. Similarly, for λ-calculus, we show that vsR(m, n) is bounded from above by a function in the fourth level of the Grzegorczyk hierarchy.
LOGIC & COMPUTATION 38
1991
Abstract
We examine the problem of finding fully abstract translations between programming languages, i.e., translations that preserve code equivalence and non-equivalence. We present three examples of fully abstract translations: one from call-by-value to lazy PCF, one from call-by-name to call-by-value PCF, and one from lazy to call-by-value PCF. The translations yield upper and lower bounds on decision procedures for proving equivalences of code. We finally define a notion of "functional translation" that captures the essence of the proofs of full abstraction, and show that some languages cannot be translated into others.