Results 1–10 of 24
Extensional equivalence and singleton types
 ACM Transactions on Computational Logic
Abstract

Cited by 38 (8 self)
We study the λΠΣS≤ calculus, which contains singleton types S(M) classifying terms of base type provably equivalent to the term M. The system includes dependent types for pairs and functions (Σ and Π) and a subtyping relation induced by regarding singletons as subtypes of the base type. The decidability of type checking for this language is non-obvious, since to type check we must be able to determine equivalence of well-formed terms. But in the presence of singleton types, the provability of an equivalence judgment Γ ⊢ M1 ≡ M2 : A can depend both on the typing context Γ and on the particular type A at which M1 and M2 are compared. We show how to prove decidability of term equivalence, and hence of type checking, in λΠΣS≤ by exhibiting a type-directed algorithm for directly computing normal forms. The correctness of normalization is shown using an unusual variant of Kripke logical relations organized around sets; rather than defining a logical equivalence relation, we work directly with (subsets of) the corresponding equivalence classes. We then provide a more efficient algorithm for checking type equivalence without constructing normal forms. We also show that type checking, subtyping, and all other judgments of the system are decidable.
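Not part of the paper, but as a rough orientation: a singleton can be modeled extensionally in Lean as a subtype of the base type, with the subtype coercion playing the role of S(M) ≤ Nat. (The paper's contribution is precisely the intensional, type-directed equivalence algorithm that this naive extensional model sidesteps.)

```lean
-- Hypothetical sketch: S M classifies the naturals provably equal to M.
def S (M : Nat) : Type := { x : Nat // x = M }

-- The coercion back to the base type plays the role of S(M) ≤ Nat.
example (M : Nat) (s : S M) : Nat := s.val

-- Equivalence at a singleton type is trivial: any two inhabitants
-- project to equal naturals.
example (M : Nat) (s t : S M) : s.val = t.val :=
  s.property.trans t.property.symm
```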
A proof of strong normalisation using domain theory
 In LICS ’06
, 2006
Abstract

Cited by 14 (1 self)
U. Berger [11] significantly simplified Tait’s normalisation proof for bar recursion [27] (see also [9]), replacing Tait’s introduction of infinite terms by the construction of a domain having the property that a term is strongly normalizing if its semantics is. The goal of this paper is to show that, using ideas from the theory of intersection types [2, 6, 7, 21] and Martin-Löf’s domain interpretation of type theory [18], we can in turn simplify U. Berger’s argument in the construction of such a domain model. We think that our domain model can be used to give modular proofs of strong normalization for various type theories. As an example, we show in some detail how it can be used to prove strong normalization for Martin-Löf dependent type theory extended with bar recursion and with some form of proof-irrelevance.
Untyped algorithmic equality for Martin-Löf’s logical framework with surjective pairs (extended version)
, 2005
Abstract

Cited by 11 (4 self)
An untyped algorithm to test βη-equality for Martin-Löf’s Logical Framework with strong Σ-types is presented and proven complete using a model of partial equivalence relations between untyped terms.
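The paper's algorithm handles the full Logical Framework; purely as an illustrative toy (not the paper's algorithm), deciding equality of first-order pair terms up to projection (β) and surjective pairing (η) can be done by comparing normal forms:

```haskell
-- Hypothetical toy, not the paper's algorithm: equality of pair terms
-- up to beta (projections of pairs) and eta (surjective pairing).
data Tm = Var Int | Pair Tm Tm | Fst Tm | Snd Tm
  deriving (Eq, Show)

-- Normal form: reduce Fst/Snd of a pair (beta) and contract
-- Pair (Fst m) (Snd m) back to m (the surjective-pairing eta rule).
nf :: Tm -> Tm
nf (Pair a b) = case (nf a, nf b) of
  (Fst m, Snd m') | m == m' -> m
  (a', b')                  -> Pair a' b'
nf (Fst t) = case nf t of
  Pair a _ -> a
  t'       -> Fst t'
nf (Snd t) = case nf t of
  Pair _ b -> b
  t'       -> Snd t'
nf t = t

-- Two terms are equal iff their normal forms coincide.
eqTm :: Tm -> Tm -> Bool
eqTm m n = nf m == nf n
```

For instance, `eqTm (Pair (Fst (Var 0)) (Snd (Var 0))) (Var 0)` holds by the surjective-pairing rule.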
Working with Mathematical Structures in Type Theory
Abstract

Cited by 10 (3 self)
We address the problem of representing mathematical structures in a proof assistant which: 1) is based on a type theory with dependent types, telescopes and a computational version of Leibniz equality; 2) implements coercive subtyping, accepting multiple coherent paths between type families; 3) implements a restricted form of higher-order unification and type reconstruction. We show how to exploit these quite common features to reduce the “syntactic” gap between pen-and-paper and formalised algebra. However, to reach our goal we need to propose unification and type reconstruction heuristics that are slightly different from the ones usually implemented. We have implemented them in Matita.
Manifest fields and module mechanisms in intensional type theory
 In TYPES 08
, 2009
Abstract

Cited by 7 (5 self)
Manifest fields in a type of modules are shown to be expressible in intensional type theory without strong extensional equality rules. These intensional manifest fields are made available with the help of coercive subtyping. It is shown that, for both Σ-types and dependent record types, the with-clause for expressing manifest fields can be introduced by means of the intensional manifest fields. This provides not only a higher-order module mechanism with ML-style sharing, but also a powerful modelling mechanism in the formalisation and verification of OO-style program modules.
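Not from the paper, but the core idea can be sketched in Lean notation (names hypothetical): a manifest field such as `Monoid with carrier = Nat` is expressed intensionally by packaging a module with a propositional equation, instead of relying on an extensional equality rule:

```lean
-- Hypothetical sketch: a module type as a structure.
structure Monoid where
  carrier : Type
  unit    : carrier
  op      : carrier → carrier → carrier

-- "Monoid with carrier = Nat", with the manifest field expressed as a
-- propositional equation rather than an extensional equality rule.
def NatMonoid : Type 1 := { M : Monoid // M.carrier = Nat }
```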
A modular type-checking algorithm for type theory with singleton types and proof irrelevance
 In TLCA ’09, volume 5608 of LNCS
, 2009
A framework for defining logical frameworks
 University of Udine
, 2006
Abstract

Cited by 5 (2 self)
Verifying Haskell programs by combining testing and proving
 In Proceedings of the Third International Conference on Quality Software
Abstract

Cited by 4 (0 self)
We propose a method for improving confidence in the correctness of Haskell programs by combining testing and proving. Testing is used for debugging programs and specifications before a costly proof attempt. During a proof development, testing also quickly eliminates wrong conjectures. Proving helps us to decompose a testing task in a way that is guaranteed to be correct. To demonstrate the method we have extended the Agda/Alfa proof assistant for dependent type theory with a tool for random testing. As an example we show how the correctness of a BDD algorithm written in Haskell is verified by testing properties of component functions. We also discuss faithful translations from Haskell to type theory.
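The Agda/Alfa tool itself is not reproduced here; as a stand-in, a minimal Haskell sketch of the test-before-prove workflow, using exhaustive enumeration of small inputs in place of the paper's random testing (all names hypothetical):

```haskell
-- Hypothetical sketch: check a conjecture on all small inputs before
-- investing in a proof attempt. Exhaustive enumeration stands in for
-- the random testing described in the paper.
smallLists :: Int -> [[Int]]
smallLists bound = concatMap listsOfLength [0 .. bound]
  where
    listsOfLength 0 = [[]]
    listsOfLength k = [ x : xs | x <- [0 .. 2], xs <- listsOfLength (k - 1) ]

-- A conjecture worth proving: reversing twice is the identity.
propRevRev :: [Int] -> Bool
propRevRev xs = reverse (reverse xs) == xs

-- A wrong conjecture: it is rejected cheaply, before any proof effort.
propBadHead :: [Int] -> Bool
propBadHead xs = null xs || head xs == 0

-- Run a property over every list of length at most 4.
checkAll :: ([Int] -> Bool) -> Bool
checkAll prop = all prop (smallLists 4)
```

Here `checkAll propRevRev` succeeds, while `checkAll propBadHead` fails on a small counterexample such as `[1]`, eliminating the conjecture before a proof is attempted.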
Dependent Record Types Revisited
Abstract

Cited by 3 (3 self)
Dependently-typed records have been studied in type theory in several previous research attempts, with applications to the study of module mechanisms for both programming and proof languages. Recently, the author has proposed an improved formulation of dependent record types in the context of studying manifest fields of module types. In this paper, we study this formulation in more detail by considering universes of record types and some application examples. In particular, we show that record types provide a more powerful mechanism (than record kinds) for expressing module types, and additional useful means (as compared with Σ-types) in applications.
Towards a formal view of corrective feedback
Abstract

Cited by 2 (1 self)
This paper introduces a formal view of the semantics and pragmatics of corrective feedback in dialogues between adults and children. The goal of this research is to give a formal account of language coordination in dialogue, and of semantic coordination in particular. Accounting for semantic coordination requires (1) a semantics, i.e. an architecture allowing for dynamic meanings and meaning updates as results of dialogue moves, and (2) a pragmatics, describing the dialogue moves involved in semantic coordination. We illustrate the general approach by applying it to some examples from the literature on corrective feedback, and provide a fairly detailed discussion of one example using TTR (Type Theory with Records) to formalize concepts. TTR provides an analysis of linguistic content which is structured so as to allow modification and similarity metrics, and a framework for describing dialogue moves and resulting updates to linguistic resources.