Results 1–10 of 40
Equational Problems and Disunification
Journal of Symbolic Computation, 1989
Cited by 106 (9 self)
Roughly speaking, an equational problem is a first-order formula whose only predicate symbol is =. We propose some rules for the transformation of equational problems and study their correctness in various models. Then, we give completeness results with respect to some “simple” problems called solved forms. Such completeness results still hold when adding some control which moreover ensures termination. The termination proofs are given for a “weak” control and thus hold for the (large) class of algorithms obtained by restricting the scope of the rules. Finally, it must be noted that a byproduct of our method is a decision procedure for the validity in the Herbrand Universe of any
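The rule-based transformation described above can be illustrated with a toy solver that applies the classical deletion, orientation, decomposition, clash, and variable-elimination rules to a conjunction of equations, then checks each disequation against the resulting solved form. This is a minimal sketch with illustrative names, not the paper's rule system, and it omits the occurs check:

```python
def is_var(t):
    # variables are plain strings like "x"; other terms are tuples
    # ("f", t1, ..., tn), with constants as 1-tuples like ("a",)
    return isinstance(t, str)

def subst(t, s):
    """Apply substitution s (dict: var -> term) to term t."""
    if is_var(t):
        return s.get(t, t)
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

def solve(equations, disequations):
    """Return a substitution solving all equations and satisfying all
    disequations, or None if the problem is unsolvable.
    NOTE: no occurs check, so cyclic problems are not handled."""
    s = {}
    eqs = list(equations)
    while eqs:
        l, r = eqs.pop()
        l, r = subst(l, s), subst(r, s)
        if l == r:
            continue                        # deletion: trivial equation
        if is_var(l):
            s = {v: subst(t, {l: r}) for v, t in s.items()}
            s[l] = r                        # variable elimination
        elif is_var(r):
            eqs.append((r, l))              # orientation
        elif l[0] == r[0] and len(l) == len(r):
            eqs.extend(zip(l[1:], r[1:]))   # decomposition: f(..) = f(..)
        else:
            return None                     # clash: distinct head symbols
    for l, r in disequations:
        if subst(l, s) == subst(r, s):
            return None                     # disequation violated
    return s

# f(x, y) = f(a, b)  together with  x != b  is solvable: x -> a, y -> b
sol = solve([(("f", "x", "y"), ("f", ("a",), ("b",)))], [("x", ("b",))])
```

On this example the solved form is the substitution {x -> a, y -> b}, and the disequation x != b holds under it.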
Theorem Proving Modulo
Journal of Automated Reasoning
Cited by 75 (14 self)
Deduction modulo is a way to remove computational arguments from proofs by reasoning modulo a congruence on propositions. Such a technique, originating in automated theorem proving, is of much wider interest because it permits separating computations from deductions in a clean way. The first contribution of this paper is to define a sequent calculus modulo that gives a proof-theoretic account of the combination of computations and deductions. The congruence on propositions is handled via rewrite rules and equational axioms. Rewrite rules apply to terms and also directly to atomic propositions. The second contribution is to give a complete proof search method, called Extended Narrowing and Resolution (ENAR), for theorem proving modulo such congruences. The completeness of this method is proved with respect to provability in sequent calculus modulo. An important application is that higher-order logic can be presented as a theory modulo. Applying the Extended Narrowing and Resolution method to this presentation of higher-order logic subsumes full higher-order resolution.
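The separation of computation from deduction can be sketched in miniature: instead of proving an arithmetic equality by deductive steps, an "axiom rule" closes the goal when both sides normalize to the same term under rewrite rules. The fragment below assumes Peano numerals and two addition rules; the calculus in the paper is far more general:

```python
def norm(t):
    """Normalize terms built from 0, ("s", t) and ("+", a, b)
    under the rules  0 + y -> y  and  (s x) + y -> s (x + y)."""
    if isinstance(t, tuple) and t[0] == "s":
        return ("s", norm(t[1]))
    if isinstance(t, tuple) and t[0] == "+":
        a, b = norm(t[1]), norm(t[2])
        if a == 0:
            return b                          # 0 + y -> y
        return ("s", norm(("+", a[1], b)))    # (s x) + y -> s (x + y)
    return t

def axiom_modulo(lhs, rhs):
    """Close the goal lhs = rhs by computation alone: both sides must
    have the same normal form under the congruence."""
    return norm(lhs) == norm(rhs)

two = ("s", ("s", 0))
four = ("s", ("s", ("s", ("s", 0))))
ok = axiom_modulo(("+", two, two), four)   # 2 + 2 = 4, by rewriting only
```

No deductive step about addition is needed: the congruence absorbs the whole computation.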
Explaining Type Inference
Science of Computer Programming, 1995
Cited by 53 (0 self)
Type inference is the compile-time process of reconstructing missing type information in a program based on the usage of its variables. ML and Haskell are two languages where this aspect of compilation has enjoyed some popularity, allowing type information to be omitted while static type checking is still performed. Type inference may be expected to have some application in the prototyping and scripting languages which are becoming increasingly popular. A difficulty with type inference is the confusing and sometimes counterintuitive diagnostics produced by the type checker as a result of type errors. A modification of the Hindley-Milner type inference algorithm is presented, which allows the specific reasoning that led to a program variable having a particular type to be recorded for type explanation. This approach is close to the intuitive process used in practice for debugging type errors.
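The core idea of recording the reasoning behind each inferred type can be sketched as unification with provenance: each time a type variable is bound, the binding is tagged with the program location that forced it, so an error message can replay the chain. This is an illustrative sketch, not the paper's algorithm:

```python
class TypeVar:
    """A unification variable: ref is its binding, reasons explains it."""
    def __init__(self, name):
        self.name, self.ref, self.reasons = name, None, []

def prune(t):
    """Follow variable bindings to the representative type."""
    while isinstance(t, TypeVar) and t.ref is not None:
        t = t.ref
    return t

def unify(a, b, reason):
    """Unify two types, remembering *why* each binding was made."""
    a, b = prune(a), prune(b)
    if isinstance(a, TypeVar):
        a.ref = b
        a.reasons.append(reason)   # provenance: what forced this binding
    elif isinstance(b, TypeVar):
        unify(b, a, reason)
    elif a != b:
        # a type error can now cite the recorded reasoning chain
        raise TypeError(f"cannot unify {a} with {b}, because {reason}")

t = TypeVar("t")
unify(t, "int", "x is used as an argument of (+) at line 3")
```

A subsequent `unify(t, "bool", ...)` would fail with a message naming the earlier use of `x` at line 3, which is exactly the explanation a programmer would reconstruct by hand.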
Pure Pattern Type Systems
In POPL’03, 2003
Cited by 43 (20 self)
We introduce a new framework of algebraic pure type systems in which we consider rewrite rules as lambda terms with patterns and rewrite rule application as abstraction application with built-in matching facilities. This framework, that we call “Pure Pattern Type Systems”, is particularly well-suited for the foundations of programming (meta)languages and proof assistants since it provides in a fully unified setting higher-order capabilities and pattern matching ability together with powerful type systems. We prove some standard properties like confluence and subject reduction for the case of a syntactic theory and under a syntactical restriction over the shape of patterns. We also conjecture the strong normalization of typable terms. This work should be seen as a contribution to a formal connection between logics and rewriting, and a step towards new proof engines based on the Curry-Howard isomorphism.
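The central move, an abstraction whose bound "variable" is a whole pattern so that application performs matching rather than plain beta-substitution, can be sketched as follows. This is a toy first-order encoding with illustrative names, not the paper's typed syntax:

```python
def match(pattern, term, s=None):
    """Syntactic matching: return a substitution (dict) or None.
    Pattern variables are strings; other terms are tuples ("f", ...)."""
    s = dict(s or {})
    if isinstance(pattern, str):
        if pattern in s and s[pattern] != term:
            return None            # non-linear pattern: bindings must agree
        s[pattern] = term
        return s
    if isinstance(term, tuple) and len(pattern) == len(term) \
            and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            s = match(p, t, s)
            if s is None:
                return None
        return s
    return None

def apply_rule(pattern, body, term):
    """(pattern -> body) applied to term: match, then instantiate body.
    Returns None on matching failure."""
    s = match(pattern, term)
    if s is None:
        return None
    def inst(t):
        if isinstance(t, str):
            return s.get(t, t)
        return (t[0],) + tuple(inst(a) for a in t[1:])
    return inst(body)

# the rule  fst(pair(x, y)) -> x  applied to  pair(a, b)
result = apply_rule(("pair", "x", "y"), "x", ("pair", ("a",), ("b",)))
```

Ordinary lambda abstraction is recovered as the special case where the pattern is a single variable, in which case matching always succeeds.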
Polymorphism and Type Inference in Database Programming
Cited by 38 (10 self)
In order to find a static type system that adequately supports database languages, we need to express the most general type of a program that involves database operations. This can be achieved through an extension to the type system of ML that captures the polymorphic nature of field selection, together with a technique that generalizes relational operators to arbitrary data structures. The combination provides a statically typed language in which generalized relational databases may be cleanly represented as typed structures. As in ML, types are inferred, which relieves the programmer of making the type assertions that may be required in a complex database environment. These extensions may also be used to provide static polymorphic type checking in object-oriented languages and databases. A problem that arises with object-oriented databases is the apparent need for dynamic type checking when dealing with queries on heterogeneous collections of objects. An extension of the type system needed for generalized relational operations can also be used for manipulating collections of dynamically typed values in a statically typed language. A prototype language based on these ideas has been implemented. While it lacks a proper treatment of persistent data, it demonstrates that a wide variety of database structures can be cleanly represented in a polymorphic programming language.
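The "polymorphic nature of field selection" means a selection operator constrains a row only in the fields it actually mentions, so the same operator applies to any record type that has them. A dynamically typed sketch over dict-based records (the paper's contribution is doing this *statically*, via an extended ML type system):

```python
def select(rows, field, value):
    """Keep rows whose `field` equals `value`; rows may carry any
    extra fields -- only `field` is constrained."""
    return [r for r in rows if r.get(field) == value]

def project(rows, fields):
    """Generalized relational projection onto `fields`."""
    return [{f: r[f] for f in fields if f in r} for r in rows]

# a heterogeneous collection: records need not share all fields
people = [
    {"name": "Ada",  "dept": "math", "office": 12},
    {"name": "Alan", "dept": "cs"},
]
cs_names = project(select(people, "dept", "cs"), ["name"])
```

In the statically typed setting the abstract describes, `select` would receive a polymorphic type recording only that its argument rows have a `dept` field, which is what makes the operator reusable across database schemas.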
On Girard’s “Candidats de Réductibilité”
Logic and Computer Science, 1990
Cited by 33 (5 self)
We attempt to elucidate the conditions required on Girard’s candidates of reducibility (in French, “candidats de réductibilité”) in order to establish certain properties of various typed lambda calculi, such as strong normalization and the Church-Rosser property. We present two generalizations of the candidates of reducibility, an untyped version in the line of Tait and Mitchell, and a typed version which is an adaptation of Girard’s original method. As an application of this general result, we give two proofs of strong normalization for the second-order polymorphic lambda calculus under βη-reduction (and thus under β-reduction). We present two sets of conditions for the typed version of the candidates. The first set consists of conditions similar to those used by Stenlund (basically the typed version of Tait’s conditions), and the second set consists of Girard’s original conditions. We also compare these conditions, and prove that Girard’s conditions are stronger than Tait’s conditions. We give a new proof of the Church-Rosser theorem for both β-reduction and η-reduction, using the modified version of Girard’s method. We also compare various proofs that have appeared in the literature (see section 11). We conclude by sketching the extension of the above results to Girard’s higher-order polymorphic calculus Fω and, in appendix 1, to Fω with product types.
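For reference, Girard's conditions on a reducibility candidate R mentioned above are standardly stated as follows, where a neutral term is one that is not an abstraction (this is the textbook formulation; the paper compares it against Tait/Stenlund-style conditions):

```latex
\begin{itemize}
  \item[(CR1)] If $t \in R$, then $t$ is strongly normalizing.
  \item[(CR2)] If $t \in R$ and $t \to t'$, then $t' \in R$.
  \item[(CR3)] If $t$ is neutral and $t' \in R$ for every
               one-step reduct $t \to t'$, then $t \in R$.
\end{itemize}
```

(CR3) is what lets the method handle variables and applications uniformly: a variable has no reducts, so it belongs to every candidate vacuously.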
Combining Higher-Order and First-Order Computation Using ρ-calculus: Towards a Semantics of ELAN
In Frontiers of Combining Systems 2, 1999
Cited by 20 (9 self)
The ρ-calculus permits expressing, in a uniform and simple way, first-order rewriting, λ-calculus and non-deterministic computations, as well as their combination. In this paper, we present the main components of the ρ-calculus and we give a full first-order presentation of this rewriting calculus using an explicit substitution setting, called ρσ, that generalizes the λσ-calculus. The basic properties of the non-explicit and explicit substitution versions are presented. We then detail how to use the ρ-calculus to give an operational semantics to the rewrite rules of the ELAN language.
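One concrete way the ρ-calculus differs from plain functional application is that applying rules to a term may yield zero results (matching failure) or several (non-determinism), so results are naturally sets. A toy model of this structure (an illustrative encoding, not the ρ-calculus or ELAN's actual semantics):

```python
def apply_rules(rules, term):
    """rules: list of (guard, rewrite_fn) pairs.
    Return the list of ALL possible results -- empty on failure,
    several when more than one rule applies (non-determinism)."""
    out = []
    for guard, fn in rules:
        if guard(term):
            out.append(fn(term))
    return out

rules = [
    (lambda n: n % 2 == 0, lambda n: n // 2),   # even n -> n / 2
    (lambda n: n > 1,      lambda n: n - 1),    # n > 1  -> n - 1
]

both = apply_rules(rules, 4)    # both rules apply: two results
none = apply_rules(rules, 1)    # no rule applies: empty result set
```

Plain λ-calculus application corresponds to the degenerate case where exactly one rule always matches, which is why the ρ-calculus can uniformly host both rewriting and functional computation.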
Rewrite Strategies in the Rewriting Calculus
In WRS 2003, 2003
Cited by 18 (7 self)
This paper presents an overview of the use of the rewriting calculus to express rewrite strategies. We first motivate the use of rewrite strategies through examples in the ELAN language. We then show how this has been modeled in the initial version of the rewriting calculus and how the matching power of this framework facilitates the representation of powerful strategies.
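Rewrite strategies of the kind ELAN provides are typically built from a few combinators over partial rewriting steps. A minimal sketch, modeling a strategy as a function from terms to a result or None (combinator names follow common usage in the strategy literature, not necessarily ELAN's concrete syntax):

```python
def seq(s1, s2):
    """Apply s1, then s2 to its result; fail if either fails."""
    def s(t):
        r = s1(t)
        return None if r is None else s2(r)
    return s

def choice(s1, s2):
    """Left-biased choice: try s1, fall back to s2 on failure."""
    def s(t):
        r = s1(t)
        return r if r is not None else s2(t)
    return s

def repeat(s1):
    """Apply s1 as long as it succeeds; never fails."""
    def s(t):
        while True:
            r = s1(t)
            if r is None:
                return t
            t = r
    return s

# a single rewrite step on numbers: halve an even number, else fail
halve = lambda n: n // 2 if n % 2 == 0 else None

normal = repeat(halve)(12)           # 12 -> 6 -> 3, then halve fails
either = choice(halve, lambda n: n + 1)(3)   # halve fails, fallback fires
```

The point the abstract makes is that in the rewriting calculus such strategies need no separate strategy language: rules, and combinators over rules, are all ordinary terms.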
Anti-pattern Matching
In European Symposium on Programming – ESOP 2007, LNCS, 2007
Cited by 18 (6 self)
It is quite appealing to base the description of pattern-based searches on positive as well as negative conditions. We would like, for example, to specify that we search for white cars that are not station wagons. To this end, we define the notion of anti-patterns and their semantics, along with some of their properties. We then extend the classical notion of matching between patterns and ground terms to matching between anti-patterns and ground terms. We provide a rule-based algorithm that finds the solutions to such problems and prove its correctness and completeness. Anti-pattern matching is by nature different from disunification, and quite interestingly the anti-pattern matching problem is unitary. Therefore the concept is appropriate to ground a powerful extension to pattern-based programming languages, and we show how this is used to extend the expressiveness and usability of the Tom language.
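The white-cars-that-are-not-station-wagons example can be sketched directly: a term matches an anti-pattern "p1 and not p2" when it matches the positive part but not the negated part. This illustrative encoding works over record-like dicts, whereas the paper works over first-order terms:

```python
def matches(pattern, term):
    """pattern: {"is": {...required fields...},
                 "not": {...fields that must NOT all hold...}}"""
    pos = pattern.get("is", {})
    neg = pattern.get("not", {})
    if any(term.get(k) != v for k, v in pos.items()):
        return False                 # positive part fails to match
    if neg and all(term.get(k) == v for k, v in neg.items()):
        return False                 # the negated sub-pattern matched
    return True

# "white cars that are not station wagons"
white_not_wagon = {"is": {"color": "white"}, "not": {"body": "wagon"}}

sedan_ok = matches(white_not_wagon, {"color": "white", "body": "sedan"})
wagon_ok = matches(white_not_wagon, {"color": "white", "body": "wagon"})
```

Note that the anti-pattern here still yields at most one way of matching a ground term, a glimpse of the unitarity property the abstract claims, in contrast to general disunification problems.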
Collecting More Garbage
In LISP 94, 1994
Cited by 17 (0 self)
We present a method, adapted to polymorphically typed functional languages, to detect and collect more garbage than existing GCs. It can be applied to strict or lazy higher-order languages and to several garbage collection schemes. Our GC exploits the information on the utility of arguments provided by the polymorphic types of functions. It is able to detect garbage that is still referenced from the stack and may collect useless parts of otherwise useful data structures. We show how to partially collect shared data structures and to extend the type system to infer more precise information. We also show how this technique can plug several common forms of space leaks.
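The key observation can be illustrated with a polymorphic type: a function of type ('a * 'b) -> 'a can never inspect the 'b component of its argument, so a collector armed with that type may reclaim the component even though the pair itself is live. A toy model of a mark phase that consults such "utility" information, skipping fields the types prove unused (an illustrative model with invented names, not a real collector):

```python
def mark(obj, useless, live=None):
    """Mark reachable tuples, but skip any (id(parent), field_index)
    pair listed in `useless` -- fields the type information proves
    the program can never inspect."""
    live = live if live is not None else set()
    live.add(id(obj))
    if isinstance(obj, tuple):
        for i, child in enumerate(obj):
            if (id(obj), i) in useless:
                continue             # type says this field is dead
            if isinstance(child, tuple):
                mark(child, useless, live)
    return live

big = ("payload",)
pair = (("key",), big)    # suppose only fst(pair) is ever demanded

# utility info (here supplied by hand): field 1 of `pair` is unused
live = mark(pair, useless={(id(pair), 1)})
collectable = id(big) not in live    # big is reclaimable though reachable
```

This is the sense in which such a GC collects garbage "still referenced from the stack": reachability alone would keep `big`, while the type-derived utility information shows it can never be demanded.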