Results 1–10 of 37
Polymorphic Type, Region and Effect Inference
, 1991
Abstract

Cited by 121 (6 self)
We present a new static system that reconstructs the types, regions and effects of expressions in an implicitly typed functional language that supports imperative operations on reference values. Just as types structurally abstract collections of concrete values, regions represent sets of possibly aliased reference values and effects represent approximations of the imperative behavior on regions. We introduce a static semantics for inferring types, regions and effects and prove that it is consistent with respect to the dynamic semantics of the language. We present a reconstruction algorithm that computes the types and effects of expressions and assigns regions to reference values. We prove the correctness of the reconstruction algorithm with respect to the static semantics. Finally, we discuss potential applications of our system to automatic stack allocation and parallel code generation.
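The paper's system annotates each expression with a type, a region, and an effect. As a rough, hypothetical illustration of the effect half of that idea (this is not the paper's reconstruction algorithm, and the expression syntax below is invented for the sketch), one can walk a small imperative expression tree and collect the set of (operation, region) pairs it may perform:

```python
# Toy illustration of region/effect tracking: each `ref` allocation carries a
# region label, and the "effect" of an expression is the set of
# (operation, region) pairs it may perform. Hypothetical syntax:
#   ('ref', region)     -- allocate a reference in `region`
#   ('deref', e)        -- read through a reference
#   ('assign', e1, e2)  -- write through a reference
#   ('seq', e1, e2)     -- sequencing
#   ('const', v)        -- a pure value

def infer_effects(expr):
    """Return (region_or_None, effect_set) for the toy expression language."""
    tag = expr[0]
    if tag == 'const':
        return None, set()
    if tag == 'ref':
        region = expr[1]
        return region, {('init', region)}
    if tag == 'deref':
        region, eff = infer_effects(expr[1])
        return None, eff | {('read', region)}
    if tag == 'assign':
        region, eff1 = infer_effects(expr[1])
        _, eff2 = infer_effects(expr[2])
        return None, eff1 | eff2 | {('write', region)}
    if tag == 'seq':
        _, eff1 = infer_effects(expr[1])
        r, eff2 = infer_effects(expr[2])
        return r, eff1 | eff2
    raise ValueError(tag)

# (x := 0; !x) where x is a reference allocated in region r1
prog = ('seq', ('assign', ('ref', 'r1'), ('const', 0)),
               ('deref', ('ref', 'r1')))
print(sorted(infer_effects(prog)[1]))
# [('init', 'r1'), ('read', 'r1'), ('write', 'r1')]
```

An effect set like this is exactly the kind of approximation the abstract mentions for applications such as stack allocation (a region whose effects are confined to one scope can be stack-allocated) and parallel code generation (expressions with disjoint effect sets may run in parallel).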
Type Inference for Records in a Natural Extension of ML
 Theoretical Aspects of Object-Oriented Programming: Types, Semantics, and Language Design
, 1994
Abstract

Cited by 73 (7 self)
We describe an extension of ML with records where inheritance is given by ML generic polymorphism. All common operations on records but concatenation are supported, in particular the free extension of records. Other operations such as renaming of fields are added. The solution relies on an extension of ML, where the language of types is sorted and considered modulo equations, and on a record extension of types. The solution is simple and modular and the type inference algorithm is efficient in practice.
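The key device in row-based record typing is that an "open" record type ends in a row variable standing for the fields not yet mentioned, so free extension just consumes that variable. A minimal sketch of the representation (the encoding below is an assumed simplification, not the paper's sorted equational theory):

```python
# A record type is a pair (fields, row): `fields` maps labels to types and
# `row` is a row-variable name, or None for a closed record. Free extension
# adds a field to an open record and keeps it open with a fresh row variable.
import itertools

_fresh = itertools.count(1)

def extend(rec_type, label, typ):
    """Freely extend an open record type with a new field."""
    fields, row = rec_type
    if row is None:
        raise TypeError("cannot extend a closed record type")
    if label in fields:
        raise TypeError(f"field {label!r} already present")
    new_fields = dict(fields)
    new_fields[label] = typ
    return (new_fields, f"rho{next(_fresh)}")

open_point = ({'x': 'int', 'y': 'int'}, 'rho0')
point3d = extend(open_point, 'z', 'int')
print(point3d[0])   # {'x': 'int', 'y': 'int', 'z': 'int'}
```

Inheritance then falls out of ordinary ML generic polymorphism: a function typed over `({'x': 'int', ...}, rho)` accepts any record supplying at least an `x` field, with the row variable absorbing the rest.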
Programming in Equational Logic: Beyond Strong Sequentiality
, 1993
Abstract

Cited by 42 (0 self)
Orthogonal term rewriting systems (also known as regular systems) provide an elegant framework for programming in equational logic. O'Donnell showed that the parallel-outermost strategy, which replaces all outermost redexes in each step, is complete for such systems. Many of the reductions performed by this strategy can be wasteful in general. A lazy normalization algorithm that completely eliminated these wasteful reductions by reducing only "needed redexes" was later developed by Huet and Levy. However, this algorithm required the input programs to be restricted to the subclass of strongly sequential systems. This is because needed redexes do not exist for all orthogonal programs, and even when they do, they may not be computable. It is therefore quite natural to ask whether it is possible to devise a complete normalization algorithm for the entire class that minimizes (rather than eliminates) the wasteful reductions. In this paper we propose a solution to this problem using the concept of a necessary set of redexes. In such a set, at least one of the redexes must be reduced to normalize a term. We devise an algorithm to compute a necessary set for any term not in normal form and show that a strategy that repeatedly reduces all redexes in such a set is complete for orthogonal programs. We also show that our algorithm is "optimal" among all normalization algorithms that are based on left-hand sides alone. This means that our algorithm is lazy (like Huet and Levy's) on strongly sequential parts of a program and "relaxes laziness minimally" to handle the other parts, and thus does not sacrifice generality for the sake of efficiency.
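To make O'Donnell's baseline strategy concrete, here is a small sketch of parallel-outermost rewriting on an orthogonal system (Peano addition, an assumed toy example rather than anything from the paper). At each step every outermost redex is rewritten; the necessary-set strategy the abstract describes would instead pick a subset of redexes at least one of which must be reduced:

```python
# Terms are nested tuples; rule variables are strings beginning with '?'.
# The system is orthogonal: left-linear rules with no overlaps.
RULES = [
    (('add', '0', '?y'), '?y'),
    (('add', ('s', '?x'), '?y'), ('s', ('add', '?x', '?y'))),
]

def match(pattern, term, subst):
    """First-order matching; fills `subst` with variable bindings."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        subst[pattern] = term
        return True
    if isinstance(pattern, str) or isinstance(term, str):
        return pattern == term
    return len(pattern) == len(term) and all(
        match(p, t, subst) for p, t in zip(pattern, term))

def substitute(rhs, subst):
    if isinstance(rhs, str):
        return subst.get(rhs, rhs)
    return tuple(substitute(a, subst) for a in rhs)

def step_parallel_outermost(term):
    """Rewrite every outermost redex once; recurse only below non-redexes."""
    for lhs, rhs in RULES:
        subst = {}
        if match(lhs, term, subst):
            return substitute(rhs, subst)
    if isinstance(term, str):
        return term
    return tuple(step_parallel_outermost(a) for a in term)

def normalize(term):
    while True:
        nxt = step_parallel_outermost(term)
        if nxt == term:
            return term
        term = nxt

# add(s(0), s(0)) normalizes to s(s(0)), i.e. 1 + 1 = 2
print(normalize(('add', ('s', '0'), ('s', '0'))))
```

In this orthogonal-but-not-strongly-sequential setting the paper's contribution is precisely choosing which of these redexes are genuinely necessary, so that fewer than all outermost redexes need to be contracted.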
Typing Record Concatenation for Free
, 1992
Abstract

Cited by 37 (0 self)
We show that any functional language with record extension possesses record concatenation for free. We exhibit a translation from the latter into the former. We obtain a type system for a language with record concatenation by composing the translation with typechecking in a language with record extension. We apply this method to a version of ML with record extension and obtain an extension of ML with either asymmetric or symmetric concatenation. The latter extension is simple, flexible, and has a very efficient type inference algorithm in practice. Concatenation together with removal of fields needs one more construct than extension of records. It can be added to the version of ML with record extension. However, many typed languages with records cannot type such a construct. The method still applies to them, producing type systems for record concatenation without removal of fields. Object systems also benefit from the encoding, which shows that multiple inheritance does not actually require...
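The essence of the translation can be rendered in a few lines (a hypothetical Python sketch of the encoding idea, not the paper's typed ML translation): a record is represented by the function that extends any given base record with its own fields, and concatenation then needs no new construct at all, because it is just composition of those extension functions:

```python
def record(**fields):
    """Encode a record as 'the extension of any base record by my fields'."""
    def extend(base):
        merged = dict(base)
        merged.update(fields)   # asymmetric: the extending record's fields win
        return merged
    return extend

def concat(r1, r2):
    """Concatenation for free: compose the two extension functions."""
    return lambda base: r2(r1(base))

def to_dict(r):
    """Read a record back by extending the empty record."""
    return r({})

r = concat(record(x=1, y=2), record(y=3, z=4))
print(to_dict(r))   # {'x': 1, 'y': 3, 'z': 4}
```

Note the asymmetry: fields of the right-hand record override those of the left, matching the "asymmetric concatenation" variant mentioned in the abstract; the typing work in the paper is what makes this encoding well-typed in an ML-style system.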
Semantic Representations And Query Languages For Or-Sets
 In Proceedings of 12th ACM Symposium on Principles of Database Systems
, 1993
Abstract

Cited by 29 (18 self)
Or-sets were introduced by Imielinski, Naqvi and Vadaparty for dealing with limited forms of disjunctive information in database queries. Independently, Rounds used a similar notion for representing disjunctive and conjunctive information in the context of situation theory. In this paper we formulate a query language with adequate expressive power for or-sets. Using the notion of normalization of or-sets, queries at the "structural" and "conceptual" levels are distinguished. Losslessness of normalization is established for a large class of queries. We have obtained upper bounds for the cost of normalization. An approach related to that of Rounds is used to provide semantics for or-sets. We also treat or-sets in the context of partial information in databases.
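The normalization the abstract refers to can be illustrated with a small sketch (an assumed representation, not the paper's formalism): an or-set lists mutually exclusive possibilities, and normalizing a tuple whose components are or-sets lifts the disjunction to the top, yielding an or-set of ordinary tuples, which is the "conceptual level" view of the same data:

```python
from itertools import product

def normalize(components):
    """Turn a tuple of or-sets (lists of alternatives) into an
    or-set of concrete tuples."""
    return [tuple(choice) for choice in product(*components)]

# A record whose salary is either 50 or 60 and whose office is 5 or 6
# denotes one of four fully concrete records:
print(normalize([[50, 60], [5, 6]]))
# [(50, 5), (50, 6), (60, 5), (60, 6)]
```

The cost concern is visible even here: normalization multiplies out the alternatives, so its output can be exponentially larger than its input, which is why the paper's upper bounds on normalization cost matter.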
The complexity of type inference for higher-order typed lambda calculi
 In Proc. 18th ACM Symposium on the Principles of Programming Languages
, 1991
Abstract

Cited by 28 (11 self)
We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, ..., Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is nonelementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk. The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges. These results are the first non-trivial lower bounds on type inference for the Girard/Reynolds ...
Database Query Languages Embedded in the Typed Lambda Calculus
, 1993
Abstract

Cited by 25 (6 self)
We investigate the expressive power of the typed λ-calculus when expressing computations over finite structures, i.e., databases. We show that the simply typed λ-calculus can express various database query languages such as the relational algebra, fixpoint logic, and the complex object algebra. In our embeddings, inputs and outputs are λ-terms encoding databases, and a program expressing a query is a λ-term which typechecks when applied to an input and reduces to an output.
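The flavor of such embeddings can be sketched with Church-style encodings written as Python lambdas (an illustration of the general idea only; the paper's exact encodings and their simple types are more delicate). A relation is encoded as its own fold, and a query is a term that, applied to the encoded input, evaluates to the encoded output:

```python
def encode(rows):
    """Church-encode a list of rows as its right fold: given a `cons` and a
    `nil`, the encoded relation rebuilds itself from them."""
    def folded(cons, nil):
        acc = nil
        for row in reversed(rows):
            acc = cons(row, acc)
        return acc
    return folded

def select(pred, relation):
    """Relational selection as a term over the encoding: fold over the input,
    keeping only rows satisfying the predicate, and re-encode the result."""
    return encode(relation(
        lambda row, acc: ([row] if pred(row) else []) + acc, []))

def decode(relation):
    """Read an encoded relation back as an ordinary list of rows."""
    return relation(lambda row, acc: [row] + acc, [])

emp = encode([('ann', 50), ('bob', 60), ('cal', 50)])
q = select(lambda row: row[1] == 50, emp)
print(decode(q))   # [('ann', 50), ('cal', 50)]
```

In the paper's setting no host-language recursion is available: queries must be plain simply typed terms, and reduction alone carries out the computation, which is what makes the expressiveness results nontrivial.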
Adaptive Pattern Matching
, 1992
Abstract

Cited by 22 (5 self)
Pattern matching is an important operation used in many applications such as functional programming, rewriting, and rule-based expert systems. By preprocessing the patterns into a DFA-like automaton, we can rapidly select the matching pattern(s) in a single scan of the relevant portions of the input term. This automaton is typically based on left-to-right traversal of the patterns. By adapting the traversal order to suit the set of input patterns, it is possible to considerably reduce the space and matching time requirements of the automaton.
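A simplified sketch of the adaptive idea (the heuristic below is an assumption for illustration, not the paper's construction): patterns are tuples of symbols with '_' as a wildcard, and instead of inspecting positions strictly left to right, we next inspect the position at which the fewest remaining patterns have a wildcard, since that position discriminates between candidates fastest:

```python
def best_position(patterns, remaining_positions):
    """Pick the position with the fewest wildcards among live patterns."""
    return min(remaining_positions,
               key=lambda i: sum(p[i] == '_' for p in patterns))

def match_adaptive(subject, patterns):
    """Return the index of the first pattern matching `subject`,
    inspecting positions in an adaptively chosen order."""
    positions = list(range(len(subject)))
    candidates = list(enumerate(patterns))
    while positions and len(candidates) > 1:
        i = best_position([p for _, p in candidates], positions)
        positions.remove(i)
        # Keep only patterns consistent with the symbol at position i.
        candidates = [(k, p) for k, p in candidates
                      if p[i] in ('_', subject[i])]
    for k, p in candidates:
        if all(s == t or t == '_' for s, t in zip(subject, p)):
            return k
    return None

PATTERNS = [('a', '_', 'x'), ('b', '_', 'x'), ('_', 'c', 'y')]
print(match_adaptive(('b', 'd', 'x'), PATTERNS))   # 1
```

A real implementation would compile this position-selection into the automaton ahead of time rather than deciding per subject; the point here is only that the inspection order is chosen from the pattern set instead of being fixed left to right.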
An applicative module calculus
 In Theory and Practice of Software Development 97, Lecture Notes in Computer Science
, 1997
Abstract

Cited by 15 (1 self)
Abstract. The SML-like module systems are small typed languages of their own. As such, one would expect a proof of their soundness to follow from a proof of subject reduction. Unfortunately, the subject-reduction property and the preservation of type abstraction seem to be incompatible. As a consequence, in relevant module systems, the theoretical study of reductions is meaningless, and, for instance, the question of normalization of module expressions cannot even be considered. In this paper, we analyze this problem as a misunderstanding of the notion of module definition. We build a variant of the SML module system, inspired by recent work by Leroy, Harper, and Lillibridge, which enjoys the subject-reduction property. Type abstraction, achieved through an explicit declaration of the signature of a module at its definition, is preserved. This was the initial motivation. Besides, our system enjoys other type-theoretic properties: the calculus is strongly normalizing, there are no syntactic restrictions on module paths, it enjoys a purely applicative semantics, every module has a principal type, and type inference is decidable. Neither Leroy's system nor Harper and Lillibridge's system has all of them.