Results 1–10 of 11
An imperative object calculus
 Proc. TAPSOFT '95
, 1995
Cited by 26 (5 self)
We develop an imperative calculus of objects. Its main type constructor is the one for object types, which incorporate variance annotations and Self types. A subtyping relation between object types supports object subsumption. The type system for objects relies on unusual but beneficial assumptions about the possible subtypes of an object type. With the addition of polymorphism, the calculus can express classes and inheritance.
Demand-Driven Type Inference with Subgoal Pruning: Trading Precision for Scalability
, 2004
Cited by 12 (2 self)
After two decades of effort, type inference for dynamically typed languages scales to programs of a few tens of thousands of lines of code, but no further. For larger programs, this paper proposes using a kind of demand-driven analysis where the number of active goals is carefully restricted. To achieve this restriction, the algorithm occasionally prunes goals by giving them solutions that are trivially true and thus require no further subgoals to be solved; the previous subgoals of a newly pruned goal may often be discarded from consideration, reducing the total number of active goals. A specific algorithm, DDP, is described which uses this approach. An experiment on DDP shows that it infers precise types for roughly 30% to 45% of the variables in a program with hundreds of thousands of lines; the percentage varies with the choice of pruning threshold, a parameter of the algorithm. The time required varies from an average of one-tenth of a second per variable to an unknown maximum, again depending on the pruning threshold. These data suggest that 50 and 2000 are both good choices of pruning threshold, depending on whether speed or precision is more important.
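To make the pruning idea concrete, here is a minimal sketch, not the actual DDP implementation: the goal names, the subgoal table, and the trivially-true "any" answer are all illustrative assumptions, and the demand graph is assumed acyclic. Each goal demands its subgoals; once the worklist of active goals exceeds the threshold, the newest goal is pruned to the trivially-true answer and its subgoals are never explored.

```python
# Toy sketch of demand-driven analysis with subgoal pruning.
# All goal names and tables below are hypothetical illustrations.

TOP = "any"  # trivially-true solution requiring no further subgoals

# Hypothetical demand table: solving a goal requires these subgoals.
# Assumed acyclic, so the worklist loop terminates.
SUBGOALS = {
    "type(x)": ["type(y)", "type(z)"],
    "type(y)": ["type(w)"],
    "type(z)": [],
    "type(w)": [],
}

# Hypothetical precise answers, available once all subgoals are solved.
PRECISE = {g: "int" for g in SUBGOALS}

def solve(root, threshold):
    """Solve `root`, pruning to TOP whenever active goals exceed threshold."""
    solutions = {}
    active = [root]  # worklist of goals still awaiting a solution
    while active:
        if len(active) > threshold:
            # Prune the most recently added goal: give it the trivially
            # true solution, so its subgoals need never be explored.
            solutions[active.pop()] = TOP
            continue
        goal = active[-1]
        pending = [g for g in SUBGOALS[goal] if g not in solutions]
        if pending:
            active.extend(pending)  # demand unsolved subgoals first
        else:
            active.pop()
            # Precise only if no subgoal was itself pruned to TOP.
            if all(solutions[g] != TOP for g in SUBGOALS[goal]):
                solutions[goal] = PRECISE[goal]
            else:
                solutions[goal] = TOP
    return solutions
```

With a high threshold every goal is solved precisely; with a low one, pruning cascades and the root receives only the trivially-true answer, trading precision for a bounded worklist.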
Relaxing the value restriction
 In International Symposium on Functional and Logic Programming, Nara, LNCS 2998
, 2004
Cited by 12 (1 self)
Restricting polymorphism to values is now the standard way to obtain soundness in ML-like programming languages with imperative features. While this solution has undeniable advantages over previous approaches, it forbids polymorphism in many cases where it would be sound. We use a subtyping-based approach to recover part of this lost polymorphism, without changing the type algebra itself, and this has significant applications.
Certification of a type inference tool for ML: Damas-Milner within Coq
 Journal of Automated Reasoning
, 1999
Cited by 10 (1 self)
We develop a formal proof of the ML type inference algorithm within the Coq proof assistant. We are much concerned with the methodology and reusability of such a mechanization. This proof is also a necessary step toward the certification of a complete ML compiler in the future. In this paper we present the Coq formalization of the typing system and its inference algorithm. We formally establish the correctness and the completeness of the type inference algorithm with respect to the typing rules of the language. We describe and comment on the mechanized proofs.

1. Introduction. Our goal is to carry out a verified formal proof of the ML type inference algorithm within the Coq proof assistant. Though this algorithm was proved correct long ago, the proof had never been entirely mechanized until now. Simultaneously and independently of our work, D. Nazareth and T. Nipkow have carried out such a formal verification in the theorem prover Isabelle/HOL for simply-typed terms [11] and then ...
An Imperative Object Calculus: Basic Typing and Soundness
 DEPARTMENT OF COMPUTER SCIENCE, UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
, 1995
Cited by 8 (2 self)
We develop an imperative calculus of objects that is both tiny and expressive. Our calculus provides a minimal setting in which to study the operational semantics and the typing rules of object-oriented languages. We prove type soundness using a simple subject-reduction approach.
Proving ML type soundness within Coq
 In Proc. TPHOLs ’00
, 2000
Cited by 6 (0 self)
We verify within the Coq proof assistant that ML typing is sound with respect to the dynamic semantics. We prove this property in the framework of a big-step semantics and also in the framework of a reduction semantics. For that purpose, we use a syntax-directed version of the typing rules: we mechanically prove its equivalence with the initial type system provided by Damas and Milner. This work is complementary to the certification of the ML type inference algorithm done previously by the author and Valérie Ménissier-Morain.
A Typed Approach to Layered Programming Language Design
, 1994
Cited by 2 (0 self)
[Figure 1: The Layered Design of the Id Language — Application Layer, Library Layer, Loops and Procedures, Functional Language Layer.]
... imperative data structures (I-structures and M-structures [2, 3]) that cater to imperative styles of programming. Id is a layered language by design (see Figure 1). The primitive I-structure and M-structure datatypes are the basic memory synchronization mechanisms in the Id kernel language, and are used to represent all higher-level data structures such as arrays, lists, tuples and user-defined algebraic types. Special syntactic constructs, such as list and array comprehensions and pattern matching, are also compiled into primitive operations on kernel datatypes. Several Id libraries provide the desired functionality for these higher-level data structures. While this design helps to keep the language kernel very small and efficient, it becomes very important to clearly define and enforce the type and data abstractions between the layers so that polymorphic, ...
Reflections on the Design of a Specification Language
 Proc. Intl. Colloq. on Fundamental Approaches to Software Engineering. European Joint Conferences on Theory and Practice of Software (ETAPS'98), Lisbon. Springer LNCS 1382
, 1998
Cited by 2 (1 self)
We reflect on our experiences from work on the design and semantic underpinnings of Extended ML, a specification language which supports the specification and formal development of Standard ML programs. Our aim is to isolate problems and issues that are intrinsic to the general enterprise of designing a specification language for use with a given programming language. Consequently, the lessons learned go far beyond our original aim of designing a specification language for ML.
unknown title
, 2013
Open closure types, and an application to the typing of data flows (Types ouverts de fermetures, et une application au typage des flots de données)
Binary Reachability Analysis of Higher Order Functional Programs
A number of recent approaches for proving program termination rely on transition invariants – a termination argument that can be constructed incrementally using abstract interpretation. These approaches use binary reachability analysis to check if a candidate transition invariant holds for a given program. For imperative programs, an efficient implementation can be obtained by a reduction to reachability analysis, for which practical tools are available. In this paper, we show how a binary reachability analysis can be put to work for proving termination of higher-order functional programs.
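As a rough illustration of the check this abstract describes, here is a minimal sketch for a first-order toy program; the one-variable countdown loop and the candidate transition invariant x' < x are illustrative assumptions, not taken from the paper (whose setting is higher-order functional programs). Binary reachability collects every pair (s, s') where s' is reachable from a reachable state s in one or more steps; the candidate transition invariant must hold on all such pairs.

```python
# Toy sketch of a binary reachability check for a candidate transition
# invariant. The countdown program and the invariant below are
# hypothetical examples, not the paper's actual benchmarks.

def step(x):
    """One transition of a countdown loop: x -> x - 1 while x > 0."""
    return [x - 1] if x > 0 else []

def binary_reachability(inits):
    """All pairs (s, s') with s reachable from an initial state and
    s' reachable from s in one or more steps."""
    # First collect every state reachable from the initial states.
    states, frontier = set(inits), list(inits)
    while frontier:
        s = frontier.pop()
        for t in step(s):
            if t not in states:
                states.add(t)
                frontier.append(t)
    # Then, from each reachable s, collect its successors in >= 1 step.
    pairs = set()
    for s in states:
        frontier = list(step(s))
        seen = set(frontier)
        while frontier:
            t = frontier.pop()
            pairs.add((s, t))
            for u in step(t):
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
    return pairs

def holds(pairs, invariant):
    """Check the candidate transition invariant on every reachable pair."""
    return all(invariant(s, t) for s, t in pairs)
```

For initial state 5, the invariant `t < s` holds on every pair, supporting termination; a stronger candidate such as `t == s - 1` is refuted by multi-step pairs like (5, 3).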