Results 1-10 of 19
Constructive Reals in Coq: Axioms and Categoricity
Abstract
Cited by 15 (2 self)
We describe a construction of the real numbers carried out in the Coq proof assistant. The basis is a set of axioms for the constructive real numbers as used in the FTA (Fundamental Theorem of Algebra) project, carried out at Nijmegen University. The aim of this work is to show that these axioms can be satisfied, by constructing a model for them. Apart from that, we show the robustness of the set of axioms for constructive real numbers, by proving (in Coq) that any two models of it are isomorphic. Finally, we show that our axioms are equivalent to the set of axioms for constructive reals introduced by Bridges in [2]. The construction of the reals is done in the ‘classical way’: first the rational numbers are built and shown to be a (constructive) ordered field, and then the constructive real numbers are introduced as the usual Cauchy completion of the rational numbers.
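The Cauchy-completion construction described above can be sketched informally outside Coq. The following Python sketch (all names are illustrative and not taken from the FTA development) models a constructive real as a rational approximation function with an explicit modulus: `approx(n)` is within 2^-n of the real.

```python
from fractions import Fraction

# A constructive real is modelled by a function approx(n) returning a
# rational within 2**-n of the real: a Cauchy sequence with an explicit
# modulus. Class and function names are illustrative.

class CReal:
    def __init__(self, approx):
        self.approx = approx  # approx(n) -> Fraction, |approx(n) - x| <= 2**-n

    @staticmethod
    def of_rational(q):
        return CReal(lambda n: Fraction(q))

    def __add__(self, other):
        # Querying both summands at precision n+1 gives the sum at precision n.
        return CReal(lambda n: self.approx(n + 1) + other.approx(n + 1))

def sqrt2_approx(n):
    """Rational 2**-n approximation of sqrt(2) by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

x = CReal(sqrt2_approx) + CReal.of_rational(Fraction(1))  # 1 + sqrt(2)
print(float(x.approx(20)))  # close to 2.41421356
```

The explicit modulus is what makes the representation constructive: every operation must say how much precision it needs from its arguments to deliver its own.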
Verification of Java Bytecode using Analysis and Transformation of Logic Programs
In Ninth International Symposium on Practical Aspects of Declarative Languages, number 4354 in LNCS, 2007
Abstract
Cited by 13 (9 self)
Abstract. State-of-the-art analyzers in the Logic Programming (LP) paradigm are nowadays mature and sophisticated. They allow inferring a wide variety of global properties, including termination, bounds on resource consumption, etc. The aim of this work is to automatically transfer the power of such analysis tools for LP to the analysis and verification of Java bytecode (jvml). To achieve our goal, we rely on well-known techniques for meta-programming and program specialization. More precisely, we propose to partially evaluate a jvml interpreter implemented in LP together with (an LP representation of) a jvml program, and then analyze the residual program. Interestingly, at least for the examples we have studied, our approach produces very simple LP representations of the original jvml programs. This can be seen as a decompilation from jvml to high-level LP source. By reasoning about such residual programs, we can automatically prove in the CiaoPP system some non-trivial properties of jvml programs, such as termination and run-time error freeness, and infer bounds on their resource consumption. We are not aware of any other system able to verify such advanced properties of Java bytecode.
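The core idea, partially evaluating an interpreter with respect to a fixed program (the first Futamura projection), can be illustrated with a toy sketch. The following Python code (instruction set and names invented for illustration, far simpler than the jvml interpreter in the paper) symbolically executes a loop-free stack-machine program and emits a residual expression over the runtime inputs:

```python
# Sketch of interpretive decompilation: specializing a tiny stack-machine
# interpreter w.r.t. fixed, loop-free bytecode leaves a residual program
# over the runtime arguments. All names are invented for illustration.

def specialize(code, nargs):
    """Run the interpreter with the bytecode known but arguments x0..x{nargs-1}
    symbolic: the stack holds source expressions (strings) instead of values."""
    stack = []
    for op, *arg in code:
        if op == "push":                 # push a known constant
            stack.append(str(arg[0]))
        elif op == "load":               # push symbolic argument i
            stack.append(f"x{arg[0]}")
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} + {b})")
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} * {b})")
    return stack.pop()                   # residual program for the result

# bytecode computing x0 * x1 + 3
bc = [("load", 0), ("load", 1), ("mul",), ("push", 3), ("add",)]
residual = specialize(bc, 2)
print(residual)                          # ((x0 * x1) + 3)
f = eval(f"lambda x0, x1: {residual}")
print(f(4, 5))                           # 23
```

The residual text plays the role of the high-level LP representation: it no longer mentions the stack or the instruction dispatch, so subsequent analysis only sees the program's actual arithmetic.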
Sets in Types, Types in Sets
Proceedings of TACS'97, 1997
Abstract
Cited by 12 (1 self)
We present two mutual encodings: of the Calculus of Inductive Constructions in Zermelo-Fraenkel set theory, and the opposite way. More precisely, we construct two families of encodings, relating the number of universes in the type theory to the number of inaccessible cardinals in the set theory. The main result is that both hierarchies of logical formalisms interleave w.r.t. expressive power and thus are essentially equivalent. Both encodings are quite elementary: type theory is interpreted in set theory through a generalization of Coquand's simple proof-irrelevance interpretation. Set theory is encoded in type theory using a variant of Aczel's encoding; we have formally checked this last part using the Coq proof assistant.
1 Introduction
This work is an attempt towards a better understanding of the expressiveness of powerful type theories. We here investigate the Calculus of Inductive Constructions (CIC); this formalism is, with some variants, the one implemen...
Decompilation of Java Bytecode to Prolog by Partial Evaluation
2009
Abstract
Cited by 12 (7 self)
Reasoning about Java bytecode (JBC) is complicated due to its unstructured control flow, its use of three-address code combined with an operand stack, etc. Therefore, many static analyzers and model checkers for JBC first convert the code into a higher-level representation. In contrast to traditional decompilation, such a representation is often not Java source but rather some intermediate language which is a good input for the subsequent phases of the tool. Interpretive decompilation consists of partially evaluating an interpreter for the compiled language (in this case JBC), written in a high-level language, w.r.t. the code to be decompiled. There have been proofs of concept that interpretive decompilation is feasible, but important open issues remain when it comes to decompiling a real language such as JBC. This paper presents, to the best of our knowledge, the first modular scheme to enable interpretive decompilation of a realistic programming language to a high-level representation, namely of JBC to Prolog. We introduce two notions of optimality which together require that decompilation does not generate code more than once for each program point. We demonstrate the impact of our modular approach and optimality issues on a series of realistic benchmarks. Decompilation times and decompiled program sizes are linear in the size of the input bytecode program. This demonstrates empirically the scalability of modular decompilation of JBC by partial evaluation.
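The optimality criterion, never generating code more than once per program point, can be sketched as a memoized worklist traversal over the control-flow graph. This Python fragment is an invented illustration of the criterion, not the paper's actual decompilation scheme:

```python
# Sketch of block-level optimality: decompile each bytecode program point
# at most once, memoizing residual clauses so shared and looping control
# flow does not duplicate code. All names are invented.

def decompile(blocks, entry):
    residual = {}                      # program point -> residual clause
    worklist = [entry]
    while worklist:
        pc = worklist.pop()
        if pc in residual:             # optimality: never decompile twice
            continue
        body, successors = blocks[pc]
        residual[pc] = f"block_{pc} :- {body}."
        worklist.extend(successors)
    return residual

# a tiny control-flow graph with a loop 1 -> 2 -> 1 and an exit at 3
blocks = {0: ("init", [1]), 1: ("test", [2, 3]), 2: ("step", [1]), 3: ("ret", [])}
clauses = decompile(blocks, 0)
print(len(clauses))                    # 4: one clause per program point
```

Without the memoization check, the loop edge 2 -> 1 would make naive unfolding revisit block 1 indefinitely; emitting one named clause per program point is what keeps decompiled size linear in the bytecode size.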
Coherence and Transitivity in Coercive Subtyping
Information and Computation, 2001
Abstract
Cited by 10 (4 self)
Coercive subtyping is a general approach to subtyping, inheritance and abbreviation in dependent type theories. A vital requirement for coercive subtyping is that of coherence: computational uniqueness of coercions between any two types. In this paper, we develop techniques useful in proving coherence and the related results on admissibility of transitivity and substitution. In particular, we consider suitable subtyping rules for Π-types and Σ-types, and prove their coherence and the admissibility of the substitution and transitivity rules at the type level in the coercive subtyping framework.
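The coherence requirement can be illustrated extensionally: whenever several composite coercions exist between the same pair of types, they must all compute the same function. The following Python sketch (the coercion graph and type names are invented, and the paper's setting is proof-theoretic rather than testing-based) enumerates coercion paths and checks that they agree on sample values:

```python
# Coherence, checked extensionally: every composite coercion between the
# same two types must compute the same function. The coercion graph and
# type names are invented for illustration.

coercions = {                       # (source, target) -> coercion function
    ("int", "float"): float,
    ("float", "complex"): complex,
    ("int", "complex"): complex,    # direct coercion, must agree with the composite
}

def compose(fs):
    def run(x):
        for f in fs:
            x = f(x)
        return x
    return run

def all_paths(src, tgt, seen=()):
    """Enumerate composite coercions src -> tgt as lists of basic coercions."""
    if src == tgt:
        yield []
    for (a, b), f in coercions.items():
        if a == src and b not in seen:
            for rest in all_paths(b, tgt, seen + (src,)):
                yield [f] + rest

def coherent(src, tgt, samples):
    results = [tuple(compose(p)(s) for s in samples)
               for p in all_paths(src, tgt)]
    return all(r == results[0] for r in results)

print(coherent("int", "complex", [0, 1, 7]))  # True: both paths agree
```

Here the direct int-to-complex coercion agrees with the composite via float, so elaboration may pick either path without changing the meaning of a term, which is exactly what coherence guarantees.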
Dependent Record Types, Subtyping and Proof Reutilization
Abstract
Cited by 4 (1 self)
We present an example of formalization of systems of algebras using an extension of Martin-Löf's theory of types with record types and subtyping. This extension has been presented in [5]. In this paper we intend to illustrate all the features of the extended theory that we consider relevant for the task of formalizing algebraic constructions. We also provide the code of the formalization as accepted by a type checker that has been implemented.
1. Introduction
We shall use an extension of Martin-Löf's theory of logical types [14] with dependent record types and subtyping as the formal language in which constructions concerning systems of algebras are going to be represented. The original formulation of Martin-Löf's theory of types, from now on referred to as the logical framework, has been presented in [15, 7]. The types that this calculus embodies are the type Set (the type of inductively defined sets), dependent function types and, for each set A, the type of the elements of A...
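The role of record subtyping in proof reuse can be sketched structurally: a record type with more fields is a subtype of one with fewer, so a formalized monoid can be used wherever a semigroup is expected. This Python check is an invented illustration, not Martin-Löf's formal rules:

```python
# Structural record subtyping: R1 <= R2 if R1 has at least R2's fields, at
# subtypes (width + depth). The base-type lattice and field names are
# invented for illustration.

base_sub = {("nat", "int"), ("int", "real")}
BASE_TYPES = ("nat", "int", "real")

def subtype(t1, t2):
    if t1 == t2:
        return True
    if isinstance(t1, dict) and isinstance(t2, dict):
        # width + depth: every field required by t2 must appear in t1
        return all(k in t1 and subtype(t1[k], t2[k]) for k in t2)
    if isinstance(t1, dict) or isinstance(t2, dict):
        return False
    # reflexive-transitive closure of base_sub
    return (t1, t2) in base_sub or any(
        (t1, m) in base_sub and subtype(m, t2) for m in BASE_TYPES)

# a monoid signature extends a semigroup signature with a `unit` field,
# so monoid formalizations (and their proofs) can be reused for semigroups
Semigroup = {"carrier": "real", "op": "real"}
Monoid = {"carrier": "real", "op": "real", "unit": "real"}
print(subtype(Monoid, Semigroup))  # True
print(subtype(Semigroup, Monoid))  # False
```

This is the mechanism behind "proof reutilization": every theorem stated over the smaller signature applies unchanged to records of the larger one.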
Structured induction proofs in Isabelle/Isar
In Mathematical Knowledge Management (MKM 2006), LNAI, 2006
Abstract
Cited by 4 (1 self)
Isabelle/Isar is a generic framework for human-readable formal proof documents, based on higher-order natural deduction. The Isar proof language provides general principles that may be instantiated to particular object-logics and applications. We discuss specific Isar language elements that support complex induction patterns of practical importance. Despite the additional bookkeeping required for induction with local facts and parameters, definitions, simultaneous goals and multiple rules, the resulting Isar proof texts turn out well-structured and readable. Our techniques can be applied to non-standard variants of induction as well, such as coinduction and nominal induction. This demonstrates that Isar provides a viable platform for building domain-specific tools that support fully formal mathematical proof composition.
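As a minimal illustration of a structured induction proof in Isar (a standard textbook-style example, not taken from the paper):

```isabelle
lemma "0 + n = (n::nat)"
proof (induct n)
  case 0
  then show ?case by simp
next
  case (Suc n)
  then show ?case by simp
qed
```

Each `case` command locally fixes the parameters and assumptions of its branch; the language elements discussed in the paper generalize this bookkeeping to induction with local facts, simultaneous goals and multiple rules.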
More On Implicit Syntax
In Automated Reasoning, First International Joint Conference (IJCAR'01)
Abstract
Cited by 2 (0 self)
Proof assistants based on type theories, such as Coq and Lego, allow users to omit subterms on input that can be inferred automatically. While those mechanisms are well known, ad-hoc algorithms are used to suppress subterms on output. As a result, terms might be printed identically although they differ in hidden parts. Such ambiguous representations may confuse users. Additionally, terms might be rejected by the type checker because the printer has erased too much type information. This paper addresses these problems by proposing effective erasure methods that guarantee successful term reconstruction, similar to the ones developed for the compression of proof terms in Proof-Carrying Code environments. Experiences with the implementation in Typelab proved them both efficient and practical.
1 Implicit Syntax
Type theories are powerful formal systems that capture both the notion of computation and deduction. Particularly the expressive theories, such as the Calculus of Constructions (CC) [CH88], which is investigated in this paper, are used for the development of mathematical and algorithmic theories, since proofs and specifications are representable in a very direct way using one uniform language. There is a price to pay for this expressiveness: abstractions have to be decorated with annotations, and type applications have to be written explicitly, because type abstraction and type application are just special cases of λ-abstraction and application. For example, to form a list one has to provide the element type as an additional argument to instantiate the polymorphic constructors cons and nil, as in (cons ℕ 1 (nil ℕ)). Also, one has to annotate the abstraction of n in λn:ℕ. n+1 with its type ℕ, although this type is determined by the abstraction body. These excessive ...
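The erasure-with-guaranteed-reconstruction idea can be sketched on the cons/nil example from the text: the element type of cons is recoverable from its head element, so it can be suppressed on output and re-inferred on input. The following Python round-trip is an invented illustration, not Typelab's algorithm:

```python
# Erasure with guaranteed reconstruction: the element type of `cons` is
# recoverable from its head element, so the printer may drop it; `nil`'s
# type is not inferable, so it is kept. Term syntax is invented.

def infer(term):
    tag = term[0]
    if tag == "nat":                       # ("nat", k): a numeral
        return "nat"
    if tag == "nil":                       # ("nil", elem_type)
        return ("list", term[1])
    if tag == "cons":                      # ("cons", elem_type, head, tail)
        ty, hd, tl = term[1], term[2], term[3]
        assert infer(hd) == ty and infer(tl) == ("list", ty)
        return ("list", ty)

def erase(term):
    """Suppress the redundant type argument of cons on output."""
    if term[0] == "cons":
        return ("cons?", erase(term[2]), erase(term[3]))
    return term

def reconstruct(term):
    """Re-insert the erased type argument; succeeds because the head
    carries enough information."""
    if term[0] == "cons?":
        hd, tl = reconstruct(term[1]), reconstruct(term[2])
        return ("cons", infer(hd), hd, tl)
    return term

t = ("cons", "nat", ("nat", 1), ("nil", "nat"))   # cons nat 1 (nil nat)
assert reconstruct(erase(t)) == t
print(erase(t))  # ('cons?', ('nat', 1), ('nil', 'nat'))
```

The point of the paper's methods is to characterize, before printing, exactly which arguments are redundant in this sense, so that erased output is never rejected on re-parsing.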
Extracting a normalization algorithm in Isabelle/HOL
In Types for Proofs and Programs, International Workshop, TYPES 2004, Jouy-en-Josas, 2004
Abstract
Cited by 2 (1 self)
We present a formalization of a constructive proof of weak normalization for the simply-typed λ-calculus in the theorem prover Isabelle/HOL, and show how a program can be extracted from it. Unlike many other proofs of weak normalization based on Tait's strong computability predicates, which require a logic supporting strong eliminations and can give rise to dependent types in the extracted program, our formalization requires only relatively simple proof principles. Thus, the program obtained from this proof is typable in simply-typed higher-order logic as implemented in Isabelle/HOL, and a proof of its correctness can automatically be derived within the system.
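A program of the kind extracted from such a proof is, in essence, a normalization function. The following Python sketch (de Bruijn representation, invented names; not the Isabelle/HOL extraction) normalizes λ-terms by leftmost-outermost β-reduction, a strategy that terminates on simply-typed terms:

```python
# Normalization sketch for λ-terms in de Bruijn notation: repeatedly
# contract the leftmost-outermost beta redex. Representation and names
# are illustrative, not those of the extracted Isabelle/HOL program.

def shift(t, d, c=0):
    """Shift free variables >= c by d."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= c else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, c + 1))
    return ("app", shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, j=0):
    """Substitute s for variable j in t."""
    if t[0] == "var":
        return s if t[1] == j else ("var", t[1] - 1 if t[1] > j else t[1])
    if t[0] == "lam":
        return ("lam", subst(t[1], shift(s, 1), j + 1))
    return ("app", subst(t[1], s, j), subst(t[2], s, j))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":
            return subst(f[1], a)
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        return None if r is None else ("app", f, r)
    if t[0] == "lam":
        r = step(t[1])
        return None if r is None else ("lam", r)
    return None

def normalize(t):
    while (r := step(t)) is not None:
        t = r
    return t

I = ("lam", ("var", 0))                      # λx. x
K = ("lam", ("lam", ("var", 1)))             # λx. λy. x
print(normalize(("app", ("app", K, I), I)))  # ('lam', ('var', 0))
```

On untyped terms this loop may diverge; the content of the weak-normalization proof is precisely that, for well-typed terms, it always reaches a normal form.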
Mathematical Knowledge Management in HELM
Annals of Mathematics and Artificial Intelligence, Special Issue on Mathematical Knowledge Management, 2001
Abstract
Cited by 1 (0 self)
The paper describes the general philosophy and the main architectural and technological solutions of the HELM Project for the management of large repositories of mathematical knowledge. The leitmotif is the extensive use of XML technology, and the exploitation of information in the "Web way", that is, without central authority, with few basic rules, in a scalable, adaptable, and extensible manner.