Results 1 - 10 of 15
Formal certification of a compiler backend, or: programming a compiler with a proof assistant
 In Proc. 33rd ACM Symposium on Principles of Programming Languages (POPL ’06), 2006
Cited by 229 (15 self)
Abstract: This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.
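The semantic-preservation property at the heart of this work can be illustrated with a deliberately tiny sketch, in plain Python rather than Coq and on a toy language rather than Cminor: a compiler from arithmetic expressions to a stack machine, where correctness means target execution agrees with source evaluation on every program.

```python
# Toy illustration of semantic preservation (not CompCert itself):
# compiling arithmetic expressions to a stack machine, where the
# correctness property is that target execution matches source evaluation.

def eval_expr(e):
    """Source semantics: evaluate a nested-tuple expression."""
    if isinstance(e, int):
        return e
    op, l, r = e
    a, b = eval_expr(l), eval_expr(r)
    return a + b if op == "+" else a * b

def compile_expr(e):
    """Compile to postfix stack-machine code."""
    if isinstance(e, int):
        return [("push", e)]
    op, l, r = e
    return compile_expr(l) + compile_expr(r) + [(op, None)]

def run(code):
    """Target semantics: execute stack-machine code."""
    stack = []
    for op, arg in code:
        if op == "push":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "+" else a * b)
    return stack[0]

# Semantic preservation: run(compile_expr(e)) == eval_expr(e) for all e.
e = ("+", 2, ("*", 3, 4))
assert run(compile_expr(e)) == eval_expr(e) == 14
```

The certified compiler proves the final assertion once and for all, by induction over all expressions, instead of testing it on individual inputs.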
Proof-transforming compilation of programs with abrupt termination
 In Sixth International Workshop on Specification and Verification of Component-Based Systems (SAVCBS 2007), 2007
Cited by 18 (8 self)
Abstract: The execution of untrusted bytecode programs can produce undesired behavior. A proof on the bytecode programs can be generated to ensure safe execution. Automatic techniques to generate proofs, such as certifying compilation, can only be used for a restricted set of properties such as type safety. Interactive verification of bytecode is difficult due to its unstructured control flow. Our approach is to verify programs at the source level and then translate the proof to the bytecode level. This translation is non-trivial for programs with abrupt termination. We present proof-transforming compilation from Java to Java Bytecode. This paper formalizes
Using Program Checking to Ensure the Correctness of Compiler Implementations
 Journal of Universal Computer Science (J.UCS), 2003
Cited by 10 (2 self)
Abstract: We evaluate the use of program checking to ensure the correctness of compiler implementations. Our contributions in this paper are threefold. Firstly, we extend the classical notion of black-box program checking to program checking with certificates. Our checking approach with certificates relies on the observation that the correctness of solutions of NP-complete problems can be checked in polynomial time, whereas their computation itself is believed to be much harder. Our second contribution is the application of program checking with certificates to optimizing compiler backends, in particular code generators, thus answering the open question of how program checking for such compiler backends can be achieved. In particular, we state a checking algorithm for code generation based on bottom-up rewrite systems from static single assignment representations. We have implemented this algorithm in a checker for a code generator used in an industrial project. Our last contribution in this paper is an integrated view of all compiler passes, in particular a comparison between frontend and backend phases, with respect to the applicable methods of program checking.
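The checking-with-certificates idea rests on the NP observation cited above: verifying a proposed solution is cheap even when computing one is hard. A minimal Python sketch (the graph, coloring, and function names are illustrative, not from the paper):

```python
# Sketch of checking with certificates: verifying a proposed graph
# 3-coloring (an NP-complete problem) in time linear in the edges,
# even though *finding* such a coloring is believed to be much harder.

def check_coloring(edges, coloring, k=3):
    """Return True iff `coloring` is a valid k-coloring of the graph."""
    if any(c not in range(k) for c in coloring.values()):
        return False
    # Every edge must connect differently colored vertices.
    return all(coloring[u] != coloring[v] for u, v in edges)

edges = [(0, 1), (1, 2), (0, 2)]                       # a triangle
assert check_coloring(edges, {0: 0, 1: 1, 2: 2})       # valid certificate
assert not check_coloring(edges, {0: 0, 1: 1, 2: 1})   # vertices 1, 2 clash
```

In the paper's setting, the code generator plays the role of the hard solver and emits a certificate alongside its output; a small, trusted checker like the one above then validates each run.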
Verified Code Generation for Embedded Systems, 2002
Cited by 6 (1 self)
Abstract: Digital signal processors provide specialized SIMD (single instruction multiple data) operations designed to dramatically increase performance in embedded systems. While these operations are simple to understand, their unusual functions and their parallelism make it difficult for automatic code generation algorithms to use them effectively. In this paper, we present a new optimizing code generation method that can deploy these operations successfully while also verifying that the generated code is a correct translation of the input program.
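The correctness obligation described here, that vectorized code must agree with the scalar reference, can be sketched in Python with software-simulated 4-wide lanes (a stand-in for real DSP SIMD operations, not the paper's code generation algorithm):

```python
# Sketch of the SIMD code-generation correctness obligation: the
# vectorized computation (simulated with 4-wide lanes) must agree
# with the scalar reference program on every input.

def scalar_add(a, b):
    """Scalar reference: element-wise addition, one element at a time."""
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b, width=4):
    """Simulated SIMD version: process one `width`-wide vector per step."""
    out = []
    for i in range(0, len(a), width):
        lane_a, lane_b = a[i:i + width], b[i:i + width]  # load two vectors
        out.extend(x + y for x, y in zip(lane_a, lane_b))
    return out

a, b = list(range(8)), list(range(8, 16))
assert simd_add(a, b) == scalar_add(a, b)
```

A verifying code generator discharges this equivalence for the actual generated machine code rather than for a Python model.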
Verified Bytecode Verification and Type-Certifying Compilation
 Journal of Logic and Algebraic Programming, 2003
Cited by 6 (1 self)
Abstract: This article presents a type-certifying compiler for a subset of Java and proves the type correctness of the bytecode it generates in the proof assistant Isabelle. The proof is performed by defining a type compiler that emits a type certificate and by showing a correspondence between the bytecode and the certificate which entails well-typing. The basis for this work is an extensive formalization of the Java bytecode type system, which is first presented in an abstract, lattice-theoretic setting and then instantiated to Java types.
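A hypothetical miniature of the certificate idea, far simpler than the Isabelle formalization described above: the certificate records the claimed stack typing before each instruction of a two-opcode bytecode, and a checker verifies local consistency.

```python
# Hypothetical sketch of type-certificate checking for a tiny stack
# bytecode (two opcodes only; real JVM verification is far richer).

def step_type(instr, stack):
    """Abstract type-level effect of one instruction on the stack typing."""
    op = instr[0]
    if op == "iconst":
        return stack + ["int"]
    if op == "iadd":
        if stack[-2:] != ["int", "int"]:
            raise TypeError("iadd needs two ints on the stack")
        return stack[:-2] + ["int"]
    raise ValueError(f"unknown opcode {op}")

def check_certificate(code, cert):
    """cert[i] is the claimed stack typing before instruction i.

    Checking is local: each claimed typing must follow from the previous
    one, so a valid certificate entails well-typing of the whole code.
    """
    for i, instr in enumerate(code):
        after = step_type(instr, cert[i])
        expected = cert[i + 1] if i + 1 < len(code) else after
        if after != expected:
            return False
    return True

code = [("iconst", 1), ("iconst", 2), ("iadd",)]
cert = [[], ["int"], ["int", "int"]]
assert check_certificate(code, cert)
```

The paper's contribution is proving, in Isabelle, that the compiler always emits certificates that such a checker accepts.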
Structuring optimizing transformations and proving them sound
 In Proceedings of COCV ’06, 2006
Cited by 6 (3 self)
Abstract: A compiler optimization is sound if the optimized program it produces is semantically equivalent to the input program. The proofs of semantic equivalence are usually tedious. To reduce the effort required, we identify a set of common transformation primitives that can be composed sequentially to obtain specifications of optimizing transformations. We also identify the conditions under which the transformation primitives preserve semantics and prove their sufficiency. Consequently, proving the soundness of an optimization reduces to showing that the soundness conditions of the underlying transformation primitives are satisfied. The program analysis required for optimization is defined over the input program, whereas the soundness conditions of a transformation primitive need to be shown on the version of the program to which it is applied. We express both in a temporal logic. We also develop a logic called temporal transformation logic to correlate temporal properties over a program (seen as a Kripke structure) and its transformation. An interesting possibility created by this approach is a novel scheme for validating optimizer implementations. An optimizer can be instrumented to generate a trace of its transformations in terms of the transformation primitives. Conformance of the trace with the optimizer can be checked through simulation. If the soundness conditions of the underlying primitives are satisfied by the trace, then it preserves semantics.
An Automatic Verification Technique for Loop and Data Reuse Transformations based on Geometric Modeling of Programs, 2003
Cited by 5 (4 self)
Abstract: Optimizing programs by applying source-to-source transformations is a prevalent practice among programmers, particularly so in methodical embedded systems design frameworks, where the initial program is subjected to a series of transformations to optimize computation and communication. In the context of parallelization and custom memory design, such transformations are applied to the loop structures and index expressions of array variables in the program, more often manually than with a tool, leading to the non-trivial problem of checking their correctness. Applied transformations are semantics-preserving if the transformed program is functionally equivalent to the initial program from the input-output point of view. In this work we present an automatic technique based on geometric program modeling to formally check the functional equivalence of the initial and transformed programs under loop and data reuse transformations. The verification is transformation-oblivious, needing no information about either the particular transformations that have been applied or the order in which they were applied. Our technique also provides very useful diagnostics to locate detected errors.
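The input-output notion of functional equivalence used above can be shown on a single concrete case in Python. Note this only illustrates the property being checked (here, for a loop interchange), on one input; the paper's geometric modeling establishes equivalence for all inputs, without knowing which transformation was applied.

```python
# Illustration of input-output equivalence under a loop transformation:
# the original and loop-interchanged programs must fill the output
# array identically, whatever the iteration order.

def original(a):
    out = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            out[i][j] = a[i] * (j + 1)
    return out

def interchanged(a):
    out = [[0] * 3 for _ in range(3)]
    for j in range(3):          # loops interchanged: j now outermost
        for i in range(3):
            out[i][j] = a[i] * (j + 1)
    return out

a = [1, 2, 3]
assert original(a) == interchanged(a)
```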
A Mechanically Verified Compiling Specification for a Realistic Compiler, 2002
Cited by 5 (0 self)
Abstract: We report on a large formal verification effort in mechanically proving correct a compiling specification for a realistic bootstrap compiler from ComLisp (a subset of ANSI Common Lisp sufficiently expressive to serve as a compiler implementation language) to binary Transputer code using the PVS system. The compilation is carried out in five steps through a series of intermediate languages. In the first phase, ComLisp is translated into a stack intermediate language (SIL), where parameter passing is implemented by a stack technique. Expressions are transformed from prefix notation into postfix notation according to the stack principle. SIL is then compiled into Cint, where the ComLisp data structures (s-expressions) and operators are implemented in linear integer memory using a runtime stack and a heap. These two steps are machine-independent. In the compiler's backend, first the control structures (loops, conditionals) of the intermediate language Cint are implemented by linear assembler code with relative jumps, then the infinite memory model of Cint is realized on the finite Transputer memory, and the basic Cint statements for accessing the stack and heap are implemented by sequences of assembler instructions. The fourth phase consists of the implementation of
An ASM Semantics for SSA Intermediate Representations
 In Proc. 11th Int’l Workshop on Abstract State Machines, 2004
Cited by 5 (1 self)
Abstract: Static single assignment (SSA) form is the intermediate representation of choice in modern optimizing compilers, yet no formal semantics has been stated for it. To prove such compilers correct, a formal semantics of SSA representations is necessary. In this paper, we show that abstract state machines (ASMs) are able to capture the imperative as well as the data-flow-driven and therefore nondeterministic aspects of SSA representations in a simple and elegant way. Furthermore, we demonstrate that correctness of code generation can be verified based on this ASM semantics by proving the correctness of a simple code generation algorithm.
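As a reminder of what SSA form looks like, here is a minimal Python sketch of SSA renaming for straight-line code; it omits the phi functions and the data-flow nondeterminism that the ASM semantics is designed to capture.

```python
# Minimal sketch of static single assignment (SSA) renaming for
# straight-line code: each assignment introduces a fresh version of
# its variable, so every name is assigned exactly once.

def to_ssa(stmts):
    """stmts is a list of (target, expr) pairs; expr is a token list."""
    version = {}

    def use(v):
        # Rename a variable use to its current version; leave tokens
        # that were never assigned (operators, inputs) unchanged.
        return f"{v}{version[v]}" if v in version else v

    out = []
    for target, expr in stmts:
        renamed = [use(t) if isinstance(t, str) else t for t in expr]
        version[target] = version.get(target, 0) + 1  # fresh version
        out.append((f"{target}{version[target]}", renamed))
    return out

# x = 1; x = x + 2   becomes   x1 = 1; x2 = x1 + 2
assert to_ssa([("x", [1]), ("x", ["x", "+", 2])]) == \
       [("x1", [1]), ("x2", ["x1", "+", 2])]
```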
ASMs versus Natural Semantics: A Comparison with New Insights
 In Abstract State Machines - Advances in Theory and Applications, Proc. 10th Int’l Workshop, ASM 2003, 2003
Cited by 1 (1 self)
Abstract: We compare three specification frameworks for the operational semantics of programming languages, abstract state machines (ASMs) and the two incarnations of natural semantics, big-step and small-step semantics, with respect to two criteria: the range of imperative programming languages to which they are applicable, and the way the program is used in the specifications and treated during the execution they define. To reveal the fundamental differences between these three mechanisms, we investigate whether there are automatic transformations between them. As a side effect, this leads to new insights concerning the classification of big-step and small-step semantics.
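The big-step/small-step distinction can be made concrete on a toy addition language in Python: big-step evaluates a term to a value in one judgment, while small-step rewrites one redex at a time, and the two must agree on every term.

```python
# Big-step vs. small-step semantics for a toy language of nested
# additions: a term is either an int (a value) or a pair (l, r)
# denoting l + r.

def big_step(t):
    """Big-step: relate a term directly to its final value."""
    if isinstance(t, int):
        return t
    l, r = t
    return big_step(l) + big_step(r)

def small_step(t):
    """One reduction step (leftmost redex first); t must not be a value."""
    l, r = t
    if isinstance(l, int) and isinstance(r, int):
        return l + r            # the redex itself: reduce the addition
    if not isinstance(l, int):
        return (small_step(l), r)
    return (l, small_step(r))

def run_small(t):
    """Iterate small steps to a value; the term shrinks at each step."""
    while not isinstance(t, int):
        t = small_step(t)
    return t

t = ((1, 2), (3, 4))
assert big_step(t) == run_small(t) == 10
```

The small-step version makes intermediate program states explicit, which is exactly the handle the comparison above exploits when relating the two styles to ASM runs.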