Results 1–10 of 18
On the Construction of Correct Compiler Back-Ends: An ASM Approach
Journal of Universal Computer Science, 1997
Abstract

Cited by 32 (5 self)
Existing works on the construction of correct compilers have at least one of the following drawbacks: (i) correct compilers do not compile into machine code of existing processors; instead they compile into programs of an abstract machine which ignores limitations and properties of real-life processors. (ii) The code generated by correct compilers is orders of magnitude slower than the code generated by unverified compilers. (iii) The considered source language is much less complex than real-life programming languages. This paper focuses on the construction of correct compiler back-ends which generate machine code for real-life processors from realistic intermediate languages. Our main results are the following: (i) We present a proof approach based on abstract state machines for bottom-up rewriting system specifications (BURS) for back-end generators. A significant part of this proof can be parametrized with the intermediate and machine language. (ii) The performance of the code con...
Code generation based on formal BURS theory and heuristic search
Acta Informatica, 1997
Abstract

Cited by 17 (2 self)
BURS theory provides a powerful mechanism to efficiently generate pattern matches in a given expression tree. BURS, which stands for bottom-up rewrite system, is based on term rewrite systems, to which costs are added. We formalise the underlying theory, and derive an algorithm that computes all pattern matches. This algorithm terminates if the term rewrite system is finite. We couple this algorithm with the well-known search algorithm A*, which carries out pattern selection. The search algorithm is directed by a cost heuristic that estimates the minimum cost of code that has yet to be generated. The advantage of using a search algorithm is that we need to compute only those costs that may be part of an optimal rewrite sequence (and not the costs of all possible rewrite sequences, as in dynamic programming). A system that implements the algorithms presented in this work has been built.
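The coupling of cost-annotated rewriting with A* can be sketched as follows. The term encoding, rules, and costs below are hypothetical illustrations, not the paper's actual BURS specification, and the heuristic is the trivial admissible h = 0 (degenerating to Dijkstra's algorithm), where the paper directs the search with a genuine estimate of the remaining code cost:

```python
import heapq
from itertools import count

# Hypothetical instruction-selection rules: (pattern, replacement, cost).
# 'R' stands for "value available in a register".
RULES = [
    (('+', 'R', 'R'), 'R', 1),         # add r1, r2
    (('+', 'R', ('const',)), 'R', 1),  # add-immediate
    (('const',), 'R', 1),              # load-immediate
    (('load', 'R'), 'R', 2),           # memory load
]

def rewrites(term):
    """Yield (new_term, cost) for every single rule application in term."""
    for pat, rep, cost in RULES:
        if term == pat:
            yield rep, cost
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            for new_sub, cost in rewrites(sub):
                yield term[:i] + (new_sub,) + term[i + 1:], cost

def select(tree, goal='R'):
    """Best-first search for the cheapest rewrite sequence tree ->* goal.
    Only costs on potentially optimal rewrite sequences are computed,
    instead of tabulating all sequences as in dynamic programming."""
    tie = count()  # tie-breaker so heapq never compares terms
    frontier = [(0, next(tie), tree)]
    best = {tree: 0}
    while frontier:
        g, _, t = heapq.heappop(frontier)
        if t == goal:
            return g
        for nt, c in rewrites(t):
            if g + c < best.get(nt, float('inf')):
                best[nt] = g + c
                heapq.heappush(frontier, (g + c, next(tie), nt))
    return None  # tree cannot be rewritten to the goal

expr = ('+', ('load', ('const',)), ('const',))
print(select(expr))  # cheapest covering of the tree: 4
```

The cheapest sequence loads the inner constant, performs the memory load, and then covers the addition with the add-immediate rule, for a total cost of 1 + 2 + 1 = 4.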
Graph Rewrite Systems for Program Optimization
, 2000
Abstract

Cited by 13 (1 self)
Graph rewrite systems can be used to specify and generate program optimizations. For termination of the systems...
Using Program Checking to Ensure the Correctness of Compiler Implementations
Journal of Universal Computer Science (J.UCS), 2003
Abstract

Cited by 12 (3 self)
We evaluate the use of program checking to ensure the correctness of compiler implementations. Our contributions in this paper are threefold. Firstly, we extend the classical notion of black-box program checking to program checking with certificates. Our checking approach with certificates relies on the observation that the correctness of solutions of NP-complete problems can be checked in polynomial time, whereas their computation itself is believed to be much harder. Our second contribution is the application of program checking with certificates to optimizing compiler back-ends, in particular code generators, thus answering the open question of how program checking for such compiler back-ends can be achieved. In particular, we state a checking algorithm for code generation based on bottom-up rewrite systems from static single assignment representations. We have implemented this algorithm in a checker for a code generator used in an industrial project. Our last contribution in this paper is an integrated view on all compiler passes, in particular a comparison between front-end and back-end phases, with respect to the applicable methods of program checking.
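The underlying observation can be illustrated on a classic NP-complete problem; graph k-coloring is used here purely as an illustration, not as the paper's code-generation checker:

```python
def check_coloring(edges, coloring, k):
    """Certificate check for graph k-coloring: the coloring itself is the
    certificate, and verifying it is a linear scan over the edges, even
    though *finding* a valid k-coloring is NP-complete."""
    if any(c not in range(k) for c in coloring.values()):
        return False  # a vertex uses a color outside 0..k-1
    return all(coloring[u] != coloring[v] for u, v in edges)

edges = [(0, 1), (1, 2), (0, 2)]                      # a triangle
print(check_coloring(edges, {0: 0, 1: 1, 2: 2}, 3))   # True
print(check_coloring(edges, {0: 0, 1: 1, 2: 0}, 3))   # False: 0 and 2 clash
```

In the paper's setting the certificate is emitted by the (untrusted) code generator, and the checker only has to validate it, which is far simpler than verifying the generator itself.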
Global Code Selection for Directed Acyclic Graphs
, 1994
Abstract

Cited by 11 (2 self)
We describe a novel technique for code selection based on data-flow graphs, which arise naturally in the domain of digital signal processing. Code selection is the optimized mapping of abstract operations to partial machine instructions. The presented method performs an important task within the retargetable microcode generator CBC, which was designed to cope with the requirements arising in the context of custom digital signal processor (DSP) programming. The algorithm exploits a graph representation in which control flow is modeled by scopes. In the domain of medium-throughput digital signal processing, microprogrammable processor cores are frequently chosen for system realization. By adding dedicated hardware (accelerator paths), these cores are tailored to the needs of new applications. Optimized processor modules can be reused, which is a major benefit compared to high-level synthesis [28], where a completely new design is developed for each application. ...
ASM-Based Mechanized Verification of Compiler Back-Ends
Abstract

Cited by 11 (2 self)
We describe an approach to mechanically prove the correctness of BURS specifications and show how such a tool can be connected with BURS-based back-end generators [9]. The proofs are based on the operational semantics of both source and target system languages, specified by means of Abstract State Machines [14]. In [27] we decomposed the correctness condition based on these operational semantics into local correctness conditions for each BURS rule and showed that these local correctness conditions can be proven independently. The specification and verification system PVS is used to mechanically verify BURS rules based on formal representations of the languages involved. In particular, we have defined PVS proof strategies which enable an automatic verification of the rules. Using PVS, several erroneous rules have been found; moreover, failed proof attempts enabled us to correct them.
Verified Code Generation for Embedded Systems
, 2002
Abstract

Cited by 7 (1 self)
Digital signal processors provide specialized SIMD (single instruction multiple data) operations designed to dramatically increase performance in embedded systems. While these operations are simple to understand, their unusual functions and their parallelism make it difficult for automatic code generation algorithms to use them effectively. In this paper, we present a new optimizing code generation method that can deploy these operations successfully while also verifying that the generated code is a correct translation of the input program.
Verifying Compilers and ASMs, or: ASMs for uniform description of multi-step transformations
, 2000
Abstract

Cited by 3 (0 self)
A verifying compiler ensures that the compiled code is always correct, but the compiler may also terminate with an error message and then fail to generate code. We argue that, with respect to compiler correctness, this is the best possible result which can be achieved in practice. Such a compiler may even include unverified code, provided the results of such code can be proven correct independently of how they are generated. We then show how abstract state machines (ASMs) can be used to uniformly describe the dynamic semantics of the programs being compiled across the various intermediate transformation steps occurring within a compiler. Besides being a convenient tool for describing dynamic semantics, the fact that we do not have to switch between different descriptional methods is found to be extremely useful.
Solving Proportional Analogies by E–Generalization
Abstract

Cited by 1 (0 self)
We present an approach for solving proportional analogies of the form A : B :: C : D, where a plausible outcome for D is computed. The core of the approach is E–Generalization. The generalization method is based on the extraction of the greatest common structure of the terms A, B and C, and yields a mapping to compute every possible value for D with respect to some equational theory. This approach to analogical reasoning is formally sound and powerful, and at the same time models crucial aspects of human reasoning, namely the guidance of mapping by shared roles and the use of re-representations based on a background theory. The focus of the paper is on the presentation of the approach. It is illustrated by an application for the letter-string domain.
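A much-simplified sketch of the generalization step, using plain syntactic anti-unification (least general generalization) instead of the paper's E–Generalization modulo an equational theory; the list-style term encoding and variable naming are illustrative only:

```python
from itertools import count

def lgg(s, t, subst, fresh):
    """Syntactic anti-unification of two terms: keep the greatest common
    structure, and replace each differing pair of subterms by a variable
    (the same pair always maps to the same variable)."""
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # same head symbol and arity: generalize argument-wise
        return (s[0],) + tuple(lgg(a, b, subst, fresh)
                               for a, b in zip(s[1:], t[1:]))
    if (s, t) not in subst:
        subst[(s, t)] = f'x{next(fresh)}'
    return subst[(s, t)]

# "abc" vs "abd" encoded as cons-lists: they share all structure except
# the third letter, which becomes a variable in the generalization.
a = ('cons', 'a', ('cons', 'b', ('cons', 'c', 'nil')))
b = ('cons', 'a', ('cons', 'b', ('cons', 'd', 'nil')))
print(lgg(a, b, {}, count()))
# -> ('cons', 'a', ('cons', 'b', ('cons', 'x0', 'nil')))
```

E–Generalization additionally considers all terms equal modulo a background equational theory, which is what enables the re-representations the abstract mentions; that machinery is omitted here.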
Weight Computation of Regular Tree Languages
, 2004
Abstract

Cited by 1 (1 self)
We present a general framework to define an application-dependent weight measure on terms that subsumes e.g. total simplification orderings, and an O(n log n) algorithm for the simultaneous computation of the minimal weight of a term in the language of each nonterminal of a regular tree grammar, based on Barzdins' liquid-flow technique.
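The minimal-weight computation can be sketched as a Knuth-style generalization of Dijkstra's algorithm: repeatedly settle the nonterminal whose cheapest derivable term has minimal weight. The grammar and weight measure below are hypothetical, and the sketch omits the bookkeeping needed to reach the paper's O(n log n) bound:

```python
import heapq

# Hypothetical regular tree grammar: productions N -> f(N1, ..., Nk),
# each with a symbol weight; the weight of a term is the sum of its
# symbol weights (a simple instance of an additive weight measure).
PRODS = [
    ('E', '+',   ('E', 'T'), 1),
    ('E', 'id',  (),         2),
    ('T', '*',   ('T', 'F'), 1),
    ('T', 'id',  (),         2),
    ('F', 'num', (),         1),
]

def min_weights(prods):
    """Minimal weight of a term in the language of each nonterminal."""
    best = {}   # settled nonterminals -> minimal weight
    heap = []
    def relax():
        # push every production whose children are all settled
        # (duplicates are pushed and skipped on pop; a production
        # worklist would avoid this but obscures the idea)
        for nt, sym, kids, w in prods:
            if nt not in best and all(k in best for k in kids):
                heapq.heappush(heap, (w + sum(best[k] for k in kids), nt))
    relax()
    while heap:
        w, nt = heapq.heappop(heap)
        if nt in best:
            continue        # stale duplicate entry
        best[nt] = w        # minimal, by the greedy argument of Dijkstra
        relax()
    return best

print(min_weights(PRODS))   # F: cheapest is num (1); E and T: id (2)
```

Settling nonterminals in order of increasing weight is sound because an additive weight measure is monotone: a derivation through an unsettled, heavier nonterminal can never undercut the settled minimum.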