Results 1–10 of 11
Incremental context-dependent analysis for language-based editors
ACM Transactions on Programming Languages and Systems, 1983
Abstract

Cited by 81 (10 self)
Knowledge of a programming language's grammar allows language-based editors to enforce syntactic correctness at all times during development by restricting editing operations to legitimate modifications of the program's context-free derivation tree; however, not all language constraints can be enforced in this way because not all features can be described by the context-free formalism. Attribute grammars permit context-dependent language features to be expressed in a modular, declarative fashion and thus are a good basis for specifying language-based editors. Such editors represent programs as attributed trees, which are modified by operations such as subtree pruning and grafting. Incremental analysis is performed by updating attribute values after every modification. This paper discusses how updating can be carried out and presents several algorithms for the task, including one that is asymptotically optimal in time.
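The incremental updating scheme this abstract describes can be illustrated with a toy sketch (an illustrative simplification, not one of the paper's algorithms): each tree node carries a single synthesized attribute, and after a subtree is grafted in, only the attributes on the path from the edit site to the root are re-evaluated.

```python
# Toy sketch of incremental attribute updating (not the paper's algorithm):
# nodes carry one synthesized attribute ("size"); after grafting a new
# subtree, only the ancestors of the edit site are re-evaluated.

class Node:
    def __init__(self, children=()):
        self.parent = None
        self.children = list(children)
        for c in self.children:
            c.parent = self
        # Synthesized attribute: depends only on the children's attributes.
        self.size = 1 + sum(c.size for c in self.children)

def graft(old, new):
    """Replace subtree `old` with `new`, then update attributes incrementally."""
    parent = old.parent
    i = parent.children.index(old)
    parent.children[i] = new
    new.parent = parent
    # Re-evaluate only the path from the edit site to the root,
    # not the whole tree.
    node = parent
    while node is not None:
        node.size = 1 + sum(c.size for c in node.children)
        node = node.parent

leaf = Node()
mid = Node([leaf, Node()])
root = Node([mid, Node()])        # 5 nodes in total
graft(leaf, Node([Node(), Node()]))  # graft a 3-node subtree where a leaf was
print(root.size)                  # → 7
```

For inherited attributes or attribute dependencies that cross subtree boundaries, a real editor needs the dependency-tracking machinery the paper develops; the path-to-root walk above suffices only for purely synthesized attributes.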
Program improvement by source-to-source transformation
J. ACM, 1977
Abstract

Cited by 51 (0 self)
The use of source-to-source program transformations has proved valuable in improving program performance. The concept of program manipulation is elucidated by describing its role in both conventional optimization and high-level modification of conditional, looping, and procedure structures. An example program fragment written in an Algol-like language is greatly improved by transformations enabled by a user-provided assertion about a data array. A compilation model based on the use of source-to-source program transformations is used to provide a framework for discussing issues of code generation, compilation of high-level languages such as APL, and eliminating overhead commonly associated with modular structured programming. Application of the compilation model to several different languages is discussed.
Proving the Correctness of Compiler Optimisations Based on a Global Analysis: A Study of Strictness Analysis
1992
Abstract

Cited by 15 (3 self)
A substantial amount of work has been devoted to the proof of correctness of various program analyses but much less attention has been paid to the correctness of compiler optimisations based on these analyses. In this paper we tackle the problem in the context of strictness analysis for lazy functional languages. We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis.
Dependence-Based Representations for Programs with Reference Variables
1991
Abstract

Cited by 5 (0 self)
Three features common to modern programming languages are popular because they simplify the development of efficient programs. The first, the assignment statement, allows the components of a data structure to be redefined as a computation progresses. The second, dynamic allocation, allows memory for data structures to be acquired, destroyed, and reused as needed. The third, the reference (i.e., pointer) variable, allows multiple data structures to share a common substructure. These three features, unfortunately, make it difficult to estimate program behavior at compile time. Such estimates play a crucial role in the (automatic) improvement, modification, and reuse of existing software. The first part of this thesis develops...
Semantic Refinement of Concurrent Object Systems Based on Serializability
Object Orientation with Parallelism and Persistence, 1996
Proving the Correctness of Compiler Optimisations Based on Strictness Analysis
In Proceedings of the 5th Int. Symp. on Programming Language Implementation and Logic Programming, LNCS 714, 1993
Abstract

Cited by 4 (2 self)
We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis.

1 Introduction

Realistic compilers for imperative or functional languages include a number of optimisations based on non-trivial global analyses. Proving the correctness of such optimising compilers can be done in three steps:
1. proving the correctness of the original (unoptimised) compiler;
2. proving the correctness of the analysis; and
3. proving the correctness of the modifications of the simple-minded compiler to exploit the results of the analysis.
A substantial amount of work has been devoted to steps (1) and (2) but there have been surprisingly few attempts at tackling step (3). In this paper we show how to carry out this third step in the...
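The kind of optimisation this abstract refers to can be illustrated with a small sketch (a Python simplification, not the paper's continuation-based formulation): in a lazy language, arguments arrive as thunks, and when strictness analysis shows that a function always forces an argument, the compiler may pass that argument already evaluated without changing the result.

```python
# Illustrative sketch of a strictness-analysis-enabled optimisation
# (not the paper's CPS formulation). In a lazy language every argument
# is a thunk; if a function is strict in an argument (it always forces
# it), the thunk can be eliminated and the argument evaluated eagerly.

def make_thunk(f):
    """Delay evaluation of f(); memoize the result on first force."""
    cell = []
    def force():
        if not cell:
            cell.append(f())
        return cell[0]
    return force

# Lazy calling convention: every argument is a thunk that must be forced.
def add_lazy(x_thunk, y_thunk):
    return x_thunk() + y_thunk()

# `add` is strict in both arguments, so an optimised compiler may use an
# eager calling convention instead, with no change in observable result.
def add_strict(x, y):
    return x + y

lazy_result = add_lazy(make_thunk(lambda: 1 + 2), make_thunk(lambda: 4 * 10))
strict_result = add_strict(1 + 2, 4 * 10)
print(lazy_result, strict_result)  # → 43 43
```

The correctness claim being proved in the paper is precisely that this substitution of calling conventions, when licensed by the analysis, preserves the meaning of the program.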
CPS-Translation and the Correctness of Optimising Compilers
1992
Abstract

Cited by 1 (0 self)
We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis. These benefits are especially significant in the presence of partially evaluated data structures.

1 Introduction

Realistic compilers for imperative or functional languages include a number of optimisations based on non-trivial global analyses. Proving the correctness of such optimising compilers should involve three steps:
1. proving the correctness of the original (unoptimised) compiler;
2. proving the correctness of the analysis; and
3. proving the correctness of the modifications of the simple-minded compiler to exploit the results of the analysis.
A substantial amount of work has been devoted to steps (1) and (2) but there has been surprisingly ...
Abstract
Denoting a version of Hoare's system for proving partial correctness of recursive programs by H, we present an extension D, which includes the rules of H, four special-purpose rules, and rules inverse to those of Hoare. D is shown to be a complete system (in Cook's sense) for proving deductions of the form φ1, ..., φn ⊢ φ over a language whose wffs are assertions in some assertion language L and partial correctness specifications of the form p{S}q. All valid formulae of L are taken as axioms of D. It is shown that D is sufficient for proving partial correctness, total correctness and program equivalence as well as other important properties of programs, the proofs of which are
Towards an Automatic Derivation of Tarjan’s Algorithm for Detecting Strongly Connected Components in Directed Graphs
2006
Abstract
Ideally, algorithms should be easy to understand and perform efficiently. However, these two requirements are often contradictory. In this thesis, by describing a semi-automatic derivation of an efficient algorithm for detecting strongly connected components, we argue that efficiency may be derivable, thereby satisfying both requirements. First, some basic graph theory will be reviewed. Then we will focus on some existing algorithms, among which Tarjan's algorithm is the best known. Some attention is given to parallel algorithms for detecting strongly connected components. Next, we start with a simple but inefficient algorithm for detecting strongly connected components. This algorithm will be transformed step by step into a more efficient algorithm. Finally, we will present some test results and compare the efficiency of the resulting algorithm to Tarjan's algorithm.
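For reference, Tarjan's algorithm (the best-known target of the derivation discussed above) can be stated compactly; the following is a standard recursive rendering, not the thesis's derived variant.

```python
# Standard recursive rendering of Tarjan's strongly-connected-components
# algorithm (linear time in nodes + edges).

def tarjan_scc(graph):
    """graph: dict mapping each node to a list of successor nodes.
    Returns a list of SCCs, each a list of nodes."""
    index = {}        # discovery order of each node
    lowlink = {}      # smallest index reachable from the node's DFS subtree
    on_stack = set()
    stack = []
    sccs = []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:   # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Example: 1 -> 2 -> 3 -> 1 forms one SCC; node 4 is on its own.
components = tarjan_scc({1: [2], 2: [3], 3: [1, 4], 4: []})
print(sorted(sorted(c) for c in components))  # → [[1, 2, 3], [4]]
```

The subtlety that makes the algorithm hard to derive informally, and interesting as a derivation target, is the `lowlink` bookkeeping: it lets a single depth-first pass detect SCC roots without a second traversal.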
unknown title
Abstract
Programmers frequently face the problem of integrating several variants of a base program. Semantics-based program integration is a technique that attempts to create an integrated program that incorporates the changed computations of the variants as well as the computations of the base program that are preserved in all variants. Horwitz, Prins, and Reps were the first to address the problem of semantics-based program integration. They presented an integration algorithm that creates the integrated program by merging certain program slices of the variants. Our study provides semantic foundations for their approach: we show that the integrated program produced by their algorithm includes all required computations. We also develop a new program-integration algorithm with the same semantic properties. In addition, the new integration algorithm has two significant characteristics: (1) it is extensible in that it can incorporate any techniques for detecting program components with equivalent behaviors, and (2) it can accommodate semantics-preserving transformations. The new integration algorithm improves on the integration algorithm of Horwitz, Prins, and Reps in that there are classes of program modifications for which their algorithm reports interference while the new integration algorithm produces satisfactory integrated programs. One fundamental problem in program integration is to detect program components with equivalent