Results 1–10 of 22
Simple Relational Correctness Proofs for Static Analyses and Program Transformations
, 2004
"... We show how some classical static analyses for imperative programs, and the optimizing transformations which they enable, may be expressed and proved correct using elementary logical and denotational techniques. The key ingredients are an interpretation of program properties as relations, rather tha ..."
Abstract

Cited by 88 (9 self)
 Add to MetaCart
(Show Context)
We show how some classical static analyses for imperative programs, and the optimizing transformations which they enable, may be expressed and proved correct using elementary logical and denotational techniques. The key ingredients are an interpretation of program properties as relations, rather than predicates, and a realization that although many program analyses are traditionally formulated in very intensional terms, the associated transformations are actually enabled by more liberal extensional properties.
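The relational view the abstract describes can be made concrete with a small sketch (illustrative only, not the paper's formalism): the property "variable x is dead" is interpreted as the relation relating any two states that agree everywhere except possibly on x, and a transformation is justified when the original and optimized statements map related inputs to related outputs.

```python
# Illustrative sketch: a program property as a relation on states rather
# than a predicate. "x is dead" is the relation R_x relating states that
# agree on every variable except possibly x. The statement names
# (stmt_original, stmt_optimized) are hypothetical examples.

def related_except(s1, s2, dead_var):
    """R: states (dicts) equal on every variable except dead_var."""
    keys = set(s1) | set(s2)
    return all(s1.get(k) == s2.get(k) for k in keys if k != dead_var)

def stmt_original(s):      # y := z + 1; x := 0   (x is never read afterwards)
    s = dict(s); s["y"] = s["z"] + 1; s["x"] = 0; return s

def stmt_optimized(s):     # y := z + 1           (dead assignment removed)
    s = dict(s); s["y"] = s["z"] + 1; return s

# Check the relational correctness condition on a pair of related inputs.
s1 = {"x": 7, "y": 0, "z": 3}
s2 = {"x": 99, "y": 0, "z": 3}          # related: they differ only on x
assert related_except(s1, s2, "x")
assert related_except(stmt_original(s1), stmt_optimized(s2), "x")
```

Because the condition is extensional (it constrains only input/output states), it justifies the transformation without reference to how the analysis computed deadness.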
The Klaim Project: Theory and Practice
 Global Computing: Programming Environments, Languages, Security and Analysis of Systems, volume 2874 of LNCS
, 2003
"... Klaim (Kernel Language for Agents Interaction and Mobility) is an experimental language specifically designed to program distributed systems consisting of several mobile components that interact through multiple distributed tuple spaces. Klaim primitives allow programmers to distribute and retri ..."
Abstract

Cited by 30 (14 self)
 Add to MetaCart
Klaim (Kernel Language for Agents Interaction and Mobility) is an experimental language specifically designed to program distributed systems consisting of several mobile components that interact through multiple distributed tuple spaces. Klaim primitives allow programmers to distribute and retrieve data and processes to and from the nodes of a net. Moreover, localities are first-class citizens that can be dynamically created and communicated over the network. Components, both stationary and mobile, can explicitly refer to and control the spatial structures of the network.
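The tuple-space primitives underlying Klaim can be sketched in a few lines (a toy, single-process version; the names `out`, `inp`, and `read` follow Linda/Klaim usage, and distribution, localities, and mobility are ignored here):

```python
# Minimal tuple-space sketch, assuming Linda-style matching where None in a
# pattern is a formal field matching any value. Not Klaim's actual API.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):                 # add a tuple to the space
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def read(self, *pattern):            # non-destructive lookup
        for t in self.tuples:
            if self._match(pattern, t):
                return t
        return None

    def inp(self, *pattern):             # destructive retrieval
        t = self.read(*pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out("temperature", "rome", 21)
assert ts.read("temperature", None, None) == ("temperature", "rome", 21)
assert ts.inp("temperature", "rome", None) == ("temperature", "rome", 21)
assert ts.read("temperature", None, None) is None
```

Klaim's contribution is layering localities and process mobility on top of such primitives, so that `out` and `in` target explicitly named, dynamically created nodes.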
Type-Based Useless Variable Elimination
, 1999
"... We show a typebased method for useless variable elimination, i.e., transformation that eliminates variables whose values contribute nothing to the final outcome of a computation, and prove its correctness. The algorithm is a surprisingly simple extension of the usual type reconstruction algorithm. ..."
Abstract

Cited by 20 (4 self)
 Add to MetaCart
We show a type-based method for useless variable elimination, i.e., a transformation that eliminates variables whose values contribute nothing to the final outcome of a computation, and prove its correctness. The algorithm is a surprisingly simple extension of the usual type reconstruction algorithm. Our method seems more attractive than Wand and Siveroni's 0CFA-based method in many respects. First, it is efficient: it runs in time almost linear in the size of an input expression for a simply-typed calculus, while the 0CFA-based method may require cubic time. Second, our transformation can be shown to be optimal among those that preserve well-typedness, both for the simply-typed language and for an ML-style polymorphically-typed language. On the other hand, the 0CFA-based method is not optimal for the polymorphically-typed language.
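The kind of variable this abstract targets can be illustrated with a rough dependency-based sketch on a tiny let-language (this is not the paper's type-based reconstruction; the expression encoding is hypothetical). A let-binding is dropped when its variable does not contribute to the final value:

```python
# Expressions: ("let", x, bound, body), ("var", x), ("num", n), ("add", e1, e2).
# Dropping a binding is only sound here because bound expressions in this
# toy language are pure and terminating.

def free_vars(e):
    tag = e[0]
    if tag == "var": return {e[1]}
    if tag == "num": return set()
    if tag == "add": return free_vars(e[1]) | free_vars(e[2])
    _, x, bound, body = e
    return free_vars(bound) | (free_vars(body) - {x})

def eliminate(e):
    tag = e[0]
    if tag in ("var", "num"): return e
    if tag == "add": return ("add", eliminate(e[1]), eliminate(e[2]))
    _, x, bound, body = e
    body = eliminate(body)
    if x not in free_vars(body):         # x is useless: drop the binding
        return body
    return ("let", x, eliminate(bound), body)

# let w = 9 in 1 + 2   ==>   1 + 2
expr = ("let", "w", ("num", 9), ("add", ("num", 1), ("num", 2)))
assert eliminate(expr) == ("add", ("num", 1), ("num", 2))
```

The paper's point is that for higher-order programs, where usefulness flows through function arguments, the same question can be answered by extending type reconstruction instead of running a flow analysis like 0CFA.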
Eliminating dead code on recursive data
 Science of Computer Programming
, 1999
"... Abstract. This paper describes a general and powerful method for dead code analysis and elimination in the presence of recursive data constructions. We represent partially dead recursive data using liveness patterns based on general regular tree grammars extended with the notion of live and dead, an ..."
Abstract

Cited by 14 (4 self)
 Add to MetaCart
This paper describes a general and powerful method for dead code analysis and elimination in the presence of recursive data constructions. We represent partially dead recursive data using liveness patterns based on general regular tree grammars extended with the notions of live and dead, and we formulate the analysis as computing liveness patterns at all program points based on program semantics. This analysis yields a most precise liveness pattern for the data at each program point, which is significantly more precise than results from previous methods. The analysis algorithm takes cubic time in terms of the size of the program in the worst case but is very efficient in practice, as shown by our prototype implementation. The analysis results are used to identify and eliminate dead code. The general framework for representing and analyzing properties of recursive data structures using general regular tree grammars applies to other analyses as well.
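A drastically simplified picture of liveness patterns on recursive data (the paper uses regular tree grammars; here a pattern is just `"L"` for live, `"D"` for dead, or a `("cons", head_pat, tail_pat)` node, all hypothetical encodings). Projecting a value through a pattern replaces dead components with a placeholder, showing which parts of a partially dead structure could be eliminated:

```python
# Illustrative sketch only: finite patterns over cons lists, where the
# paper's grammars can also describe infinite/recursive pattern shapes.

DEAD = "_"

def project(pattern, value):
    if pattern == "L": return value
    if pattern == "D": return DEAD
    _, hp, tp = pattern               # ("cons", head_pattern, tail_pattern)
    head, tail = value
    return (project(hp, head), project(tp, tail))

# Only the first two heads are live; the rest of the list is dead.
pat = ("cons", "L", ("cons", "L", "D"))
lst = (1, (2, (3, (4, None))))
assert project(pat, lst) == (1, (2, "_"))
```

The analysis proper runs backwards from the program's outputs, computing such a pattern at every program point; a constructor whose result projects entirely to dead can then be removed.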
Boolean constraints for binding-time analysis
 In Programs as Data Objects II, number 2053 in Lecture Notes in Computer Science
, 2001
"... ..."
(Show Context)
Removing Redundant Arguments of Functions
 In 9th International Conference on Algebraic Methodology And Software Technology, AMAST 2002, H. Kirchner and C. Ringeissen, Eds. Lecture Notes in Computer Science
, 2002
"... The application of automatic transformation processes during the formal development and optimization of programs can introduce encumbrances in the generated code that programmers usually (or presumably) do not write. An example is the introduction of redundant arguments in the functions defined in t ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
The application of automatic transformation processes during the formal development and optimization of programs can introduce encumbrances in the generated code that programmers usually (or presumably) do not write. An example is the introduction of redundant arguments in the functions defined in the program. Redundancy of a parameter means that replacing it by any expression does not change the result. In this work, we provide a method for the analysis and elimination of redundant arguments in term rewriting systems as a model for the programs that can be written in more sophisticated languages.
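The property "replacing a parameter by any expression does not change the result" suggests a simple testing-based check (much weaker than the paper's analysis over term rewriting systems, which gives sufficient conditions to *prove* redundancy; sampling can only refute it). All names below are hypothetical:

```python
# A parameter of f is a *candidate* for removal if replacing its value by
# arbitrary other values never changes f's result on the sampled inputs.

def candidate_redundant(f, arg_index, samples, replacements):
    for args in samples:
        base = f(*args)
        for r in replacements:
            patched = list(args)
            patched[arg_index] = r
            if f(*patched) != base:
                return False             # witness: the argument matters
    return True

# g ignores its second argument entirely.
def g(x, unused):
    return x * x

samples = [(2, 0), (5, 1), (-3, 7)]
assert candidate_redundant(g, 1, samples, replacements=[10, -1, None])
assert not candidate_redundant(g, 0, samples, replacements=[10, -1])
```

Once an argument is proved redundant, the paper's transformation removes it from the function's definition and from every call site, shrinking the code produced by earlier automatic transformations.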
Useless-Code Elimination and Program Slicing for the Pi-Calculus
, 2003
"... In this paper, we study program transformations called uselesscode elimination and program slicing in the context of the #calculus. ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
In this paper, we study program transformations called useless-code elimination and program slicing in the context of the π-calculus.
Relating step-indexed logical relations and bisimulations
, 2009
"... Operational logical relations and bisimulations are two particularly successful syntactic techniques for reasoning about program equivalence. Although both techniques seem to have common intuitions, their basis is on different mathematical principles: induction for the former, and coinduction for t ..."
Abstract
 Add to MetaCart
(Show Context)
Operational logical relations and bisimulations are two particularly successful syntactic techniques for reasoning about program equivalence. Although the two techniques share common intuitions, they rest on different mathematical principles: induction for the former, and coinduction for the latter. The intuitive overlap between the two techniques seems even greater, but their mathematical connection more elusive, when each is combined with step-based reasoning, as in Appel-McAllester-Ahmed step-indexed (SI) logical relations [5, 4] and Koutavas-Wand (KW) bisimulations [12, 11]. In this paper we give an alternative formulation of an SI logical relation in the style of Appel-McAllester-Ahmed. We derive this from a definition that is parametric in the indexing scheme, by requiring it to satisfy the desirable properties of an SI logical relation. We then argue that SI logical relations and KW bisimulations approximate the same relation, each in a distinct way. Finally, we prove a somewhat surprising commutation theorem between unions and intersections that may be used as a new proof technique.
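For orientation, the standard shape of a step-indexed value relation at function type (the generic Appel-McAllester/Ahmed form; the notation is ours, not the paper's parametric definition) is:

```latex
\mathcal{V}_k[\![\tau_1 \to \tau_2]\!] =
\{(\lambda x.\,e_1,\ \lambda x.\,e_2) \mid
  \forall j < k.\ \forall (v_1, v_2) \in \mathcal{V}_j[\![\tau_1]\!].\
  (e_1[v_1/x],\ e_2[v_2/x]) \in \mathcal{E}_j[\![\tau_2]\!]\}
```

The index $k$ bounds the number of evaluation steps for which the terms must behave alike, which is why such definitions are well-founded by induction on $k$; the contrast with the coinductively defined KW bisimulations is exactly the gap the paper's commutation theorem addresses.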
Consultant
, 2004
"... Abstract. For functional programs, unboxing aggregate data structures such as tuples removes memory indirections and frees dead components of the decoupled structures. To explore the consequences of such optimizations in a wholeprogram compiler, this paper presents a tuple flattening transformation ..."
Abstract
 Add to MetaCart
(Show Context)
For functional programs, unboxing aggregate data structures such as tuples removes memory indirections and frees dead components of the decoupled structures. To explore the consequences of such optimizations in a whole-program compiler, this paper presents a tuple-flattening transformation and a framework that allows the formal study and comparison of different flattening schemes. We present our transformation over functional SSA, a simply-typed, monomorphic language, and show that the transformation is type-safe. The flattening algorithm defined by our transformation has been incorporated into MLton, a whole-program, optimizing compiler for SML. Experimental results indicate that aggressive tuple flattening can lead to substantial improvements in runtime performance, a reduction in code size, and a decrease in total allocation without a significant increase in compilation time.
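The before/after effect of tuple flattening can be shown in miniature (an illustrative example; MLton's actual transformation works over a typed SSA intermediate representation, and the function names here are hypothetical):

```python
# Before: the caller allocates a pair and the callee immediately takes it
# apart. After flattening, the components are passed as separate parameters,
# so no tuple is allocated and dead components could simply be dropped.

def magnitude_boxed(point):          # point = (x, y): a tuple is built
    x, y = point
    return x * x + y * y

def magnitude_flat(x, y):            # flattened calling convention
    return x * x + y * y

assert magnitude_boxed((3, 4)) == magnitude_flat(3, 4) == 25
```

Because MLton compiles the whole program at once, every call site can be rewritten to the flat convention consistently, which is what makes aggressive flattening safe there.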
Abstract interpreters for free
, 2010
"... ... semantics bear an uncanny resemblance. In this work, we present an analysisdesign methodology that both explains and exploits that resemblance. Specifically, we present a twostep method to convert a smallstep concrete semantics into a family of sound, computable abstract interpretations. The f ..."
Abstract
 Add to MetaCart
(Show Context)
... semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step refactors the concrete state-space to eliminate recursive structure; this refactoring simultaneously determines a store-passing-style transformation on the underlying concrete semantics and a Galois connection. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field-, and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing-style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of interpretations; the second is a family of analyses, none of which appear in the literature on static analysis of higher-order programs.
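The soundness condition tying a concrete semantics to its abstraction can be sketched very concretely (a sign abstraction for a one-instruction "double the counter" machine, not the paper's CPS construction; all names are hypothetical):

```python
# The abstraction map alpha sends a concrete state into a finite domain.
# Soundness of the abstract step requires the square to commute up to
# approximation: alpha(step(s)) must be covered by astep(alpha(s)).

def step(n):                 # concrete small-step: double the counter
    return 2 * n

def alpha(n):                # abstraction into {NEG, ZERO, POS}
    return "ZERO" if n == 0 else ("POS" if n > 0 else "NEG")

def astep(a):                # abstract transfer function for doubling
    return a                 # doubling preserves sign

# On sample states the diagram commutes exactly, because astep happens to
# be the optimal (most precise sound) abstraction of step here.
for n in (-3, 0, 5):
    assert alpha(step(n)) == astep(alpha(n))
```

The paper's method mechanizes this picture: once the state-space is refactored and the Galois connection fixed, the "optimal" `astep` is calculated rather than guessed.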