Results 1 - 7 of 7
Code Motion and Code Placement: Just Synonyms?
, 1997
"... We prove that there is no difference between code motion (CM ) and code placement (CP) in the traditional syntactic setting, however, a dramatic difference in the semantic setting. We demonstrate this by reinvestigating semantic CM under the perspective of the recent development of syntactic CM. B ..."
Abstract

Cited by 10 (2 self)
We prove that there is no difference between code motion (CM) and code placement (CP) in the traditional syntactic setting, but a dramatic difference in the semantic setting. We demonstrate this by reinvestigating semantic CM from the perspective of the recent development of syntactic CM. Besides clarifying and highlighting the analogies and essential differences between the syntactic and the semantic approach, this leads as a side effect to a drastic reduction of the conceptual complexity of the value-flow based procedure for semantic CM of [28], as the original bidirectional analysis is decomposed into purely unidirectional components. On the theoretical side, this establishes a natural semantic understanding in terms of the Herbrand interpretation (transparent equivalence), and thus eases the proof of correctness; moreover, it shows the frontier of semantic CM, and gives a reason for the lack of algorithms going beyond it. On the practical side, it simplifies the implement...
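To illustrate the kind of transformation syntactic code motion performs, here is a minimal before/after sketch of hoisting a loop-invariant computation out of a loop (a hypothetical example of ours, not drawn from the paper):

```python
def before(xs, a, b):
    # The expression a * b is recomputed on every iteration,
    # although its value never changes inside the loop.
    total = 0
    for x in xs:
        total += x * (a * b)
    return total

def after(xs, a, b):
    # Code motion hoists the loop-invariant computation to a single
    # evaluation before the loop, stored in a temporary h.
    h = a * b
    total = 0
    for x in xs:
        total += x * h
    return total
```

Both versions compute the same result; the moved computation simply executes once instead of once per iteration.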
Principled Strength Reduction
 Algorithmic Languages and Calculi
, 1996
"... This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation \Phi, incrementalizes f with respect to \Phi, and adopts an incrementalized loop body to form a new loop that is more efficient. Thre ..."
Abstract

Cited by 10 (9 self)
This paper presents a principled approach for optimizing iterative (or recursive) programs. The approach formulates a loop body as a function f and a change operation Φ, incrementalizes f with respect to Φ, and adopts an incrementalized loop body to form a new loop that is more efficient. Three general optimizations are performed as part of the adoption; they systematically handle initializations, termination conditions, and final return values on exits of loops. These optimizations are either omitted, or done in implicit, limited, or ad hoc ways in previous methods. The new approach generalizes classical loop optimization techniques, notably strength reduction, in optimizing compilers, and it unifies and systematizes various optimization strategies in transformational programming. Such principled strength reduction performs drastic program efficiency improvement via incrementalization and appreciably reduces code size via associated optimizations. We give examples where this app...
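As a concrete, hypothetical illustration of strength reduction viewed as incrementalization: when the loop variable changes by +1 each iteration, the product i * c can be maintained incrementally, replacing a multiplication per iteration with an addition (the names below are our own, not the paper's):

```python
def multiples_naive(n, c):
    # The loop body recomputes i * c from scratch on every iteration.
    out = []
    for i in range(n):
        out.append(i * c)
    return out

def multiples_reduced(n, c):
    # Strength-reduced version: maintain h = i * c incrementally.
    # Under the change i -> i + 1, h changes to h + c, so each
    # multiplication becomes a cheaper addition.
    out = []
    h = 0
    for _ in range(n):
        out.append(h)
        h += c
    return out
```

The incrementalized loop body (h += c) is exactly what classical strength reduction would produce for this loop.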
Interprocedural Herbrand Equalities
 In 14th European Symp. on Programming (ESOP)
, 2005
"... Abstract. We present an aggressive interprocedural analysis for inferring value equalities which are independent of the concrete interpretation of the operator symbols. These equalities, called Herbrand equalities, are therefore an ideal basis for truly machineindependent optimizations as they hold ..."
Abstract

Cited by 9 (2 self)
We present an aggressive interprocedural analysis for inferring value equalities which are independent of the concrete interpretation of the operator symbols. These equalities, called Herbrand equalities, are therefore an ideal basis for truly machine-independent optimizations, as they hold on every machine. Besides a general correctness theorem, covering arbitrary call-by-value parameters and local and global variables, we also obtain two new completeness results: one by constraining the analysis problem to Herbrand constants, and one by allowing side-effect-free functions only. Thus, if we miss a constant/equality in these two scenarios, then there exists a separating interpretation of the operator symbols.
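To make the notion of a Herbrand equality concrete, here is a toy sketch of our own (not the paper's interprocedural analysis): operators stay uninterpreted, and two expressions are Herbrand-equal exactly when their syntactic terms coincide after substituting variable definitions:

```python
def herbrand_term(expr, env):
    # Build the Herbrand (purely syntactic) term of an expression.
    # Operators remain uninterpreted constructors; variables are
    # replaced by the terms they were last assigned.  expr is either
    # a variable name or a tuple (op, arg1, arg2).
    if isinstance(expr, str):
        return env.get(expr, expr)
    op, a, b = expr
    return (op, herbrand_term(a, env), herbrand_term(b, env))

# Simulate straight-line code:  x := a + b;  y := a + b;  z := a * b
env = {}
env["x"] = herbrand_term(("+", "a", "b"), env)
env["y"] = herbrand_term(("+", "a", "b"), env)
env["z"] = herbrand_term(("*", "a", "b"), env)

# x and y are Herbrand-equal: the same term, whatever "+" means on
# any machine.  z is not, since "*" is a different uninterpreted
# operator, so there is a separating interpretation.
print(env["x"] == env["y"])
print(env["x"] == env["z"])
```

An equality established this way holds under every interpretation of the operators, which is why it is machine-independent.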
Cost-Optimal Code Motion
 ACM Transactions on Programming Languages and Systems
, 1998
"... this article. As Figure 2 illustrated, our transformation consists of replacing each instance of the candidate expression (a b) by a new variable (h), and inserting computations that in their respective contexts have the same e#ect as an assignment of the candidate expression to the variable. In ..."
Abstract

Cited by 8 (1 self)
this article. As Figure 2 illustrated, our transformation consists of replacing each instance of the candidate expression (a b) by a new variable (h), and inserting computations that in their respective contexts have the same effect as an assignment of the candidate expression to the variable. In order to keep our framework general, we concern ourselves only with selecting the insertion points, not with the question of which computation should be inserted at each selected point. The latter is instead determined by the particular instantiation of our framework, which must provide a function from nodes to statements, computation, such that computation(n) is the statement to insert at node n if node n is chosen as an insertion point. This function is, as usual, implicitly parameterized by the candidate expression and flow graph. For example, given the flow graph of Figure 1 and the candidate expression a b, our example instantiation of the transformation framework would have computation(5) equal to h 6. (We will later show how this example computation function is defined.) Although the selection of specific computations to insert is outside the scope of our framework, our transformation framework needs to know something about the computations if it is to select insertion points wisely. Namely, it needs to know how expensive the computation inserted at each point would be, were that point selected. Therefore, our framework is instantiated not only with a computation function, but also with a cost function that specifies a numerical cost for each of the computations. That is, for a node n, the cost of computing computation(n) at node n is given by cost(n). We take this cost function to be another implicit parameter, treating it as a fixed, given function ...
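The two framework parameters described above can be sketched as plain functions; the node numbers, statements, and costs below are illustrative values of ours, not the paper's figures:

```python
# A toy instantiation of the framework's two parameters for one
# candidate expression.  computation(n) is the statement to insert
# at node n if it is chosen; cost(n) is its numerical cost there.
computation = {
    2: "h := a + b",   # recompute the candidate expression
    5: "h := c",       # a cheaper equivalent assumed available here
}
cost = {2: 10, 5: 1}

# Given several feasible sets of insertion points (each of which
# would make the transformation correct), a cost-optimal framework
# selects the set with minimal total cost.
feasible_choices = [{2}, {5}, {2, 5}]
best = min(feasible_choices, key=lambda pts: sum(cost[n] for n in pts))
print(best)  # {5}
```

The point of the separation is visible here: the framework only compares costs of candidate insertion-point sets; what gets inserted at each chosen node is entirely the instantiation's business.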
Bilateral Algorithms for Symbolic Abstraction
"... Abstract. Given a concrete domain C, a concrete operation τ: C → C, and an abstract domain A, a fundamental problem in abstract interpretation is to find the best abstract transformer τ # : A → A that overapproximates τ. This problem, as well as several other operations needed by an abstract interpr ..."
Abstract

Cited by 4 (3 self)
Given a concrete domain C, a concrete operation τ: C → C, and an abstract domain A, a fundamental problem in abstract interpretation is to find the best abstract transformer τ#: A → A that over-approximates τ. This problem, as well as several other operations needed by an abstract interpreter, can be reduced to the problem of symbolic abstraction: the symbolic abstraction of a formula ϕ in logic L, denoted by ̂α(ϕ), is the best value in A that over-approximates the meaning of ϕ. When the concrete semantics of τ is defined in L using a formula ϕτ, the transformer τ# can be computed as ̂α(ϕτ). In this paper, we present a new framework for performing symbolic abstraction, discuss its properties, and present several instantiations for various logics and abstract domains. The key innovation is to use a bilateral successive-approximation algorithm, which maintains both an over-approximation and an under-approximation of the desired answer. The advantage of having a nontrivial over-approximation is that it makes the technique resilient to timeouts.
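The bilateral idea can be caricatured on the interval domain. In this sketch of ours (not the paper's algorithm), the goal is the smallest interval containing the models of a predicate; the current bounds form the over-approximation, the hull of witnesses found so far the under-approximation, and running out of budget still yields a sound answer:

```python
def bilateral_interval(phi, lo, hi, budget):
    # Toy bilateral successive approximation over intervals.  Goal:
    # the smallest interval containing {x in [lo, hi] : phi(x)}.
    # [lo, hi] is the over-approximation.  Each step either tightens
    # an endpoint (proving it infeasible) or turns it into a witness,
    # at which point over- and under-approximation meet at that end.
    # If the budget runs out, the current over-approximation is still
    # sound -- the resilience to timeouts the bilateral scheme offers.
    steps = 0
    while not phi(lo):
        steps += 1
        if steps > budget:
            return lo, hi, True   # timed out: sound over-approximation
        lo += 1
    while not phi(hi):
        steps += 1
        if steps > budget:
            return lo, hi, True
        hi -= 1
    return lo, hi, False          # both approximations agree

print(bilateral_interval(lambda x: x * x <= 25, -10, 10, 100))
# (-5, 5, False)
```

A unilateral algorithm that only grows an under-approximation would have nothing sound to return on timeout; here the interim bounds are always a valid (if imprecise) answer.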
A lifetime optimal algorithm for speculative PRE
 ACM Transactions on Architecture and Code Optimization
"... A lifetime optimal algorithm, called MCPRE, is presented for the first time that performs speculative PRE based on edge profiles. In addition to being computationally optimal in the sense that the total number of dynamic computations for an expression in the transformed code is minimized, MCPRE is ..."
Abstract

Cited by 2 (0 self)
A lifetime-optimal algorithm, called MC-PRE, is presented for the first time that performs speculative PRE based on edge profiles. In addition to being computationally optimal, in the sense that the total number of dynamic computations for an expression in the transformed code is minimized, MC-PRE is also lifetime optimal, since the lifetimes of the introduced temporaries are minimized as well. The key to achieving lifetime optimality lies not only in finding a unique minimum cut on a transformed graph of a given CFG but also in performing a data-flow analysis directly on the CFG to avoid making unnecessary code insertions and deletions. The lifetime-optimal results are rigorously proved. We evaluate our algorithm in GCC against three previously published PRE algorithms, namely, MC-PREcomp (Cai and Xue's computationally optimal version of MC-PRE), LCM (Knoop, Rüthing and Steffen's lifetime-optimal algorithm for performing non-speculative PRE) and CMP-PRE (Bodik, Gupta and Soffa's PRE algorithm based on code-motion-preventing (CMP) regions, which is speculative but not computationally optimal). We report and analyze our experimental results, obtained from both actual program execution and instrumentation, for all 22 C, C++ and FORTRAN 77 benchmarks from SPEC CPU2000 on an Itanium 2 computer system. Our results show that MC-PRE (or MC-PREcomp) is capable of eliminating more partial redundancies than both LCM and CMP-PRE (especially in functions with complex control flow), and in addition, MC-PRE inserts temporaries with shorter lifetimes than MC-PREcomp. Both benefits contribute to performance improvements in the benchmark programs, at the cost of only small compile-time and code-size increases in some benchmarks.
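The min-cut core of such an approach can be illustrated with a standard max-flow computation. This is an Edmonds-Karp sketch of ours over a made-up profile-weighted graph; the paper's actual transformation of the CFG into the flow network is more involved:

```python
from collections import defaultdict, deque

def min_cut_value(edges, s, t):
    # Edmonds-Karp max-flow.  By the max-flow/min-cut theorem, the
    # result equals the total weight of the lightest set of edges
    # separating s from t -- in a PRE setting, the cheapest set of
    # edges (by execution frequency) on which to insert computations.
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # Find the bottleneck and update residual capacities.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        total += b

# Hypothetical edge-profile weights on a tiny graph.
g = [("s", "a", 4), ("s", "b", 3), ("a", "t", 2), ("b", "t", 5), ("a", "b", 1)]
print(min_cut_value(g, "s", "t"))  # 6
```

Intuitively, edges crossing the minimum cut are the insertion points whose combined dynamic execution count is minimal, which is what makes the speculative placement computationally optimal.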
Incremental computation for transformational software development
"... Given a program f and an input change, w e wish to obtain an incremental program that computes f (x y) e ciently by making use of the value of f (x), the intermediate results computed in computing f (x), and auxiliary information about x that can be inexpensively maintained. Obtaining such increment ..."
Abstract

Cited by 2 (2 self)
Given a program f and an input change operation ⊕, we wish to obtain an incremental program that computes f(x ⊕ y) efficiently by making use of the value of f(x), the intermediate results computed in computing f(x), and auxiliary information about x that can be inexpensively maintained. Obtaining such incremental programs is an essential part of the transformational-programming approach to software development and enhancement. This paper presents a systematic approach that discovers a general class of useful auxiliary information, combines it with useful intermediate results, and obtains an efficient incremental program that uses and maintains these intermediate results and auxiliary information. We give a number of examples from list processing, VLSI circuit design, image processing, etc.
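A minimal sketch of the idea, with our own names and an append-an-element change operation (not an example from the paper): the incremental program reuses cached intermediate results of f(x) instead of recomputing from scratch:

```python
def average(xs):
    # From-scratch computation of f(x): O(n) per call.
    return sum(xs) / len(xs)

def average_inc(cache, y):
    # Incremental program for the change "append y".  It reuses the
    # intermediate results of the previous computation (the running
    # sum and count) rather than the whole input: O(1) per update.
    s, n = cache
    s, n = s + y, n + 1
    return s / n, (s, n)

xs = [2, 4, 6]
cache = (sum(xs), len(xs))        # intermediate results of average(xs)
val, cache = average_inc(cache, 8)
print(val)  # 5.0, the same as average([2, 4, 6, 8])
```

Here (sum, count) plays the role of the maintained intermediate results; an analysis like the paper's would also discover auxiliary information when the cached value and intermediate results alone do not suffice.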