Results 1–10 of 30
Lazy Code Motion
, 1992
Abstract

Cited by 158 (20 self)
We present a bitvector algorithm for the optimal and economical placement of computations within flow graphs, which is as efficient as standard unidirectional analyses. The point of our algorithm is the decomposition of the bidirectional structure of the known placement algorithms into a sequence of a backward and a forward analysis, which directly implies the efficiency result. Moreover, the new compositional structure opens the algorithm for modification: two further unidirectional analysis components exclude any unnecessary code motion. This laziness of our algorithm minimizes the register pressure, which has drastic effects on the runtime behaviour of the optimized programs in practice, where an economical use of registers is essential.
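The decomposition into unidirectional passes can be illustrated at toy scale. The sketch below is illustrative only, not the paper's exact equation system: the flow graph, block names, and the (singleton) expression universe are invented. It runs one backward bitvector analysis (anticipability) and one forward analysis (availability) with the same transfer-function shape, each as a plain round-robin fixpoint.

```python
# Illustrative sketch of two unidirectional bitvector analyses of the kind
# lazy code motion sequences; the flow graph, blocks, and the expression
# universe below are invented for this demo, not taken from the paper.

succ = {"entry": ["b1"], "b1": ["b2", "b3"],
        "b2": ["exit"], "b3": ["exit"], "exit": []}
pred = {n: [] for n in succ}
for n, ss in succ.items():
    for s in ss:
        pred[s].append(n)

EXPRS = {"a+b"}                               # expression universe (singleton)
comp = {"entry": set(), "b1": {"a+b"},        # blocks computing a+b
        "b2": {"a+b"}, "b3": set(), "exit": set()}
kill = {n: set() for n in succ}               # nothing redefines a or b here

def solve(neighbors):
    """Round-robin fixpoint; the meet is set intersection over `neighbors`."""
    sol = {n: (set(EXPRS) if neighbors[n] else set()) for n in succ}
    changed = True
    while changed:
        changed = False
        for n in succ:
            nbrs = [sol[m] for m in neighbors[n]]
            met = set.intersection(*nbrs) if nbrs else set()
            out = (met - kill[n]) | comp[n]
            if out != sol[n]:
                sol[n], changed = out, True
    return sol

antic = solve(succ)   # backward: computed on every path onward from the block
avail = solve(pred)   # forward: computed on every path reaching the block
```

On this graph, `a+b` is anticipable at `entry` and `b1` (every path onward computes it) but not at `b3`, and it is available at `exit`.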
Optimal Code Motion: Theory and Practice
, 1993
Abstract

Cited by 111 (18 self)
An implementation-oriented algorithm for lazy code motion is presented that minimizes the number of computations in programs while suppressing any unnecessary code motion in order to avoid superfluous register pressure. In particular, this variant of the original algorithm for lazy code motion works on flowgraphs whose nodes are basic blocks rather than single statements, as this format is standard in optimizing compilers. The theoretical foundations of the modified algorithm are given in the first part, where t-refined flowgraphs are introduced to simplify the treatment of flowgraphs whose nodes are basic blocks. The second part presents the `basic block' algorithm in standard notation and gives directions for its implementation in standard compiler environments. Keywords: elimination of partial redundancies, code motion, data flow analysis (bitvector, unidirectional, bidirectional), nondeterministic flowgraphs, t-refined flow graphs, critical edges, lifetimes of registers, com...
Partial Redundancy Elimination in SSA Form
 ACM Transactions on Programming Languages and Systems
, 1999
Abstract

Cited by 34 (1 self)
This paper presents a new approach called SSAPRE [Chow et al. 1997] that shares the optimality properties of the best prior work [Knoop et al. 1992; Knoop et al. 1994; Drechsler and Stadel 1993] and that is based on static single assignment form. Static single assignment form (SSA) is a popular program representation in modern optimizing compilers. Its versatility stems from the fact that, in addition to representing the program, it provides accurate use-definition (use-def) relationships among the program variables in a concise form [Cytron et al. 1991; Wolfe 1996; Chow et al. 1996]. Many efficient global optimization algorithms have been developed based on SSA. Among these optimizations are dead store elimination [Cytron et al. 1991], constant propagation [Wegman and Zadeck 1991], value numbering [Alpern et al. 1988; Rosen et al. 1988; Briggs et al. 1997], induction variable analysis [Gerlek et al. 1995; Liu et al. 1996], live range computation [Gerlek et al. 1994] and global code motion [Click 1995]. Until recently, most uses of SSA have been restricted to solving problems based essentially on program variables. SSA could not readily be applied to solving expression-based problems because the concept of use-def for expressions is less obvious than for variables. This difficulty was mentioned by Dhamdhere et al. in the conclusion of [Dhamdhere et al. 1992]. They state, essentially, that there is no clear connection between the use-def information for variables represented by SSA form and the redundancy properties for expressions. By demonstrating such a connection and exploiting it, our work shows that an SSA-based approach to PRE and other expression-based problems is not only plausible, but also enlightening and practical. Although this paper addresses only the PRE ...
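The point about SSA exposing value identity for expressions can be seen in miniature. In the hedged sketch below (the toy straight-line IR, variable names, and renaming scheme are invented for this demo, not SSAPRE itself), two occurrences of `x + y` denote the same value exactly when their SSA operands coincide, which is what makes the first redundancy visible and the second one spurious.

```python
# Hedged sketch of why SSA exposes expression equivalence: after renaming,
# two occurrences of x + y are the same value precisely when their SSA
# operands coincide. The toy straight-line IR and names are invented here.

version = {"x": 1, "y": 1}          # incoming values start at version 1

def use(v):                          # current SSA name for a use of v
    return f"{v}{version[v]}"

def define(v):                       # fresh SSA name for a definition of v
    version[v] = version.get(v, 0) + 1
    return f"{v}{version[v]}"

stmts = [("a", ("x", "+", "y")),     # a = x + y
         ("b", ("x", "+", "y")),     # b = x + y   (same SSA operands as a's RHS)
         ("x", ("const", 0)),        # x = 0       (new version of x)
         ("c", ("x", "+", "y"))]     # c = x + y   (different SSA operands)

ssa = []
for lhs, rhs in stmts:
    if rhs[0] != "const":
        rhs = (use(rhs[0]), rhs[1], use(rhs[2]))
    ssa.append((define(lhs), rhs))
```

After renaming, `a1` and `b1` share the RHS `x1 + y1`, while `c1` computes `x2 + y1`, so only the second occurrence is redundant.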
Generating Data Flow Analysis Algorithms from Modal Specifications
 SCIENCE OF COMPUTER PROGRAMMING
, 1993
Abstract

Cited by 27 (7 self)
The paper develops a framework based on the idea that modal logic provides an appropriate medium for the specification of data flow analysis (DFA) algorithms as soon as programs are represented as models of the logic. This can be exploited to construct a DFA generator that produces efficient implementations of DFA algorithms from modal specifications by partially evaluating a specific model checker with respect to the specifying modal formula. Moreover, the use of a modal logic as a specification language for DFA algorithms supports the compositional development of specifications and structured proofs of properties of DFA algorithms. The framework is illustrated by means of a real-life example: the problem of determining optimal computation points within flow graphs.
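A toy instance of this specification idea (the graph, the predicates, and the formula below are invented for this sketch, not taken from the paper): a data flow property is written as a modal fixpoint formula and solved by a generic evaluator over the flow graph, here νZ. used ∨ (transparent ∧ has-successor ∧ □Z), which marks the blocks where a single hypothetical expression is anticipable.

```python
# Toy sketch of specifying a DFA property as a modal fixpoint formula and
# evaluating it with a generic checker; graph, predicates, and formula are
# invented for this demo, not the paper's generator.

succ = {"entry": {"b1"}, "b1": {"b2", "b3"},
        "b2": {"exit"}, "b3": {"exit"}, "exit": set()}
used        = {"b1", "b2"}            # blocks that compute the expression
transparent = set(succ)               # no block destroys its value here
internal    = {n for n in succ if succ[n]}   # blocks with a successor

def box(S):                           # modal box: all successors lie in S
    return {n for n in succ if succ[n] <= S}

def gfp(f):                           # greatest fixpoint by iteration
    S = set(succ)
    while f(S) != S:
        S = f(S)
    return S

# nu Z. used or (transparent and internal and box Z)
anticipable = gfp(lambda Z: used | (transparent & internal & box(Z)))
```

The `internal` conjunct encodes the exit boundary condition (□Z would hold vacuously at a block with no successors); the evaluator itself is property-independent, which is the hook for partial evaluation.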
A Generalized Theory of Bit Vector Data Flow Analysis
 ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS
, 1994
Lazy Strength Reduction
 Journal of Programming Languages
Abstract

Cited by 22 (8 self)
We present a bitvector algorithm that uniformly combines code motion and strength reduction, avoids superfluous register pressure due to unnecessary code motion, and is as efficient as standard unidirectional analyses. The point of this algorithm is to combine the concept of lazy code motion of [1] with the concept of unifying code motion and strength reduction of [2, 3, 4, 5]. This results in an algorithm for lazy strength reduction, which consists of a sequence of unidirectional analyses, and is unique in its transformational power.

Keywords: Data flow analysis, program optimization, partial redundancy elimination, code motion, strength reduction, bitvector data flow analyses.

1 Motivation

Code motion improves the runtime efficiency of a program by avoiding unnecessary recomputations of a value at runtime. Strength reduction improves runtime efficiency by reducing "expensive" recomputations to less expensive ones, e.g., by reducing computations involving multiplication to computat...
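The transformation being combined with code motion can be shown as a classic before/after pair. This is a hedged sketch of plain strength reduction only, not the paper's lazy variant (which additionally controls placement); the function names are invented for the demo.

```python
# Before/after sketch of classic strength reduction (names invented; the
# paper's lazy variant additionally controls where the update is placed):
# the per-iteration multiplication i * 4 becomes a running addition.

def offsets_naive(n):
    return [i * 4 for i in range(n)]      # one multiplication per iteration

def offsets_reduced(n):
    out, t = [], 0                        # t tracks the value of i * 4
    for _ in range(n):
        out.append(t)
        t += 4                            # strength-reduced: add, not multiply
    return out
```

Both functions compute the same sequence; the reduced version trades each multiplication for an addition on the induction variable `t`.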
Give-N-Take - A Balanced Code Placement Framework
 IN PROCEEDINGS OF THE SIGPLAN '94 CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION
, 1994
Abstract

Cited by 13 (0 self)
GIVE-N-TAKE is a code placement framework which uses a general producer-consumer concept. An advantage of GIVE-N-TAKE over existing partial redundancy elimination techniques is its concept of production regions, instead of single locations, which can be beneficial for general latency hiding. GIVE-N-TAKE guarantees balanced production, that is, each production will be started and stopped once. The framework can also take advantage of production coming "for free," as induced by side effects, without disturbing balance. GIVE-N-TAKE can place production either before or after consumption, and it also provides the option to hoist code out of potentially zero-trip loop (nest) constructs. GIVE-N-TAKE uses a fast elimination method based on Tarjan intervals, with a complexity linear in the program size in most cases. We have...
Code Motion and Code Placement: Just Synonyms?
, 1997
Abstract

Cited by 12 (3 self)
We prove that there is no difference between code motion (CM) and code placement (CP) in the traditional syntactic setting, but a dramatic difference in the semantic setting. We demonstrate this by reinvestigating semantic CM under the perspective of the recent development of syntactic CM. Besides clarifying and highlighting the analogies and essential differences between the syntactic and the semantic approach, this leads as a side effect to a drastic reduction of the conceptual complexity of the value-flow based procedure for semantic CM of [28], as the original bidirectional analysis is decomposed into purely unidirectional components. On the theoretical side, this establishes a natural semantical understanding in terms of the Herbrand interpretation (transparent equivalence), and thus eases the proof of correctness; moreover, it shows the frontier of semantic CM, and gives reason for the lack of algorithms going beyond. On the practical side, it simplifies the implement...