Results 1–10 of 16
Efficiently computing static single assignment form and the control dependence graph
ACM Transactions on Programming Languages and Systems, 1991
Abstract

Cited by 839 (7 self)
In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
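The dominance frontier of a block b is the set of blocks where b's dominance stops, which is exactly where SSA phi-functions must be placed. As a rough sketch of the concept (following the standard per-join formulation, not necessarily the paper's own algorithm), the snippet below computes dominance frontiers on a hypothetical diamond-shaped CFG, assuming the immediate-dominator map is already known:

```python
# Sketch of dominance frontiers, assuming immediate dominators (idom)
# are precomputed. The diamond CFG below is a made-up example.

def dominance_frontiers(preds, idom):
    """DF[b] = blocks where b's dominance ends (phi placement points)."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:  # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# entry -> a, b;  a -> join;  b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# 'join' lands in the frontier of both 'a' and 'b', so a variable
# assigned in either branch needs a phi-function at 'join'.
```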
Constant propagation with conditional branches
ACM Transactions on Programming Languages and Systems, 1991
Abstract

Cited by 309 (1 self)
Constant propagation is a well-known global flow analysis problem. The goal of constant propagation is to discover values that are constant on all possible executions of a program and to propagate these constant values as far forward through the program as possible. Expressions whose operands are all constants can be evaluated at compile time and the results propagated further. The algorithms presented in this paper can produce smaller and faster compiled programs. The same algorithms can be used for other kinds of analyses (e.g., type determination). We present four algorithms in this paper, all conservative in the sense that not all constants may be found, but each constant found is constant over all possible executions of the program. These algorithms are among the simplest, fastest, and most powerful global constant propagation algorithms known. We also present a new algorithm that performs a form of interprocedural data flow analysis in which aliasing information is gathered in conjunction with constant propagation. Several variants of this algorithm are considered.
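The conservative behavior described above is usually formalized with a flat three-level lattice: undefined (top), a known constant, or not-a-constant (bottom). The sketch below is a hypothetical illustration in that spirit, not any of the paper's four algorithms; it iterates a tiny SSA-like equation system to a fixed point:

```python
# Sketch: the flat constant lattice. Disagreeing values meet to BOT
# (not a constant). The tiny equation system is a made-up example.

TOP, BOT = "TOP", "BOT"   # TOP = undefined, BOT = not a constant

def meet(a, b):
    if a == TOP: return b
    if b == TOP: return a
    return a if a == b else BOT

def propagate(defs):
    """defs: var -> ('const', c) | ('phi', v1, v2) | ('add', v1, v2)."""
    val = {v: TOP for v in defs}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for v, d in defs.items():
            if d[0] == "const":
                new = d[1]
            elif d[0] == "phi":
                new = meet(val[d[1]], val[d[2]])
            else:                       # 'add'
                a, b = val[d[1]], val[d[2]]
                if isinstance(a, int) and isinstance(b, int):
                    new = a + b         # fold at "compile time"
                else:
                    new = TOP if TOP in (a, b) else BOT
            if new != val[v]:
                val[v], changed = new, True
    return val

defs = {"x": ("const", 2), "y": ("const", 3),
        "z": ("add", "x", "y"),         # 2 + 3 folds to 5
        "w": ("phi", "z", "z")}         # both phi inputs agree -> 5
print(propagate(defs))
```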
Nonlinear Array Dependence Analysis
1991
Abstract

Cited by 74 (6 self)
Standard array data dependence techniques can only reason about linear constraints. There has also been work on analyzing some dependences involving polynomial constraints. Analyzing array data dependences in real-world programs requires handling many "unanalyzable" terms: subscript arrays, runtime tests, function calls. The standard approach to analyzing such programs has been to simply omit any constraints that cannot be reasoned about. This is unsound when reasoning about value-based dependences and whether privatization is legal. Also, this prevents us from determining the conditions that must be true to disprove the dependence. These conditions could be checked by a runtime test or verified by a programmer or by aggressive, demand-driven interprocedural analysis. We describe a solution to these problems. Our solution makes our system sound and more accurate for analyzing value-based dependences and derives conditions that can be used to disprove dependences. We also give some p...
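For instance, when the only obstacle is a subscript array unknown at compile time, the derived condition can be as simple as "the subscripts are pairwise distinct," checked at run time. The snippet below is a hypothetical illustration of such a test, not taken from the paper:

```python
# Sketch: a runtime test of the kind such analyses derive. When the
# subscript array `idx` is unanalyzable at compile time, a loop of the
# form  for i: A[idx[i]] = f(i)  carries no output dependence iff
# idx contains no duplicates. Names here are hypothetical.

def independent_writes(idx):
    """True if all writes A[idx[i]] touch distinct locations."""
    return len(set(idx)) == len(idx)

print(independent_writes([3, 1, 4, 2]))  # True  -> safe to parallelize
print(independent_writes([3, 1, 3, 2]))  # False -> run serially
```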
A New Algorithm for Partial Redundancy Elimination based on SSA Form
1997
Abstract

Cited by 67 (3 self)
A new algorithm, SSAPRE, for performing partial redundancy elimination based entirely on SSA form is presented. It achieves optimal code motion similar to lazy code motion [KRS94a, DS93], but is formulated independently and does not involve iterative data flow analysis and bit vectors in its solution. It not only exhibits the characteristics common to other sparse approaches, but also inherits the advantages shared by other SSA-based optimization techniques. SSAPRE also maintains its output in the same SSA form as its input. In describing the algorithm, we state theorems with proofs giving our claims about SSAPRE. We also give additional description of our practical implementation of SSAPRE, and analyze and compare its performance with a bit-vector-based implementation of PRE. We conclude with some discussion of the implications of this work.

1 Introduction

The Static Single Assignment Form (SSA) has become a popular program representation in optimizing compilers, because it provid...
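Independent of the SSA machinery, the effect PRE aims for can be shown on a tiny example: an expression computed on one path and again after the join is redundant on that path, so inserting the computation on the other path makes the later evaluation fully removable. The functions below are a hypothetical before/after illustration, not output of SSAPRE itself:

```python
# Sketch: what PRE does to a partially redundant expression.
# `a * b` is computed on the then-path and again after the join;
# PRE inserts the computation on the else-path and reuses a temp,
# so no path evaluates a * b twice. Hypothetical illustration.

def before(a, b, cond):
    if cond:
        x = a * b          # first computation
    else:
        x = 0
    return x + a * b       # redundant when cond is True

def after(a, b, cond):
    if cond:
        t = a * b          # original computation kept
        x = t
    else:
        t = a * b          # inserted by PRE: now fully available
        x = 0
    return x + t           # reuse; no recomputation on any path

print(before(2, 3, True), after(2, 3, True))   # 12 12
```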
Partial Redundancy Elimination in SSA Form
ACM Transactions on Programming Languages and Systems, 1999
Abstract

Cited by 33 (1 self)
This paper presents a new approach called SSAPRE [Chow et al. 1997] that shares the optimality properties of the best prior work [Knoop et al. 1992; Knoop et al. 1994; Drechsler and Stadel 1993] and that is based on static single assignment form. Static single assignment form (SSA) is a popular program representation in modern optimizing compilers. Its versatility stems from the fact that, in addition to representing the program, it provides accurate use-definition (use-def) relationships among the program variables in a concise form [Cytron et al. 1991; Wolfe 1996; Chow et al. 1996]. Many efficient global optimization algorithms have been developed based on SSA. Among these optimizations are dead store elimination [Cytron et al. 1991], constant propagation [Wegman and Zadeck 1991], value numbering [Alpern et al. 1988; Rosen et al. 1988; Briggs et al. 1997], induction variable analysis [Gerlek et al. 1995; Liu et al. 1996], live range computation [Gerlek et al. 1994], and global code motion [Click 1995]. Until recently, most uses of SSA have been restricted to solving problems based essentially on program variables. SSA could not readily be applied to solving expression-based problems because the concept of use-def for expressions is less obvious than for variables. This difficulty was mentioned by Dhamdhere et al. in the conclusion of [Dhamdhere et al. 1992]. They state, essentially, that there is no clear connection between the use-def information for variables represented by SSA form and the redundancy properties for expressions. By demonstrating such a connection and exploiting it, our work shows that an SSA-based approach to PRE and other expression-based problems is not only plausible, but also enlightening and practical. Although this paper addresses only the PRE ...
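The use-def property mentioned above is what makes SSA so convenient: because every variable has exactly one definition, a use's defining statement can be found by name alone, with no data-flow computation. A minimal sketch, using a hypothetical tuple encoding of SSA statements:

```python
# Sketch: why use-def chains are trivial in SSA. Each variable has
# exactly one definition, so mapping name -> def site answers every
# use-def query. The encoding below is a made-up illustration.

program = [
    ("x1", "const", 1),
    ("x2", "const", 2),
    ("x3", "phi", "x1", "x2"),   # join of the two definitions
    ("y1", "add", "x3", "x1"),
]

defsite = {stmt[0]: n for n, stmt in enumerate(program)}  # var -> index

def uses_of(stmt):
    """Operand variables of a statement (skips integer literals)."""
    return [v for v in stmt[2:] if isinstance(v, str)]

# use-def for y1's definition: each use maps to its unique def site
print({v: defsite[v] for v in uses_of(program[3])})  # {'x3': 2, 'x1': 0}
```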
The Java Memory Model is Fatally Flawed
2000
Abstract

Cited by 26 (0 self)
The Java memory model described in Chapter 17 of the Java Language Specification gives constraints on how threads interact through memory. This chapter is hard to interpret and poorly understood; it imposes constraints that prohibit common compiler optimizations and are expensive to implement on existing hardware. Most JVMs violate the constraints of the existing Java memory model; conforming to the existing specification would impose significant performance penalties. In addition, programming idioms used by some programmers and within Sun's Java Development Kit are not guaranteed to be valid according to the existing Java memory model. Furthermore, implementing Java on a shared memory multiprocessor that implements a weak memory model poses some implementation challenges not previously considered.

1 Introduction

The Java memory model, as described in Chapter 17 of the Java Language Specification [GJS96], is very hard to understand. Research papers that analyze the Java memory model...
Reducing the Cost of Data Flow Analysis By Congruence Partitioning
In International Conference on Compiler Construction, 1994
Abstract

Cited by 20 (1 self)
Data flow analysis expresses the solution of an information gathering problem as the fixed point of a system of monotone equations. This paper presents a technique to improve the performance of data flow analysis by systematically reducing the size of the equation system in any monotone data flow problem. Reductions result from partitioning the equations in the system according to congruence relations. We present a fast O(n log n) partitioning algorithm, where n is the size of the program, that exploits known algebraic properties in equation systems. From the resulting partition a reduced equation system is constructed that is minimized with respect to the computed congruence relation while still providing the data flow solution at all program points.

1 Introduction

Along with the growing importance of static data flow analysis in current optimizing and parallelizing compilers comes an increased concern about the high time and space requirements of solving data flow problems. Experi...
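The partitioning idea can be sketched as split-until-stable refinement: equations start grouped by operator, and groups split whenever two members' operands fall into different groups. The code below is a simplified quadratic-style illustration of the idea, not the paper's O(n log n) algorithm, and the equation system is a made-up example:

```python
# Sketch: congruence partitioning of a data-flow equation system.
# eqs maps each variable to its defining equation (op, operands...).
# Congruent variables have identical solutions, so a reduced system
# need solve only one representative per class.

def blocks(cls):
    """Group variables by class label into a set of blocks."""
    inv = {}
    for v, c in cls.items():
        inv.setdefault(c, set()).add(v)
    return {frozenset(s) for s in inv.values()}

def congruence_classes(eqs):
    cls = {v: eqs[v][0] for v in eqs}          # initial split by operator
    while True:                                # refine to a fixed point
        sig = {v: (eqs[v][0],) + tuple(cls[u] for u in eqs[v][1:])
               for v in eqs}
        if blocks(sig) == blocks(cls):         # partition stable: done
            return cls
        cls = sig

eqs = {"a": ("in",), "b": ("in",),
       "c": ("meet", "a", "b"), "d": ("meet", "b", "a")}
print(blocks(congruence_classes(eqs)))  # {a,b} and {c,d} merge
```

Here c and d are congruent because their operands are congruent, so the reduced system solves one `meet` equation instead of two.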
GURRR: A Global Unified Resource Requirements Representation
In ACM SIGPLAN Workshop on Intermediate Representations, SIGPLAN Notices, 1995
Abstract

Cited by 9 (3 self)
When compiling for instruction level parallelism (ILP), the integration of the optimization phases can lead to an improvement in the quality of the code generated. However, since several different representations of a program are used in the various phases, only a partial integration has been achieved to date. We present a program representation that combines resource requirements and availability information with control and data dependence information. The representation enables the integration of several optimizing phases, including transformations, register allocation, and instruction scheduling. The basis of this integration is the simultaneous allocation of different types of resources. We define the representation and show how it is constructed. We then formulate several optimization phases to use the representation to achieve better integration.

1 Introduction

Recent research has proposed several methods to integrate back-end phases of instruction level parallelism (ILP) compilers...
Array Restructuring for Cache Locality
1996
Abstract

Cited by 8 (0 self)
Caches are used in almost every modern processor design to reduce the long memory access latency, which is increasingly a bottleneck to program performance. For caches to be effective, programs must exhibit good data locality. Thus, an optimizing compiler may have to restructure programs to enhance their locality. We focus on the class of restructuring techniques that target array accesses in loops. There are two approaches to enhancing the locality of such accesses: loop restructuring and array restructuring. Under loop restructuring, a compiler adopts a canonical array layout but transforms the order in which loop iterations are performed and thereby reorders the execution of array accesses. Under array restructuring, in contrast, a compiler lays out array elements in an order that matches the access pattern, whi...
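The two approaches can be contrasted on a column-wise traversal of a row-major array: loop restructuring interchanges the loops so the iterations follow the layout, while array restructuring transposes the layout so it follows the iterations. A small hypothetical illustration (Python nested lists standing in for a row-major array):

```python
# Sketch: loop restructuring vs. array restructuring for a
# column-wise access pattern over a row-major 2-D array.
# Hypothetical example; real compilers do this on loop nests.

N = 4
a = [[i * N + j for j in range(N)] for i in range(N)]

# Original: walks columns first -- strided in row-major storage.
col_first = sum(a[i][j] for j in range(N) for i in range(N))

# Loop restructuring (interchange): same result, row-wise walk.
interchanged = sum(a[i][j] for i in range(N) for j in range(N))

# Array restructuring: transpose the layout once; the column-wise
# pattern then becomes a contiguous row-wise walk of `at`.
at = [[a[i][j] for i in range(N)] for j in range(N)]
restructured = sum(at[j][i] for j in range(N) for i in range(N))

assert col_first == interchanged == restructured
```

As the abstract notes, the trade-off is that loop restructuring changes iteration order (which other dependences may forbid), whereas array restructuring changes only the data layout.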
Bytecode-Level Analysis and Optimization of Java Classes
1998
Abstract

Cited by 8 (0 self)
1 INTRODUCTION
  1.1 Optimization framework
  1.2 Measurements
  1.3 Overview
2 BACKGROUND
  2.1 Control flow graphs
    2.1.1 Dominators
    2.1.2 Loops
  2.2 Static single assignment form
    2.2.1 Construction
    2.2.2 Destruction
  2.3 Partial redundancy elimination
    2.3.1 SSAPRE
  2.4 Other optimizations
  2.5 Type-based alias analysis
    2.5.1 Terminology and notation
    2.5.2 TBAA ...