Results 1 - 6 of 6
Efficiently computing static single assignment form and the control dependence graph
ACM Transactions on Programming Languages and Systems, 1991
Abstract

Cited by 1000 (8 self)
In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimization. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
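The dominance-frontier idea the abstract names can be sketched compactly. The sketch below is not the paper's algorithm; it uses the later, simpler iterative formulation (walking up the dominator tree from each predecessor of a join point), and the `idom` and `preds` mappings are illustrative names assumed to be precomputed.

```python
def dominance_frontiers(nodes, preds, idom):
    """Compute DF(n) for every node, given predecessor lists and
    immediate dominators. A node lands in the dominance frontier of
    every node on the path from each of its predecessors up to (but
    not including) its own immediate dominator."""
    df = {n: set() for n in nodes}
    for n in nodes:
        if len(preds[n]) >= 2:          # only join points contribute
            for p in preds[n]:
                runner = p
                # climb the dominator tree until reaching idom(n)
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df
```

In SSA construction, phi-functions for a variable are then placed at the (iterated) dominance frontier of every block that assigns to it.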
Parallelizing Compilers: Implementation and Effectiveness
1993
Abstract

Cited by 9 (0 self)
An important thank you goes to one of my undergraduate professors, Ken Kennedy. He proposed the project that led to this thesis, and my desire to know the answer gave me the strength to complete this work. I would like to thank the languages group at Kubota Pacific Computers, Inc. for showing me that I could indeed be productive and that all problems in compilers did not take years to solve. My sanity is thanks to all of my friends from dancing, "O" runs, and everything else. They made it possible to return to work each day and eventually to graduate. I owe my parents a great debt for encouraging me to stay in graduate school even when I thought I would never finish. Last, but certainly not least, I would like to thank Don Ramsey for reading many drafts and listening to many dry runs. His input greatly helped the presentation of this thesis in both oral and written forms.
A Uniform Internal Representation for High-Level and Instruction-Level Transformations
Eduard Ayguadé, Cristina Barrado, Jesús Labarta, David López, Susana Moreno, David Padua, and Mateo Valero
1995
Abstract

Cited by 1 (0 self)
In this paper we describe a strategy that will make it possible, after applying a small number of changes, to represent low-level operations as part of the internal representation of a conventional source-to-source Fortran translator. Briefly, our strategy is to represent the low-level operations as Fortran statements. In this way, all the transformation and analysis routines available in the source-to-source restructurer can be applied to the low-level representation of the program. The source-to-source parallelizer could then be extended to include many traditional analysis and transformation steps, such as strength reduction and register allocation, not usually performed by this translator. The generation of machine instructions is done as a last step by a direct mapping from each Fortran statement onto one or more machine instructions. The source-to-source restructurer is therefore extended into a complete compiler as shown in Figure 1. All transformations, including high-level parallelization and the traditional scalar optimizations, can now be performed in a unified framework based on a single internal representation. One additional advantage of representing the low-level operations as Fortran statements is that the outcome of each transformation, both high and low level, can be trivially transformed into a Fortran program that could be executed to test the correctness of the transformation. Another approach that also uses a uniform representation for both high-level parallelization and scalar optimizations was the one followed in the IBM Fortran compiler [ScKo86]. The main difference with our approach is that this compiler evolved from a traditional back-end compiler which was extended to do some of the high-level transformations usually performed in other systems by...
A Technique to Evaluate Benchmarks: A Case Study Using the Livermore Loops
Abstract
This paper is devoted to an analysis of the data from the Livermore kernels benchmark. We will show that in the sense of least squares prediction the dimension of these data is rather small; a reduction of the data to dimension four has about the same predictive power as the original data. Two techniques are used that reduce the 72 kernel timings for each machine to a few scores by which the machine is characterized. The first is based on a principal component analysis, the second on a cluster analysis of the kernels. The validity of the reduction to lower dimension is checked by various means. The possible use of the Livermore data to predict the running time of larger codes is demonstrated.
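The principal-component reduction described in the abstract can be illustrated with a minimal sketch. This is not the paper's code: the function name `principal_scores` is illustrative, the data would be a machines-by-kernels matrix of timings, and `numpy` is assumed to be available.

```python
import numpy as np

def principal_scores(timings, k=4):
    """Reduce each machine's row of kernel timings to k principal-
    component scores. `timings` is a (machines x kernels) array."""
    centered = timings - timings.mean(axis=0)   # center each kernel column
    # SVD of the centered matrix gives the principal axes as rows of vt
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T                  # project onto the top-k axes
```

If, as the paper argues, the effective dimension of the timing data is about four, the k = 4 scores characterize each machine nearly as well as all 72 kernel timings for least-squares prediction.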