Results 1–10 of 21
Efficiently computing static single assignment form and the control dependence graph
ACM Transactions on Programming Languages and Systems, 1991
"... In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single ass ..."
Abstract

Cited by 839 (7 self)
 Add to MetaCart
In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimization. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
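
As a concrete illustration of the dominance-frontier concept this paper introduces, here is a minimal Python sketch that computes dominance frontiers from precomputed immediate dominators. It uses the later, simpler formulation popularized by Cooper, Harvey, and Kennedy rather than the paper's own bottom-up algorithm, and the example CFG and node names are hypothetical.

    def dominance_frontiers(preds, idom):
        """preds: node -> list of predecessors; idom: node -> immediate dominator."""
        df = {n: set() for n in preds}
        for b in preds:
            if len(preds[b]) >= 2:            # only join points appear in frontiers
                for p in preds[b]:
                    runner = p
                    while runner != idom[b]:  # walk up the dominator tree
                        df[runner].add(b)
                        runner = idom[runner]
        return df

    # Diamond CFG: entry -> a, entry -> b, a -> join, b -> join
    preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
    idom = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
    print(dominance_frontiers(preds, idom))  # 'a' and 'b' have {'join'} in their frontiers

In SSA construction, a phi-function for a variable is placed at (the iterated closure of) the dominance frontiers of the blocks assigning that variable, which is what makes this set worth computing efficiently.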
Dynamic program slicing
Information Processing Letters, 29 (Oct 1988)
"... A dynamic program slice is an executable subset of the original program that produces the same computations on a subset of selected variables and inputs. It differs from the static slice (Weiser, 1982, 1984) in that it is entirely defined on the basis of a computation. The two main advantages are th ..."
Abstract

Cited by 243 (2 self)
 Add to MetaCart
A dynamic program slice is an executable subset of the original program that produces the same computations on a subset of selected variables and inputs. It differs from the static slice (Weiser, 1982, 1984) in that it is entirely defined on the basis of a computation. The two main advantages are the following: arrays and dynamic data structures can be handled more precisely, and the size of the slice can be significantly reduced, leading to a finer localization of the fault. The approach is being investigated as a possible extension of the debugging capabilities of STAD, a recently developed System for Testing and Debugging (Korel and Laski, 1987; Laski, 1987).
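
The core of the idea can be shown in a few lines: a backward pass over an execution trace, keeping only the steps whose definitions are actually demanded. The trace format and example program below are hypothetical, and a real dynamic slicer would also follow control dependences, which this sketch omits.

    def dynamic_slice(trace, criterion_vars):
        """trace: list of (stmt_id, defs, uses) in execution order."""
        needed = set(criterion_vars)
        sliced = set()
        for stmt, defs, uses in reversed(trace):
            if needed & set(defs):       # this step produced a value we need
                sliced.add(stmt)
                needed -= set(defs)      # its outputs are now explained ...
                needed |= set(uses)      # ... but its inputs must be explained
        return sliced

    # Trace of: s1: x = input(); s2: y = 0; s3: y = x + 1; s4: z = y * 2
    trace = [("s1", ["x"], []), ("s2", ["y"], []),
             ("s3", ["y"], ["x"]), ("s4", ["z"], ["y"])]
    print(sorted(dynamic_slice(trace, ["z"])))  # ['s1', 's3', 's4']

Note how s2 drops out: its assignment to y is overwritten on this particular execution, which is exactly the extra precision a dynamic slice has over a static one.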
Efficient Flow-Sensitive Interprocedural Computation of Pointer-Induced Aliases and Side Effects
1993
"... We present practical approximation methods for computing interprocedural aliases and side effects for a program written in a language that includes pointers, reference parameters and recursion. We present the following results: 1) An algorithm for flowsensitive interprocedural alias analysis which ..."
Abstract

Cited by 220 (11 self)
 Add to MetaCart
We present practical approximation methods for computing interprocedural aliases and side effects for a program written in a language that includes pointers, reference parameters and recursion. We present the following results: 1) An algorithm for flow-sensitive interprocedural alias analysis which is more precise and efficient than the best interprocedural method known. 2) An extension of traditional flow-insensitive alias analysis which accommodates pointers and provides a framework for a family of algorithms which trade off precision for efficiency. 3) An algorithm which correctly computes side effects in the presence of pointers. Pointers cannot be correctly handled by conventional methods for side effect analysis. 4) An alias naming technique which handles dynamically allocated objects and guarantees the correctness of data-flow analysis. 5) A compact representation based on transitive reduction which does not result in a loss of precision and improves precision in some cases. 6)...
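
To make the flow-sensitivity trade-off concrete, here is a deliberately tiny Python sketch of points-to analysis over straight-line pointer statements. The three statement forms and their encoding are hypothetical and ignore the paper's interprocedural setting; the sketch only shows why keeping one points-to state per program point (with strong updates) is more precise than a single flow-insensitive summary.

    def analyze(stmts):
        pts = {}                                  # var -> set of possible targets
        for op, lhs, rhs in stmts:
            if op == "addr":                      # lhs = &rhs
                pts[lhs] = {rhs}                  # strong update kills old targets
            elif op == "copy":                    # lhs = rhs
                pts[lhs] = set(pts.get(rhs, set()))
            elif op == "load":                    # lhs = *rhs
                pts[lhs] = set().union(*(pts.get(t, set())
                                         for t in pts.get(rhs, set())))
        return pts

    prog = [("addr", "p", "x"), ("addr", "p", "y")]  # p = &x; p = &y
    print(analyze(prog)["p"])  # {'y'}; a flow-insensitive analysis reports {'x', 'y'}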
Polymorphic Type, Region and Effect Inference
1991
"... We present a new static system that reconstructs the types, regions and effects of expressions in an implicitly typed functional language that supports imperative operations on reference values. Just as types structurally abstract collections of concrete values, regions represent sets of possibly a ..."
Abstract

Cited by 121 (6 self)
 Add to MetaCart
We present a new static system that reconstructs the types, regions and effects of expressions in an implicitly typed functional language that supports imperative operations on reference values. Just as types structurally abstract collections of concrete values, regions represent sets of possibly aliased reference values and effects represent approximations of the imperative behavior on regions. We introduce a static semantics for inferring types, regions and effects and prove that it is consistent with respect to the dynamic semantics of the language. We present a reconstruction algorithm that computes the types and effects of expressions and assigns regions to reference values. We prove the correctness of the reconstruction algorithm with respect to the static semantics. Finally, we discuss potential applications of our system to automatic stack allocation and parallel code generation.
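
The shape of such a system is easiest to see from its judgments. The following LaTeX rendering is schematic and uses notation of my own choosing in the style of Talpin and Jouvelot, not the paper's exact rules: an expression is given a type together with a latent effect, and operations on a reference in region \rho contribute effects naming \rho.

    \Gamma \vdash e : \tau,\ \sigma
      % e has type \tau and may perform the effects in \sigma

    \frac{\Gamma \vdash e : \tau,\ \sigma}
         {\Gamma \vdash \mathsf{ref}\ e \;:\; \mathsf{ref}_\rho(\tau),\ \sigma \cup \{\mathit{init}(\rho)\}}
    \qquad
    \frac{\Gamma \vdash e : \mathsf{ref}_\rho(\tau),\ \sigma}
         {\Gamma \vdash\ !e \;:\; \tau,\ \sigma \cup \{\mathit{read}(\rho)\}}

If inference shows that a region's effects cannot escape an expression's scope, references in that region become candidates for stack allocation, which is one of the applications the abstract mentions.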
Interprocedural Pointer Alias Analysis
ACM Transactions on Programming Languages and Systems, 1999
"... this article, we describe approximation methods for computing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the following contributions: ..."
Abstract

Cited by 99 (8 self)
 Add to MetaCart
In this article, we describe approximation methods for computing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the following contributions: ...
The Interprocedural Coincidence Theorem
In Int. Conf. on Compiler Construction, 1992
"... We present an interprocedural generalization of the wellknown (intraprocedural) Coincidence Theorem of Kam and Ullman, which provides a sufficient condition for the equivalence of the meet over all paths (MOP ) solution and the maximal fixed point (MFP ) solution to a data flow analysis problem. Th ..."
Abstract

Cited by 87 (11 self)
 Add to MetaCart
We present an interprocedural generalization of the well-known (intraprocedural) Coincidence Theorem of Kam and Ullman, which provides a sufficient condition for the equivalence of the meet over all paths (MOP) solution and the maximal fixed point (MFP) solution to a data flow analysis problem. This generalization covers arbitrary imperative programs with recursive procedures, global and local variables, and formal value parameters. In the absence of procedures, it reduces to the classical intraprocedural version. In particular, our stack-based approach generalizes the coincidence theorems of Barth and Sharir/Pnueli for the same setup, which do not properly deal with local variables of recursive procedures.

1 Motivation

Data flow analysis is a classical method for the static analysis of programs that supports the generation of efficient object code by "optimizing" compilers (cf. [He, MJ]). For imperative languages, it provides information about the program states that may occur at s...
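
For reference, the intraprocedural statement being generalized can be written compactly in LaTeX (notation mine, following the usual textbook presentation):

    \mathrm{MOP}(n) \;=\; \bigsqcap_{p \,\in\, \mathrm{Paths}(s,\,n)} f_p(c_0)
    \qquad
    \mathrm{MFP}(n) \;=\; \bigsqcap_{m \,\in\, \mathrm{pred}(n)} f_m(\mathrm{MFP}(m)),
    \quad \mathrm{MFP}(s) = c_0

    % Kam and Ullman: MFP(n) \sqsubseteq MOP(n) for monotone transfer
    % functions; the two coincide when every f_m distributes over the meet:
    f_m(x \sqcap y) \;=\; f_m(x) \sqcap f_m(y)

The interprocedural generalization restricts the paths in MOP to interprocedurally valid ones, where call and return edges match up, which is where a stack-based treatment of local variables becomes necessary.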
Interprocedural Data Flow Analysis In The Presence Of Pointers, Procedure Variables, And Label Variables
1980
"... Acknowledgements ................................................. 3 0. Contents ......................................................... 4 1. ..."
Abstract

Cited by 69 (0 self)
 Add to MetaCart
Acknowledgements ................................................. 3 0. Contents ......................................................... 4 1.
Managing Interprocedural Optimization
1991
"... This dissertation addresses a number of important issues related to interprocedural optimization. Interprocedural optimization is an integral component in a compilation system for highperformance computing. The importance of interprocedural optimization stems from two sources: it increases the cont ..."
Abstract

Cited by 63 (9 self)
 Add to MetaCart
This dissertation addresses a number of important issues related to interprocedural optimization. Interprocedural optimization is an integral component of a compilation system for high-performance computing. Its importance stems from two sources: it increases the context available to the optimizing compiler, and it enables programmers to use procedure calls without worrying about hurting execution time. While important, interprocedural optimization can introduce significant compile-time costs. When interprocedural information is used to optimize a procedure, the procedure becomes dependent on those interprocedural facts. Thus, even if the procedure is not edited, it may require recompilation due to changes in the interprocedural facts. In addition to these effects on recompilation, interprocedural information can also be expensive to compute. Furthermore, interprocedural optimizations can increase program size, which can in turn increase compile tim...
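
The recompilation issue in the middle of this abstract is easy to state as code. Below is a minimal, hypothetical sketch of a compilation manager that remembers which interprocedural facts each procedure's optimization consumed; a procedure becomes stale when any such fact changes, even though its own source was never edited. All names and the fact encoding are invented for illustration.

    class RecompilationManager:
        def __init__(self):
            self.used_facts = {}          # procedure -> facts seen at compile time

        def record_compile(self, proc, facts):
            self.used_facts[proc] = dict(facts)

        def stale(self, current_facts):
            """Procedures whose recorded interprocedural facts have changed."""
            return {p for p, used in self.used_facts.items()
                    if any(current_facts.get(k) != v for k, v in used.items())}

    mgr = RecompilationManager()
    mgr.record_compile("caller", {"callee.modifies_global_g": False})
    # 'callee' is edited and now writes g; 'caller' is untouched yet stale:
    print(mgr.stale({"callee.modifies_global_g": True}))  # {'caller'}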
Compiling for the Multiscalar Architecture
1998
"... Highperformance, generalpurpose microprocessors serve as compute engines for computers ranging from personal computers to supercomputers. Sequential programs constitute a major portion of realworld software that run on the computers. Stateoftheart microprocessors exploit instruction level para ..."
Abstract

Cited by 47 (2 self)
 Add to MetaCart
High-performance, general-purpose microprocessors serve as compute engines for computers ranging from personal computers to supercomputers. Sequential programs constitute a major portion of real-world software that runs on these computers. State-of-the-art microprocessors exploit instruction-level parallelism (ILP) to achieve high performance on such applications by searching for independent instructions in a dynamic window of instructions and executing them on a wide-issue pipeline. Increasing the window size and the issue width to extract more ILP may hinder achieving high clock speeds, limiting overall performance. The Multiscalar architecture employs multiple small windows and many narrow-issue processing units to exploit ILP at high clock speeds. Sequential programs are partitioned into code fragments called tasks, which are speculatively executed in parallel. Inter-task register dependences are honored via communication and synchronization, and inter-task control flow and memory depe...