Results 1-10 of 12
Efficient Context-Sensitive Pointer Analysis for C Programs
, 1995
Abstract

Cited by 435 (7 self)
This paper proposes an efficient technique for context-sensitive pointer analysis that is applicable to real C programs. For efficiency, we summarize the effects of procedures using partial transfer functions. A partial transfer function (PTF) describes the behavior of a procedure assuming that certain alias relationships hold when it is called. We can reuse a PTF in many calling contexts as long as the aliases among the inputs to the procedure are the same. Our empirical results demonstrate that this technique is successful: a single PTF per procedure is usually sufficient to obtain completely context-sensitive results. Because many C programs use features such as type casts and pointer arithmetic to circumvent the high-level type system, our algorithm is based on a low-level representation of memory locations that safely handles all the features of C. We have implemented our algorithm in the SUIF compiler system and we show that it runs efficiently for a set of C benchmarks.
A Schema for Interprocedural Modification Side-Effect Analysis With Pointer Aliasing
, 2001
Abstract

Cited by 142 (12 self)
The first interprocedural modification side-effect analysis for C (MODC) that obtains better than worst-case precision on programs with general-purpose pointer usage is presented with empirical results. The analysis consists of an algorithm schema corresponding to a family of MODC algorithms with two independent phases: one for determining pointer-induced aliases and a subsequent one for propagating interprocedural side effects. These MODC algorithms are parameterized by the aliasing method used. The empirical results compare the performance of two dissimilar MODC algorithms: MODC(FSAlias) uses a flow-sensitive, calling-context-sensitive interprocedural alias analysis; MODC(FIAlias) uses a flow-insensitive, calling-context-insensitive alias analysis which is much faster, but less accurate. These two algorithms were profiled on 45 programs ranging in size from 250 to 30,000 lines of C code, and the results demonstrate dramatically the possible cost-precision tradeoffs. This first comparative implementation of MODC analyses offers insight into the differences between flow/context-sensitive and flow/context-insensitive analyses. The analysis cost versus precision tradeoffs in the side-effect information obtained are reported. The results show, surprisingly, that the precision of flow-sensitive side-effect analysis is not always prohibitive in cost, and that the precision of flow-insensitive analysis is substantially better than worst-case estimates.
Managing Interprocedural Optimization
, 1991
Abstract

Cited by 68 (9 self)
This dissertation addresses a number of important issues related to interprocedural optimization. Interprocedural optimization is an integral component in a compilation system for high-performance computing. The importance of interprocedural optimization stems from two sources: it increases the context available to the optimizing compiler, and it enables programmers to use procedure calls without the concern of hurting execution time. While important, interprocedural optimization can introduce some significant compile-time costs. When interprocedural information is used to optimize a procedure, the procedure is then dependent on those interprocedural facts. Thus, even if the procedure is not edited, it may require recompilation due to changes in the interprocedural facts. In addition to these effects on recompilation, interprocedural information can also be expensive to compute. Furthermore, interprocedural optimizations can increase program size, which can in turn increase compile time.
Efficient Call Graph Analysis
 ACM LETTERS ON PROGRAMMING LANGUAGES AND SYSTEMS
, 1992
Abstract

Cited by 50 (3 self)
We present an efficient algorithm for computing the procedure call graph, the program representation underlying most interprocedural optimization techniques. The algorithm computes the possible bindings of procedure variables in languages where such variables only receive their values through parameter passing, such as Fortran. We extend the algorithm to accommodate a limited form of assignments to procedure variables. The resulting algorithm can also be used in analysis of functional programs that have been converted to Continuation Passing Style. We discuss the algorithm in relationship to other call graph analysis approaches. Many less efficient techniques produce essentially the same call graph. A few algorithms are more precise, but they may be prohibitively expensive depending on language features.
Interprocedural optimization: eliminating unnecessary recompilation
 ACM Transactions on Programming Languages and Systems
, 1993
Abstract

Cited by 40 (4 self)
While efficient new algorithms for interprocedural data-flow analysis have made these techniques practical for use in production compilation systems, a new problem has arisen: collecting and using interprocedural information in a compiler introduces subtle dependences among the procedures of a program. If the compiler depends on interprocedural information to optimize a given module, a subsequent editing change to another module in the program may change the interprocedural information and necessitate recompilation. To avoid having to recompile every module in a program in response to a single editing change to one module, we have developed techniques to more precisely determine which compilations have actually been invalidated by a change to the program's source. This paper presents a general recompilation test to determine which procedures must be compiled in response to a series of editing changes. Three different implementation strategies, which demonstrate the fundamental tradeoff between the cost of analysis and the precision of the resulting test, are also discussed.
Interprocedural load elimination for dynamic optimization of parallel programs
 In International Conference on Parallel Architectures and Compilation Techniques (PACT'09)
, 2009
Abstract

Cited by 12 (5 self)
Load elimination is a classical compiler transformation that is increasing in importance for multicore and many-core architectures. The effect of the transformation is to replace a memory access, such as a read of an object field or an array element, by a read of a compiler-generated temporary that can be allocated in faster and more energy-efficient storage structures such as registers and local memories (scratchpads). Unfortunately, current just-in-time and dynamic compilers perform load elimination only in limited situations. In particular, they usually make worst-case assumptions about potential side effects arising from parallel constructs and method calls. These two constraints interact with each other, since parallel constructs are usually translated to low-level runtime library calls. In this paper, we introduce an interprocedural load elimination algorithm suitable for use in dynamic optimization of parallel programs. The main contributions of the paper include: a) an algorithm for load elimination in the presence of three core parallel constructs (async, finish, and isolated), b) efficient side-effect analysis for method calls, c) extended side-effect analysis for parallel constructs using an Isolation Consistency memory model, and d) performance results to study the impact of load elimination on a set of standard benchmarks, using an implementation of the algorithm in Jikes RVM for optimizing programs written in a subset of the X10 v1.5 language. Our performance results show decreases in dynamic counts for getfield operations of up to 99.99%, and performance improvements of up to 1.76× on 1 core and 1.39× on 16 cores, when comparing the algorithm in this paper with the load elimination algorithm available in Jikes RVM. Keywords: load elimination; scalar replacement; parallel programs; dynamic compilation; dynamic optimization; memory models.
An Empirical Study of Function Pointers Using SPEC Benchmarks
, 1999
Abstract

Cited by 2 (0 self)
Since the C language imposes few restrictions on the use of function pointers, the task of call graph construction for a C program is far more difficult than what the algorithms designed for Fortran can handle. From the experience of implementing a call graph extractor in the IMPACT compiler, we found that the call graph construction problem has evolved into an interprocedural pointer analysis problem. A complete and precise call graph can be constructed from fully resolved function pointers. In this paper, we report an empirical study of function pointers in the complete SPECint92 and SPECint95 benchmarks. We evaluate the resolution of function pointers and the potential program transformations enabled by a complete call graph. We also examine several real examples of function pointer manipulation found in these benchmarks. They can be considered critical issues in the design of a complete interprocedural pointer analysis algorithm.
D. B. Lomet, Data Flow Analysis in the Presence of Procedure Calls
Abstract
Abstract: The aliasing that results in a variable being known by more than one name has greatly complicated efforts to derive data flow information. The approach we take involves the use of a series of claims that, after we compute the data flow for some of the aliasing possibilities, allows us to produce good approximations for the remaining cases. The method can thus limit the potential combinatorial explosion of aliasing computations while providing results that are frequently exact and almost always very good. The method is illustrated in the context of data flow analysis involving multiple procedures and their calling interactions. It is applicable also in the treatment of recursive procedures.