Results 1–10 of 50
Interprocedural Dataflow Analysis via Graph Reachability
1994

Cited by 369 (33 self)

Abstract: This paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time. The only restrictions are that the set of dataflow facts is a finite set, and that the dataflow functions distribute over the confluence operator (either union or intersection). This class of problems includes, but is not limited to, the classical separable problems (also known as "gen/kill" or "bit-vector" problems), e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy-constant propagation, and possibly-uninitialized variables. A novel aspect of our approach is that an interprocedural dataflow-analysis problem is transformed into a special kind of graph-reachability problem (reachability along interprocedurally realizable paths). The paper presents three polynomial-time algorithms for the realizable-path reachability problem: an exhaustive version, a second exhaustive version that may be more appropriate in the incremental and/or interactive context, and a demand version. The first and third of these algorithms are asymptotically faster than the best previously known realizable-path reachability algorithm. An additional benefit of our techniques is that they lead to improved algorithms for two other kinds of interprocedural analysis problems: interprocedural flow-sensitive side-effect problems (as studied by Callahan) and interprocedural program slicing (as studied by Horwitz, Reps, and Binkley).
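The core transformation the abstract describes can be illustrated with a tiny sketch. This is not the paper's algorithm: it shows only the idea of recasting a distributive problem as reachability over an "exploded" graph whose nodes pair a program point with a dataflow fact, with fact 0 standing in for the special empty fact so that gen edges can be modeled; the interprocedural call/return matching (realizable paths) is omitted. The function name and edge encoding are my own.

```python
from collections import deque

def solve_by_reachability(edges, entry):
    """edges: dict mapping (point, fact) -> list of (point', fact').
    Fact 0 is the special 'empty' fact; a gen is an edge from fact 0,
    a kill is the absence of the identity edge for that fact."""
    seen = {(entry, 0)}
    work = deque(seen)
    while work:  # plain BFS: reachability solves the dataflow problem
        node = work.popleft()
        for succ in edges.get(node, []):
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    # facts holding at each point = non-zero facts reached there
    result = {}
    for point, fact in seen:
        if fact != 0:
            result.setdefault(point, set()).add(fact)
    return result
```

For example, with points p1 → p2 → p3, a definition generated between p1 and p2 and killed between p2 and p3 reaches p2 but not p3.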
A Schema for Interprocedural Modification Side-Effect Analysis With Pointer Aliasing
In Proceedings of the SIGPLAN '93 Conference on Programming Language Design and Implementation
2001

Cited by 131 (13 self)

Abstract: The first interprocedural modification side-effects analysis for C (MOD_C) that obtains better than worst-case precision on programs with general-purpose pointer usage is presented with empirical results. The analysis consists of an algorithm schema corresponding to a family of MOD_C algorithms with two independent phases: one for determining pointer-induced aliases and a subsequent one for propagating interprocedural side effects. These MOD_C algorithms are parameterized by the aliasing method used. The empirical results compare the performance of two dissimilar MOD_C algorithms: MOD_C(FSAlias) uses a flow-sensitive, calling-context-sensitive interprocedural alias analysis [LR92]; MOD_C(FIAlias) uses a flow-insensitive, calling-context-insensitive alias analysis which is much faster, but less accurate. These two algorithms were profiled on 45 programs ranging in size from 250 to 30,000 lines of C code, and the results demonstrate dramatically the possible cost-precision tradeoffs. This first comparative implementation of MOD_C analyses offers insight into the differences between flow/context-sensitive and flow/context-insensitive analyses. The analysis cost versus precision tradeoffs in side-effect information obtained is reported. The results show, surprisingly, that the precision of flow-sensitive side-effect analysis is not always prohibitive in cost, and that the precision of flow-insensitive analysis is substantially better than worst-case estimates and seems sufficient for certain applications. On average, MOD_C(FSAlias) for procedures and calls is in the range of 20% more precise than MOD_C(FIAlias); however, the performance was found to be at least an order of magnitude slower than MOD_C(FIAlias).
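The two-phase shape of the schema can be sketched generically: a prior alias phase (either aliasing method) produces an alias map, and a second phase propagates side effects over the call graph to a fixed point, then expands the result through the aliases. This is only an illustration of that structure, not the MOD_C algorithms; all names and data shapes here are my own.

```python
def mod_sets(direct_mod, calls, aliases):
    """Fixed-point propagation of modification side effects.
    direct_mod: proc -> set of variables it assigns directly
    calls:      proc -> set of procedures it calls
    aliases:    var -> set of vars that may alias it (from phase one)
    Every called procedure must appear as a key of direct_mod."""
    mod = {p: set(vs) for p, vs in direct_mod.items()}
    changed = True
    while changed:  # propagate callees' effects into callers
        changed = False
        for p, callees in calls.items():
            for q in callees:
                new = mod[q] - mod[p]
                if new:
                    mod[p] |= new
                    changed = True
    # phase two's result is widened by the aliases of each modified var
    return {p: vs | {a for v in vs for a in aliases.get(v, {v})}
            for p, vs in mod.items()}
```

Swapping in a more precise alias map shrinks the final sets, which is the cost/precision tradeoff the paper measures.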
Program Analysis via Graph Reachability
1997

Cited by 119 (8 self)

Abstract: This paper describes how a number of program-analysis problems can be solved by transforming them to graph-reachability problems. Some of the program-analysis problems that are amenable to this treatment include program slicing, certain dataflow-analysis problems, and the problem of approximating the possible "shapes" that heap-allocated structures in a program can take on. Relationships between graph reachability and other approaches to program analysis are described. Some techniques that go beyond pure graph reachability are also discussed.
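Program slicing, one of the problems mentioned, is the simplest instance of the reachability reduction: a backward slice is just the set of nodes that reach the slicing criterion in a dependence graph. A minimal sketch, assuming the data/control dependence edges have already been computed (names are mine):

```python
def backward_slice(deps, criterion):
    """deps: node -> set of nodes it depends on (data or control).
    Returns the criterion plus everything it transitively depends on."""
    slice_, work = {criterion}, [criterion]
    while work:
        n = work.pop()
        for m in deps.get(n, ()):
            if m not in slice_:
                slice_.add(m)
                work.append(m)
    return slice_
```

Statements with no path to the criterion (here, an unrelated assignment) simply never enter the slice.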
Weighted pushdown systems and their application to interprocedural dataflow analysis
Sci. of Comp. Prog.
2003

Cited by 103 (32 self)

Abstract: Recently, pushdown systems (PDSs) have been extended to weighted PDSs, in which each transition is labeled with a value, and the goal is to determine the meet-over-all-paths value (for paths that meet a certain criterion). This paper shows how weighted PDSs yield new algorithms for certain classes of interprocedural dataflow-analysis problems.
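A meet-over-all-paths computation can be illustrated in the simplest weight domain, the (min, +) semiring, where meet is min and extending a path adds its edge weights; the computation then looks like a shortest-path worklist. This plain-graph sketch deliberately omits what makes weighted PDSs interesting: the restriction to paths realizable under the pushdown (call/return) rules. All names are my own.

```python
import heapq

def mop(edges, source):
    """edges: list of (u, v, weight). Returns the meet-over-all-paths
    value from source to each reachable node, in the (min, +) semiring."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    best = {source: 0}          # 0 is the semiring's neutral weight
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > best.get(u, float("inf")):
            continue            # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w                          # extend along the edge
            if nd < best.get(v, float("inf")):  # meet = min
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return best
```

Replacing (min, +) with another distributive weight domain changes what the "value" means without changing the propagation structure.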
Interprocedural Conditional Branch Elimination
1997

Cited by 67 (15 self)

Abstract: The existence of statically detectable correlation among conditional branches enables their elimination, an optimization that has a number of benefits. This paper presents techniques to determine whether an interprocedural execution path leading to a conditional branch exists along which the branch outcome is known at compile time, and then to eliminate the branch along this path through code restructuring. The technique consists of a demand-driven interprocedural analysis that determines whether a specific branch outcome is correlated with prior statements or branch outcomes. The optimization is performed using a code-restructuring algorithm that replicates code to separate out the paths with correlation. When the correlated path is affected by a procedure call, the restructuring is based on procedure entry splitting and exit splitting. The entry-splitting transformation creates multiple entries to a procedure, and the exit-splitting transformation allows a procedure to return control...
A Practical Framework for Demand-Driven Interprocedural Data Flow Analysis
ACM Transactions on Programming Languages and Systems
1998

Cited by 58 (10 self)

Abstract: In this article, we present a general framework for developing demand-driven interprocedural data flow analyzers and report our experience in evaluating the performance of this approach. A demand for data flow information is modeled as a set of queries. The framework includes a generic demand-driven algorithm that determines the response to a query by iteratively applying a system of query propagation rules. The propagation rules yield precise responses for the class of distributive finite data flow problems. We also describe a two-phase framework variation to accurately handle non-distributive problems. A performance evaluation of our demand-driven approach is presented for two data flow problems, namely, reaching definitions and copy-constant propagation. Our experiments show that demand-driven analysis performs well in practice, reducing both time and space requirements when compared with exhaustive analysis.
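The query-propagation idea can be sketched for a may (union) problem such as reaching definitions: a query "does fact f hold at point p?" is pushed backward through predecessors until some statement generates f (answer: yes) or every path is cut off by a kill (answer: no). This toy is my own illustration of the style, not the framework's actual rule system.

```python
def answer_query(preds, gen, kill, point, fact):
    """preds: point -> list of predecessor points;
    gen/kill: point -> set of facts generated/killed there.
    Backward demand-driven resolution of a single (point, fact) query."""
    work, visited = [(point, fact)], set()
    while work:
        p, f = work.pop()
        if (p, f) in visited:
            continue
        visited.add((p, f))
        for q in preds.get(p, ()):
            if f in gen.get(q, ()):
                return True              # some path generates the fact
            if f not in kill.get(q, ()):
                work.append((q, f))      # propagate the query backward
    return False
```

Only the program points relevant to the query are ever visited, which is the source of the time and space savings the abstract reports.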
Demand-Driven Pointer Analysis
2001

Cited by 57 (0 self)

Abstract: Known algorithms for pointer analysis are "global" in the sense that they perform an exhaustive analysis of a program or program component. In this paper we introduce a demand-driven approach for pointer analysis. Specifically, we describe a demand-driven, flow-insensitive, subset-based, context-insensitive points-to analysis. Given a list of pointer variables (a query), our analysis performs just enough computation to determine the points-to sets for these query variables. Using deductive reachability formulations of both the exhaustive and the demand-driven analyses, we prove that our algorithm is correct. We also show that our analysis is optimal in the sense that it does not do more work than necessary. We illustrate the feasibility and efficiency of our analysis with an implementation of demand-driven points-to analysis for computing the call graphs of C programs with function pointers. The performance of our system varies substantially across benchmarks; the main factor is how much of the points-to graph must be computed to determine the call graph. For some benchmarks, only a small part of the points-to graph is needed (e.g., povray, emacs, and gcc), and here we see more than a 10x speedup. For other benchmarks (e.g., burlap and gimp), we need to compute most (> 95%) of the points-to graph, and here the demand-driven algorithm is considerably slower, because using the demand-driven algorithm is a slow method of computing the full points-to graph.
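A stripped-down sketch of the demand-driven, subset-based idea: to answer pts(p), follow only the copy constraints p ⊇ q that the query transitively depends on, collecting address-of facts along the way. This handles only address-of (p = &x) and copy (p = q) constraints; the load/store constraints a real C analysis needs, and the paper's deductive-reachability formulation, are omitted. Names are my own.

```python
def points_to(addr_of, copies, query):
    """addr_of: var -> set of objects whose address it takes (p = &x)
    copies:  var -> set of source vars (p = q means pts(p) >= pts(q))
    Visits only the variables the query depends on."""
    pts = set(addr_of.get(query, ()))
    visited = {query}
    work = list(copies.get(query, ()))
    while work:
        v = work.pop()
        if v in visited:
            continue
        visited.add(v)
        pts |= addr_of.get(v, set())     # base facts for this source
        work.extend(copies.get(v, ()))   # chase further copy sources
    return pts
```

As the abstract's benchmark data suggests, this wins exactly when the queried variables touch a small corner of the constraint graph.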
Transparent dynamic optimization: The design and implementation of Dynamo
1999

Cited by 51 (2 self)

Abstract: Dynamic optimization refers to the runtime optimization of a native program binary. This report describes the design and implementation of Dynamo, a prototype dynamic optimizer that is capable of optimizing a native program binary at runtime. Dynamo is a realistic implementation, not a simulation: it is written entirely in user-level software, and runs on a PA-RISC machine under the HP-UX operating system. Dynamo does not depend on any special programming language, compiler, operating system, or hardware support. Contrary to...
Generation of efficient interprocedural analyzers with PAG
In Proceedings of the Second International Symposium on Static Analysis
1995

Cited by 48 (7 self)

Abstract: To produce high-quality code, modern compilers use global optimization algorithms based on abstract interpretation. These algorithms are rather complex; their implementation is therefore a non-trivial task and error-prone. However, since they are based on a common theory, they have large similar parts. We conclude that analyzer writing should better be replaced with analyzer generation. We present the tool PAG, which has a high-level functional input language to specify data flow analyses. It offers the specification of even recursive data structures and is therefore not limited to bit-vector problems. PAG generates efficient analyzers which can be easily integrated into existing compilers. The analyzers are interprocedural; they can handle recursive procedures with local variables and higher-order functions. PAG has been successfully tested by generating several analyzers (e.g., alias analysis, constant propagation) for an industrial-quality ANSI C and Fortran 90 compiler.
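The generation idea rests on the observation that analyzers share a common solver and differ only in their lattice and transfer functions. A toy version of that separation (this is my own sketch of the principle, not PAG's specification language or generated code):

```python
def fixpoint(succs, transfer, join, bottom, entry, init):
    """Generic worklist solver. An analysis is *specified* by:
    transfer(node, in_value) -> out_value, join (the lattice meet/join),
    and the lattice's bottom element; the solver itself is reusable."""
    inval = {n: bottom for n in succs}
    inval[entry] = init
    work = [entry]
    while work:
        n = work.pop()
        out = transfer(n, inval[n])
        for s in succs.get(n, ()):
            merged = join(inval.get(s, bottom), out)
            if merged != inval.get(s, bottom):
                inval[s] = merged        # value changed: re-propagate
                work.append(s)
    return inval
```

Plugging in a set lattice with union gives a reaching-style analysis; plugging in a constant lattice gives constant propagation, with the solver unchanged.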