Interprocedural dataflow analysis via graph reachability
, 1994
Cited by 454 (34 self)
The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes—but is not limited to—the classical separable problems (also known as “gen/kill” or “bit-vector” problems)—e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many nonseparable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
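The reduction can be pictured with a toy example. Below is a minimal sketch — not the paper's IFDS algorithm, and all names and the graph encoding are invented for illustration — of solving a distributive dataflow problem as reachability in an "exploded" graph whose nodes are (program point, fact) pairs, with fact 0 playing the role of the special Lambda fact:

```python
from collections import deque

def solve_by_reachability(edges, start):
    """edges maps (point, fact) -> list of successor (point, fact) nodes."""
    reached = set()
    work = deque([(start, 0)])          # the Lambda fact holds at entry
    while work:
        node = work.popleft()
        if node in reached:
            continue
        reached.add(node)
        for succ in edges.get(node, []):
            work.append(succ)
    # fact d holds at point p iff (p, d) is reachable from (start, 0)
    return {(p, d) for (p, d) in reached if d != 0}

# Toy straight-line program, encoded as exploded-graph edges:
# between points 0 and 1 a definition of x is generated (fact 1);
# between points 1 and 2 a definition of y is generated (fact 2).
edges = {
    (0, 0): [(1, 0), (1, 1)],   # Lambda propagates; fact 1 is gen'd
    (1, 0): [(2, 0), (2, 2)],   # Lambda propagates; fact 2 is gen'd
    (1, 1): [(2, 1)],           # fact 1 survives (identity edge)
}
print(sorted(solve_by_reachability(edges, 0)))   # [(1, 1), (2, 1), (2, 2)]
```

A single breadth-first search thus answers the whole reaching-definitions problem for the toy program, which is the polynomial-time bound the abstract refers to.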
Program Analysis via Graph Reachability
, 1997
Cited by 157 (7 self)
This paper describes how a number of program-analysis problems can be solved by transforming them to graph-reachability problems. Some of the program-analysis problems that are amenable to this treatment include program slicing, certain dataflow-analysis problems, and the problem of approximating the possible "shapes" that heap-allocated structures in a program can take on. Relationships between graph reachability and other approaches to program analysis are described. Some techniques that go beyond pure graph reachability are also discussed.
Weighted pushdown systems and their application to interprocedural dataflow analysis
Sci. of Comp. Prog.
, 2003
Cited by 140 (31 self)
Recently, pushdown systems (PDSs) have been extended to weighted PDSs, in which each transition is labeled with a value, and the goal is to determine the meet-over-all-paths value (for paths that meet a certain criterion). This paper shows how weighted PDSs yield new algorithms for certain classes of interprocedural dataflow-analysis problems.
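The "weighted" idea can be shown in miniature — a hedged illustration only, since a real weighted PDS computes this over pushdown configurations rather than explicit path lists. Weights come from a semiring: "extend" composes weights along a path and "combine" takes the meet across paths. Here extend is + and combine is min, i.e. a shortest-path semiring:

```python
def meet_over_all_paths(paths, weight):
    # extend (+) along each path, then combine (min) across paths
    return min(sum(weight[e] for e in path) for path in paths)

# Two paths from a to d in a toy graph, with per-edge weights.
weight = {("a", "b"): 2, ("b", "d"): 5, ("a", "c"): 3, ("c", "d"): 1}
paths = [[("a", "b"), ("b", "d")], [("a", "c"), ("c", "d")]]
print(meet_over_all_paths(paths, weight))   # min(2 + 5, 3 + 1) = 4
```

Swapping in a different semiring (e.g. sets of dataflow facts with extend = function composition and combine = set union) turns the same skeleton into a dataflow analysis, which is the generalization the abstract describes.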
A Schema for Interprocedural Modification Side-Effect Analysis With Pointer Aliasing
, 2001
Cited by 139 (12 self)
The first interprocedural modification side-effect analysis for C (MODC) that obtains better than worst-case precision on programs with general-purpose pointer usage is presented with empirical results. The analysis consists of an algorithm schema corresponding to a family of MODC algorithms with two independent phases: one for determining pointer-induced aliases and a subsequent one for propagating interprocedural side effects. These MODC algorithms are parameterized by the aliasing method used. The empirical results compare the performance of two dissimilar MODC algorithms: MODC(FSAlias) uses a flow-sensitive, calling-context-sensitive interprocedural alias analysis; MODC(FIAlias) uses a flow-insensitive, calling-context-insensitive alias analysis which is much faster, but less accurate. These two algorithms were profiled on 45 programs ranging in size from 250 to 30,000 lines of C code, and the results demonstrate dramatically the possible cost-precision tradeoffs. This first comparative implementation of MODC analyses offers insight into the differences between flow/context-sensitive and flow/context-insensitive analyses. The analysis cost versus precision tradeoffs in side-effect information obtained are reported. The results show, surprisingly, that the precision of flow-sensitive side-effect analysis is not always prohibitive in cost, and that the precision of flow-insensitive analysis is substantially better than worst-case estimates.
Demand Interprocedural Dataflow Analysis
, 1995
Cited by 83 (9 self)
An exhaustive dataflow analysis algorithm associates with each point in a program a set of “dataflow facts” that are guaranteed to hold whenever that point is reached during program execution. By contrast, a demand dataflow analysis algorithm determines whether a single given dataflow fact holds at a single given point. This paper presents a new demand algorithm for interprocedural dataflow analysis. The new algorithm has three important properties:
● It provides precise (meet over all interprocedurally valid paths) solutions to a large class of problems.
● It has a polynomial worst-case cost for both a single demand and a sequence of all possible demands.
● The worst-case total cost of the sequence of all possible demands is no worse than the worst-case cost of a single run of the current best exhaustive algorithm.
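The demand formulation can be sketched in miniature. This is an illustration of the idea only, not the paper's algorithm; the (point, fact) graph encoding is hypothetical. A query asks whether fact d holds at point p, and is answered by searching backward from (p, d), stopping as soon as the entry's Lambda node is found — so only the part of the graph relevant to the query is ever visited:

```python
def demand_query(rev_edges, point, fact, entry):
    """Does `fact` hold at `point`?  Search backward from (point, fact)."""
    stack, seen = [(point, fact)], set()
    while stack:
        node = stack.pop()
        if node == (entry, 0):          # reached the entry's Lambda fact
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(rev_edges.get(node, []))
    return False

# Reverse edges of a tiny exploded graph: fact 1 is generated between
# points 0 and 1, fact 2 between points 1 and 2, and both then survive.
rev_edges = {
    (1, 0): [(0, 0)], (2, 0): [(1, 0)],
    (1, 1): [(0, 0)], (2, 1): [(1, 1)], (2, 2): [(1, 0)],
}
print(demand_query(rev_edges, 2, 1, entry=0))   # True
print(demand_query(rev_edges, 2, 3, entry=0))   # False: fact 3 never holds
```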
Interprocedural Conditional Branch Elimination
, 1997
Cited by 74 (17 self)
The existence of statically detectable correlation among conditional branches enables their elimination, an optimization that has a number of benefits. This paper presents techniques to determine whether an interprocedural execution path leading to a conditional branch exists along which the branch outcome is known at compile time, and then to eliminate the branch along this path through code restructuring. The technique consists of a demand-driven interprocedural analysis that determines whether a specific branch outcome is correlated with prior statements or branch outcomes. The optimization is performed using a code restructuring algorithm that replicates code to separate out the paths with correlation. When the correlated path is affected by a procedure call, the restructuring is based on procedure entry splitting and exit splitting. The entry splitting transformation creates multiple entries to a procedure, and the exit splitting transformation allows a procedure to return control...
Demand-driven pointer analysis
 In Proceedings of the ACM SIGPLAN 2001 Conference on Programming Language Design and Implementation
, 2001
A Practical Framework for Demand-Driven Interprocedural Data Flow Analysis
 ACM Transactions on Programming Languages and Systems
, 1998
Cited by 62 (10 self)
In this article, we present a general framework for developing demand-driven interprocedural data flow analyzers and report our experience in evaluating the performance of this approach. A demand for data flow information is modeled as a set of queries. The framework includes a generic demand-driven algorithm that determines the response to a query by iteratively applying a system of query propagation rules. The propagation rules yield precise responses for the class of distributive finite data flow problems. We also describe a two-phase framework variation to accurately handle nondistributive problems. A performance evaluation of our demand-driven approach is presented for two data flow problems, namely, reaching definitions and copy constant propagation. Our experiments show that demand-driven analysis performs well in practice, reducing both time and space requirements when compared with exhaustive analysis.
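As a hedged miniature of what query propagation rules can look like — the statement encoding and all names here are invented, not the framework's — consider copy constant propagation: a query "is x == c at point p?" is rewritten backward through each statement until it resolves at a definition or reaches the entry:

```python
def answer(stmts, p, var, const):
    """Is var == const at point p?  stmts[i] runs between points i and i+1."""
    while p > 0:
        p -= 1
        kind, lhs, rhs = stmts[p]
        if lhs != var:
            continue                  # rule: irrelevant statement, keep query
        if kind == "const":           # rule: x = c resolves the query
            return rhs == const
        var = rhs                     # rule: x = y rewrites the query about y
    return False                      # entry reached without resolution

# a = 7; b = a; c = b  — encoded one statement per program point.
prog = [("const", "a", 7), ("copy", "b", "a"), ("copy", "c", "b")]
print(answer(prog, 3, "c", 7))   # True: c = b = a = 7
print(answer(prog, 3, "c", 8))   # False
```

Each loop iteration applies exactly one propagation rule to the current query, which is the "iteratively applying a system of query propagation rules" structure the abstract describes, restricted here to a single straight-line procedure.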
Transparent dynamic optimization: The design and implementation of Dynamo
, 1999
Cited by 52 (2 self)
Dynamic optimization refers to the runtime optimization of a native program binary. This report describes the design and implementation of Dynamo, a prototype dynamic optimizer that is capable of optimizing a native program binary at runtime. Dynamo is a realistic implementation, not a simulation, that is written entirely in user-level software, and runs on a PA-RISC machine under the HP-UX operating system. Dynamo does not depend on any special programming language, compiler, operating system or hardware support. Contrary to ...
Generation of efficient interprocedural analyzers with PAG
 In Proceedings of the Second International Symposium on Static Analysis
, 1995
Cited by 51 (7 self)
To produce high-quality code, modern compilers use global optimization algorithms based on abstract interpretation. These algorithms are rather complex; their implementation is therefore a non-trivial and error-prone task. However, since they are based on a common theory, they have large similar parts. We conclude that analyzer writing should be replaced by analyzer generation. We present the tool PAG, which has a high-level functional input language for specifying data flow analyses. It supports the specification of even recursive data structures and is therefore not limited to bit vector problems. PAG generates efficient analyzers which can be easily integrated into existing compilers. The analyzers are interprocedural; they can handle recursive procedures with local variables and higher-order functions. PAG has been successfully tested by generating several analyzers (e.g. alias analysis, constant propagation) for an industrial-quality ANSI C and Fortran 90 compiler.