Results 1–10 of 102
Interprocedural dataflow analysis via graph reachability
, 1994
Abstract

Cited by 456 (34 self)
The paper shows how a large class of interprocedural dataflow-analysis problems can be solved precisely in polynomial time by transforming them into a special kind of graph-reachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow functions must distribute over the confluence operator (either union or intersection). This class of problems includes—but is not limited to—the classical separable problems (also known as “gen/kill” or “bit-vector” problems)—e.g., reaching definitions, available expressions, and live variables. In addition, the class of problems that our techniques handle includes many non-separable problems, including truly-live variables, copy constant propagation, and possibly-uninitialized variables. Results are reported from a preliminary experimental study of C programs (for the problem of finding possibly-uninitialized variables).
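The graph-reachability idea in this abstract can be sketched in a few lines: each distributive dataflow function is encoded as edges between (program point, fact) pairs of an "exploded" graph, and a fact holds at a point exactly when that pair is reachable from the entry node. The following is a minimal intraprocedural sketch only (it omits the call/return edges that are the paper's main contribution); the program points, facts, and edge relation are invented for illustration.

```python
from collections import deque

# Minimal sketch of "dataflow analysis as graph reachability" for a
# possibly-uninitialized-variables example. The special fact LAMBDA
# threads control flow and lets "gen" edges introduce new facts.
LAMBDA = "0"

def reachable_facts(edges, entry, points):
    """edges: {(point, fact): set of (point, fact)} -- the exploded graph."""
    seen = {(entry, LAMBDA)}
    work = deque(seen)
    while work:                      # plain BFS over the exploded graph
        node = work.popleft()
        for succ in edges.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    return {p: {f for (q, f) in seen if q == p and f != LAMBDA}
            for p in points}

# Invented example: p1 (entry) -> p2 (x = 5) -> p3 (use of x, y).
edges = {
    ("p1", LAMBDA): {("p2", LAMBDA), ("p2", "x"), ("p2", "y")},  # x, y uninit
    ("p2", LAMBDA): {("p3", LAMBDA)},
    ("p2", "y"): {("p3", "y")},   # y stays possibly uninitialized
    # no edge out of ("p2", "x"): the assignment to x kills the fact
}
result = reachable_facts(edges, "p1", ["p1", "p2", "p3"])
print(result)  # only "y" is possibly uninitialized at p3
```

Because every transfer function is just an edge relation, the whole analysis collapses to one breadth-first search, which is where the polynomial bound comes from.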
Bebop: A Symbolic Model Checker for Boolean Programs
, 2000
Abstract

Cited by 265 (25 self)
We present the design, implementation and empirical evaluation of Bebop, a symbolic model checker for boolean programs. Bebop represents control flow explicitly, and sets of states implicitly using BDDs. By harnessing the inherent modularity in procedural abstraction and exploiting the locality of variable scoping, Bebop is able to model check boolean programs with several thousand lines of code, hundreds of procedures, and several thousand variables in a few minutes.
Weighted pushdown systems and their application to interprocedural dataflow analysis
 Sci. of Comp. Prog
, 2003
Abstract

Cited by 142 (33 self)
Recently, pushdown systems (PDSs) have been extended to weighted PDSs, in which each transition is labeled with a value, and the goal is to determine the meet-over-all-paths value (for paths that meet a certain criterion). This paper shows how weighted PDSs yield new algorithms for certain classes of interprocedural dataflow-analysis problems.
Optimal Code Motion: Theory and Practice
, 1993
Abstract

Cited by 115 (18 self)
An implementation-oriented algorithm for lazy code motion is presented that minimizes the number of computations in programs while suppressing any unnecessary code motion in order to avoid superfluous register pressure. In particular, this variant of the original algorithm for lazy code motion works on flowgraphs whose nodes are basic blocks rather than single statements, as this format is standard in optimizing compilers. The theoretical foundations of the modified algorithm are given in the first part, where t-refined flowgraphs are introduced for simplifying the treatment of flowgraphs whose nodes are basic blocks. The second part presents the 'basic block' algorithm in standard notation, and gives directions for its implementation in standard compiler environments. Keywords: elimination of partial redundancies, code motion, data flow analysis (bit-vector, unidirectional, bidirectional), nondeterministic flowgraphs, t-refined flow graphs, critical edges, lifetimes of registers, com...
Demand-driven Computation of Interprocedural Data Flow
, 1995
Abstract

Cited by 82 (9 self)
This paper presents a general framework for deriving demand-driven algorithms for interprocedural data flow analysis of imperative programs. The goal of demand-driven analysis is to reduce the time and/or space overhead of conventional exhaustive analysis by avoiding the collection of information that is not needed. In our framework, a demand for data flow information is modeled as a set of data flow queries. The derived demand-driven algorithms find responses to these queries through a partial reversal of the respective data flow analysis. Depending on whether minimizing time or space is of primary concern, result caching may be incorporated in the derived algorithm. Our framework is applicable to interprocedural data flow problems with a finite domain set. If the problem's flow functions are distributive, the derived demand algorithms provide as precise information as the corresponding exhaustive analysis. For problems with monotone but nondistributive flow functions the provided dat...
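The query-based style this abstract describes can be illustrated with a tiny intraprocedural sketch: a single query "does the definition at node d reach point p?" is answered by a backward walk from p that stops at redefinitions, instead of computing reaching definitions everywhere. The CFG, node names, and def annotations below are invented; the paper's framework is interprocedural and far more general.

```python
# Demand-driven reaching-definitions sketch: answer one query by a
# backward traversal of the CFG rather than an exhaustive analysis.

def def_reaches(cfg, defs, var, d, p):
    """cfg: {node: [predecessor nodes]}; defs: {node: variable it defines}.
    Does the definition of `var` at node d reach the entry of node p?"""
    work, seen = [p], set()
    while work:
        n = work.pop()
        if n in seen:
            continue
        seen.add(n)
        for pred in cfg[n]:
            if pred == d:
                return True          # found a definition-clear path back to d
            if defs.get(pred) == var:
                continue             # killed by an intervening definition
            work.append(pred)
    return False

#   n1: x=1 --> n2: x=2 --> n4: use x
#           \-> n3: y=3 -->/
cfg  = {"n1": [], "n2": ["n1"], "n3": ["n1"], "n4": ["n2", "n3"]}
defs = {"n1": "x", "n2": "x", "n3": "y"}
print(def_reaches(cfg, defs, "x", "n1", "n4"))  # True (path via n3)
print(def_reaches(cfg, defs, "x", "n2", "n3"))  # False (no path from n2)
```

The traversal visits only the part of the CFG relevant to the query, which is the source of the time and space savings the abstract claims for demand-driven analysis.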
Precise Interprocedural Analysis through Linear Algebra
, 2004
Abstract

Cited by 82 (12 self)
We apply linear algebra techniques to precise interprocedural dataflow analysis. Specifically, we describe analyses that determine for each program point identities that are valid among the program variables whenever control reaches that program point. Our analyses fully interpret assignment statements with affine expressions on the right hand side while considering other assignments as nondeterministic and ignoring conditions at branches. Under this abstraction, the analysis computes the set of all affine relations and, more generally, all polynomial relations of bounded degree precisely. The running time of our algorithms is linear in the program size and polynomial in the number of occurring variables. We also show how to deal with affine preconditions and local variables and indicate how to handle parameters and return values of procedures.
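The linear-algebra core of this approach can be illustrated in isolation: an affine relation a0 + a1*x1 + ... + an*xn = 0 holds on a set of states exactly when the vector (a0, ..., an) lies in the null space of the matrix whose rows are (1, x1, ..., xn). The sketch below computes that null space with exact rational arithmetic; the sample states are invented, and none of the paper's interprocedural machinery (procedure effects, preconditions, local variables) is modeled.

```python
from fractions import Fraction

# Affine relations over sampled states as a null-space computation,
# done exactly with Fractions via Gauss-Jordan elimination.

def affine_relations(states):
    """Return a basis of all (a0, ..., an) with a0 + sum(ai*xi) = 0
    for every state (x1, ..., xn) in `states`."""
    rows = [[Fraction(1)] + [Fraction(v) for v in s] for s in states]
    cols = len(rows[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue                          # free column
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):            # clear the column elsewhere
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []                                # one basis vector per free column
    for free in (c for c in range(cols) if c not in pivots):
        vec = [Fraction(0)] * cols
        vec[free] = Fraction(1)
        for i, c in enumerate(pivots):
            vec[c] = -rows[i][free]
        basis.append(vec)
    return basis

# States (x, y) from a loop doing x += 1; y += 2 starting at (0, 0):
print(affine_relations([(0, 0), (1, 2), (2, 4)]))
# one relation: 0 - 2*x + 1*y = 0, i.e. y = 2x
```

Working over the rationals rather than floats is what makes "precisely" meaningful here: the null-space basis is exact, with no tolerance thresholds.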
Reducing Concurrent Analysis Under a Context Bound to Sequential Analysis
Abstract

Cited by 72 (12 self)
This paper addresses the analysis of concurrent programs with shared memory. Such an analysis is undecidable in the presence of multiple procedures. One approach used in recent work obtains decidability by providing only a partial guarantee of correctness: the approach bounds the number of context switches allowed in the concurrent program, and aims to prove safety, or find bugs, under the given bound. In this paper, we show how to obtain simple and efficient algorithms for the analysis of concurrent programs with a context bound. We give a general reduction from a concurrent program P, and a given context bound K, to a slightly larger sequential program P_s^K such that the analysis of P_s^K can be used to prove properties about P. The reduction introduces symbolic constants and assume statements in P_s^K. Thus, any sequential analysis that can deal with these two additions can be extended to handle concurrent programs as well, under the context bound. We give instances of the reduction for common program models used in model checking, such as Boolean programs, pushdown systems (PDSs), and symbolic PDSs.
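The paper's reduction itself is static and symbolic, but the notion of a context bound it relies on is easy to demonstrate dynamically. The brute-force explorer below (all names and the toy thread programs are invented) enumerates every interleaving of two straight-line threads that uses at most a given number of context switches and collects the reachable shared states; this is exactly the state space that a context-bounded analysis must cover.

```python
# Brute-force exploration of a 2-thread shared-state program under a
# context bound: the scheduler may switch threads at most `switches_left`
# times. Threads are lists of functions from shared state to shared state.

def explore(threads, pcs, state, active, switches_left, seen):
    """Return every shared state reachable within the context bound."""
    seen.add(state)
    prog, pc = threads[active], pcs[active]
    if pc < len(prog):               # let the active thread take one step
        new_pcs = list(pcs)
        new_pcs[active] += 1
        explore(threads, new_pcs, prog[pc](state), active, switches_left, seen)
    if switches_left > 0:            # or spend one switch of the budget
        explore(threads, pcs, state, 1 - active, switches_left - 1, seen)
    return seen

inc = lambda s: s + 1
dbl = lambda s: s * 2
t0 = [inc, inc]                      # thread 0: x += 1; x += 1
t1 = [dbl]                           # thread 1: x *= 2
print(sorted(explore([t0, t1], [0, 0], 0, 0, 2, set())))  # [0, 1, 2, 3, 4]
```

Note that state 3 (increment, switch, double, switch back, increment) needs two context switches, so it disappears when the bound is lowered to 1; that sensitivity to the bound is what makes the guarantee only partial.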
A Practical Framework for Demand-Driven Interprocedural Data Flow Analysis
 ACM Transactions on Programming Languages and Systems
, 1998
Abstract

Cited by 61 (10 self)
In this article, we present a general framework for developing demand-driven interprocedural data flow analyzers and report our experience in evaluating the performance of this approach. A demand for data flow information is modeled as a set of queries. The framework includes a generic demand-driven algorithm that determines the response to a query by iteratively applying a system of query propagation rules. The propagation rules yield precise responses for the class of distributive finite data flow problems. We also describe a two-phase framework variation to accurately handle nondistributive problems. A performance evaluation of our demand-driven approach is presented for two data flow problems, namely, reaching definitions and copy constant propagation. Our experiments show that demand-driven analysis performs well in practice, reducing both time and space requirements when compared with exhaustive analysis.
A relational approach to interprocedural shape analysis
 In 11th SAS
, 2004
Abstract

Cited by 55 (15 self)
This paper addresses the verification of properties of imperative programs with recursive procedure calls, heap-allocated storage, and destructive updating of pointer-valued fields, i.e., interprocedural shape analysis. It presents a way to harness some previously known approaches to interprocedural dataflow analysis, which in past work have been applied only to much less rich settings, for interprocedural shape analysis.

1 Introduction

This paper concerns techniques for static analysis of recursive programs that manipulate heap-allocated storage and perform destructive updating of pointer-valued fields. The goal is to recover shape descriptors that provide information about the characteristics of the data structures that a program's pointer variables can point to. Such information can be used to help programmers understand certain aspects of the program's behavior, to verify properties of the program, and to optimize or parallelize the program.