Results 11-20 of 39
Demand Interprocedural Program Analysis Using Logic Databases
, 1994
Abstract

Cited by 54 (8 self)
This paper describes how algorithms for demand versions of interprocedural program-analysis problems can be obtained from their exhaustive counterparts essentially for free, by applying the so-called magic-sets transformation that was developed in the logic-programming and deductive-database communities. Applications to interprocedural dataflow analysis and interprocedural program slicing are described. 1 INTRODUCTION Interprocedural analysis concerns the static examination of a program that consists of multiple procedures. Its purpose is to determine certain kinds of summary information associated with the elements of a program (such as reaching definitions, available expressions, live variables, etc.). Most treatments of interprocedural analysis address the exhaustive version of the problem: summary information is to be reported for all elements of the program. This paper concerns the solution of demand versions of interprocedural analysis problems: summary information is to be rep...
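The exhaustive-versus-demand contrast this abstract draws can be sketched in Python. The graph, names, and query below are hypothetical illustrations of the effect a magic-sets-style transformation achieves; the actual transformation operates declaratively on logic programs and is not reproduced here.

```python
# Hedged sketch: exhaustive vs. demand evaluation of a transitive
# reachability relation (a stand-in for propagating dataflow facts).
# The graph and all names are hypothetical, not from the paper.

EDGES = {"a": ["b"], "b": ["c"], "c": [], "x": ["y"], "y": []}

def exhaustive_reachable():
    """Exhaustive analysis: compute reach(u, v) for ALL pairs via a
    naive bottom-up fixed point."""
    reach = {(u, v) for u, vs in EDGES.items() for v in vs}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(reach):
            for w in EDGES.get(v, []):
                if (u, w) not in reach:
                    reach.add((u, w))
                    changed = True
    return reach

def demand_reachable(src, visited=None):
    """Demand version: explore only facts relevant to the query `src`,
    mirroring what the magic-sets transformation achieves declaratively."""
    if visited is None:
        visited = set()
    out = set()
    for v in EDGES.get(src, []):
        if v not in visited:
            visited.add(v)
            out.add(v)
            out |= demand_reachable(v, visited)
    return out
```

A query such as `demand_reachable("a")` never touches the unrelated `x`/`y` component, whereas the exhaustive solver computes every pair.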
Solving demand versions of interprocedural analysis problems
 in Proceedings of the Fifth International Conference on Compiler Construction
Abstract

Cited by 45 (6 self)
This paper concerns the solution of demand versions of interprocedural analysis problems. In a demand version of a program-analysis problem, some piece of summary information (e.g., the dataflow facts holding at a given point) is to be reported only for a single program element of interest (or a small number of elements of interest). Because the summary information at one program point typically depends on summary information from other points, an important issue is to minimize the number of other points for which (transient) summary information is computed and/or the amount of information computed at those points. The paper describes how algorithms for demand versions of program-analysis problems can be obtained from their exhaustive counterparts essentially for free, by applying the so-called “magic-sets” transformation that was developed in the logic-programming and deductive-database communities.
Analysis and transformation in the ParaScope Editor
 IN PROCEEDINGS OF THE 1991 ACM INTERNATIONAL CONFERENCE ON SUPERCOMPUTING
, 1991
Abstract

Cited by 29 (13 self)
The ParaScope Editor is a new kind of interactive parallel programming tool for developing scientific Fortran programs. It assists the knowledgeable user by displaying the results of sophisticated program analyses and by providing editing and a set of powerful interactive transformations. After an edit or parallelism-enhancing transformation, the ParaScope Editor quickly and incrementally updates both the analyses and the source. This paper describes the underlying implementation of the ParaScope Editor, paying particular attention to the analysis and representation of dependence information and its reconstruction after changes to the program. 1 Introduction The ParaScope Editor is a tool designed to help skilled users interactively transform a sequential Fortran 77 program into a parallel program with explicit parallel constructs, such as those in PCF Fortran [40]. In a language like PCF Fortran, the principal mechanism for the introduction of parallelism is the parallel loop, which specifi...
Reducing the Cost of Data Flow Analysis By Congruence Partitioning
 In International Conference on Compiler Construction
, 1994
Abstract

Cited by 21 (1 self)
Data flow analysis expresses the solution of an information gathering problem as the fixed point of a system of monotone equations. This paper presents a technique to improve the performance of data flow analysis by systematically reducing the size of the equation system in any monotone data flow problem. Reductions result from partitioning the equations in the system according to congruence relations. We present a fast O(n log n) partitioning algorithm, where n is the size of the program, that exploits known algebraic properties in equation systems. From the resulting partition a reduced equation system is constructed that is minimized with respect to the computed congruence relation while still providing the data flow solution at all program points. 1 Introduction Along with the growing importance of static data flow analysis in current optimizing and parallelizing compilers comes an increased concern about the high time and space requirements of solving data flow problems. Experi...
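The equation-system view in this abstract can be illustrated with a toy solver. The system and the one-pass syntactic congruence test below are hypothetical simplifications: the paper's O(n log n) algorithm refines the partition iteratively, which a single syntactic grouping does not capture.

```python
# Hedged sketch (hypothetical system, not the paper's algorithm): a
# data flow problem as the fixed point of monotone set equations, plus
# a crude "congruence" pass that groups variables whose equations are
# syntactically identical and could therefore share one solution.

# Each equation: var -> (gen_set, input_vars); meaning
#   var = gen_set ∪ (union of the inputs' values)
EQNS = {
    "n1": (frozenset({"d1"}), []),
    "n2": (frozenset(), ["n1"]),
    "n3": (frozenset(), ["n1"]),   # identical RHS to n2
    "n4": (frozenset({"d2"}), ["n2", "n3"]),
}

def congruence_classes(eqns):
    """Group variables with syntactically identical right-hand sides."""
    classes = {}
    for var, (gen, ins) in eqns.items():
        classes.setdefault((gen, tuple(ins)), []).append(var)
    return list(classes.values())

def solve(eqns):
    """Naive round-robin iteration to the least fixed point."""
    val = {v: frozenset() for v in eqns}
    changed = True
    while changed:
        changed = False
        for v, (gen, ins) in eqns.items():
            new = gen.union(*(val[i] for i in ins)) if ins else gen
            if new != val[v]:
                val[v] = new
                changed = True
    return val
```

Here `n2` and `n3` fall into one class, so a reduced system could solve their shared equation once while still yielding the full solution at every point.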
Lattice Frameworks for Multisource and Bidirectional Data Flow Problems
 ACM Transactions on Programming Languages and Systems
, 1995
Abstract

Cited by 17 (1 self)
this paper is to provide a natural data flow framework encoding for multisource problems, called the k-tuple framework. K-tuple frameworks provide a unifying approach for several important classes of problems. The set of multisource problems includes the bidirectional problems, in which the information depends upon both control predecessors and successors.
A framework for partial data flow analysis
 Proceedings IEEE International Conference on Software Maintenance (ICSM
, 1994
Abstract

Cited by 11 (6 self)
Although data flow analysis was first developed for use in compilers, its usefulness is now recognized in many software tools. Because of its compiler origins, the computation of data flow for software tools is based on the traditional exhaustive data flow framework. However, although this framework is useful for computing data flow for compilers, it is not the most appropriate for software tools, particularly those used in the maintenance stage. In maintenance, testing and debugging is typically performed in response to program changes. As such, the data flow required is demand driven from the changed program points. Rather than compute the data flow exhaustively using the traditional data flow framework, we present a framework for partial analysis. The framework includes a specification language enabling the specification of the demand driven data flow desired by a user. From the specification, a partial analysis algorithm is automatically generated using an L-attributed definition for the grammar of the specification language. A specification of a demand driven data flow problem expresses characteristics that define the kind of traversal needed in the partial analysis and the type of dependencies to be captured. The partial analysis algorithms are efficient in that only as much of the program is analyzed as actually needed, thus reducing the time and space requirements over exhaustively computing the data flow information. The algorithms are shown to be useful when debugging and testing programs during maintenance. Keywords: control flow graph (CFG), program debugging, program testing, code optimization.
Incremental analysis of side effects for C software systems
 PROCEEDINGS OF THE NINETEENTH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING
, 1997
Abstract

Cited by 9 (1 self)
Incremental static analysis seeks to efficiently update semantic information about an evolving software system, without recomputing "from scratch." Interprocedural modification side effect analysis (MOD) calculates the set of variables possibly modified by execution of a procedure or a statement. We introduce a partial incrementalization of MOD for C systems using the hybrid method and present results of a study of 27 C programs that predicts that our incremental MOD analysis will be substantially cheaper than exhaustive analysis for many program changes.
A Comprehensive Approach to Parallel Data Flow Analysis
 In Int. Conf. Supercomputing
, 1992
Abstract

Cited by 6 (1 self)
We present a comprehensive approach to performing data flow analysis in parallel. We identify three types of parallelism inherent in the data flow solution process: independent-problem parallelism, separate-unit parallelism and algorithmic parallelism; and describe a unified framework to exploit them. Our investigations of typical Fortran programs reveal an abundance of the last two types of parallelism. In particular, we illustrate the exploitation of algorithmic parallelism in the design of our parallel hybrid data flow analysis algorithms. We report on the empirical performance of the parallel hybrid algorithm for the Reaching Definitions problem and the structural characteristics of the program flow graphs that affect algorithm performance. Keywords. Data flow analysis, parallel algorithms, parallel data flow analysis. 1 Introduction 1.1 Motivation Data flow analysis is a compile-time analysis technique that gathers information about the flow of data in the program. Data flow i...
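As a point of reference for the Reaching Definitions benchmark the abstract mentions, here is a conventional sequential worklist solver. The tiny CFG and the GEN/KILL sets are hypothetical; the paper's contribution is parallelizing solvers of this kind, which is not reproduced here.

```python
# Hedged sketch: sequential worklist solver for Reaching Definitions.
# CFG: entry -> b1 -> b2 -> {b1, exit}; d1 defined in b1, d2 in b2,
# each killing the other (hypothetical example).
from collections import deque

SUCC = {"entry": ["b1"], "b1": ["b2"], "b2": ["b1", "exit"], "exit": []}
PRED = {"entry": [], "b1": ["entry", "b2"], "b2": ["b1"], "exit": ["b2"]}
GEN  = {"entry": set(), "b1": {"d1"}, "b2": {"d2"}, "exit": set()}
KILL = {"entry": set(), "b1": {"d2"}, "b2": {"d1"}, "exit": set()}

def reaching_definitions():
    """IN[n] = union of OUT over predecessors;
    OUT[n] = GEN[n] ∪ (IN[n] − KILL[n]); iterate until stable."""
    IN = {n: set() for n in SUCC}
    OUT = {n: set(GEN[n]) for n in SUCC}
    work = deque(SUCC)
    while work:
        n = work.popleft()
        IN[n] = set().union(*(OUT[p] for p in PRED[n])) if PRED[n] else set()
        new_out = GEN[n] | (IN[n] - KILL[n])
        if new_out != OUT[n]:
            OUT[n] = new_out
            work.extend(SUCC[n])  # successors must be re-examined
    return IN, OUT
```

The per-node recomputation steps in the `while` loop are exactly the units that a parallel or hybrid solver can distribute across workers.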
A Reference Chain Approach for Live Variables
 TECH. REP. CSE 94029, OREGON GRADUATE INSTITUTE. APR
, 1994
Abstract

Cited by 6 (0 self)
The classical dataflow approach to determine the set of variables that are live at some point in a program entails using an iterative algorithm across the entire program, usually with costly bitvector data structures. Methods based on sparse evaluation graphs exist, but these are solved on a per-variable basis and are also iterative. Inspired by our group's interest in sparse representations of use-def and def-use reference chains, we are investigating still another approach. In this paper, we present an algorithm for determining liveness of variables based on upwards exposed use information at control flow branch points, analogous to the collection of reaching definition information in the popular Static Single Assignment form. We demonstrate the use of our technique with two applications, building interference graphs and eliminating dead code, and show that this new reference chain approach has important advantages over previous methods.
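The classical iterative baseline that this abstract contrasts with its reference-chain approach can be sketched as follows; the CFG and the USE/DEF sets are hypothetical, and the paper's own technique (propagating upwards-exposed uses through branch points) is not shown.

```python
# Hedged sketch: classical backward liveness on a diamond-shaped CFG.
# b1 branches to b2/b3, which rejoin at b4 (hypothetical example):
# b1 uses x and defines y; b2 uses x; b3 defines x; b4 uses y.
SUCC = {"b1": ["b2", "b3"], "b2": ["b4"], "b3": ["b4"], "b4": []}
USE  = {"b1": {"x"}, "b2": {"x"}, "b3": set(), "b4": {"y"}}
DEF  = {"b1": {"y"}, "b2": set(), "b3": {"x"}, "b4": set()}

def live_variables():
    """LIVE_in[n] = USE[n] ∪ (LIVE_out[n] − DEF[n]), where LIVE_out[n]
    is the union of LIVE_in over successors; iterate to a fixed point."""
    live_in = {n: set() for n in SUCC}
    changed = True
    while changed:
        changed = False
        for n in SUCC:
            out = set().union(*(live_in[s] for s in SUCC[n])) if SUCC[n] else set()
            new_in = USE[n] | (out - DEF[n])
            if new_in != live_in[n]:
                live_in[n] = new_in
                changed = True
    return live_in
```

Note how information reaching `b1` depends on both successors, which is the kind of branch-point merging the reference-chain approach targets with upwards-exposed use sets.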