Results 1–9 of 9
Pointer Analysis for Multithreaded Programs
 ACM SIGPLAN '99, 1999
Abstract

Cited by 142 (13 self)
This paper presents a novel interprocedural, flow-sensitive, and context-sensitive pointer analysis algorithm for multithreaded programs that may concurrently update shared pointers. For each pointer and each program point, the algorithm computes a conservative approximation of the memory locations to which that pointer may point. The algorithm correctly handles a full range of constructs in multithreaded programs, including recursive functions, function pointers, structures, arrays, nested structures and arrays, pointer arithmetic, casts between pointer variables of different types, heap and stack allocated memory, shared global variables, and thread-private global variables. We have implemented the algorithm in the SUIF compiler system and used the implementation to analyze a sizable set of multithreaded programs written in the Cilk multithreaded programming language. Our experimental results show that the analysis has good precision and converges quickly for our set of Cilk programs.
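The per-program-point points-to map the abstract describes can be illustrated with a deliberately tiny sketch. The statement encoding, variable names, and strong update on assignment below are invented simplifications for straight-line code, not the paper's algorithm (which also handles concurrency, recursion, heap objects, and more):

```python
# Toy flow-sensitive points-to sketch: for each program point, record
# which abstract locations each pointer may reference. Statement forms
# ("addr" for p = &x, "copy" for p = q) are illustrative only.

def points_to(stmts):
    env = {}          # pointer -> set of abstract locations
    states = []       # one snapshot per program point
    for op, lhs, rhs in stmts:
        if op == "addr":                       # p = &x
            env[lhs] = {rhs}
        elif op == "copy":                     # p = q
            env[lhs] = set(env.get(rhs, ()))
        states.append({k: set(v) for k, v in env.items()})
    return states

prog = [("addr", "p", "x"), ("copy", "q", "p"), ("addr", "p", "y")]
final = points_to(prog)[-1]
# After the last statement: p may point to y, q may point to x
```

A real flow-sensitive analysis would additionally merge states at join points by set union; the snapshot list here is the "per program point" aspect the abstract refers to.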
Interprocedural Pointer Alias Analysis
 ACM Transactions on Programming Languages and Systems, 1999
Abstract

Cited by 107 (8 self)
In this article, we describe approximation methods for computing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the following contributions:
Parallelism for Free: Efficient and Optimal Bitvector Analyses for Parallel Programs
1994
Abstract

Cited by 48 (3 self)
In this paper we show how to construct optimal bitvector analysis algorithms for parallel programs with shared memory that are as efficient as their purely sequential counterparts, and which can easily be implemented. Whereas the complexity result is rather obvious, our optimality result is a consequence of a new Kam/Ullman-style Coincidence Theorem. Thus, the important merits of sequential bitvector analyses survive the introduction of parallel statements.
Keywords: Parallelism, interleaving semantics, synchronization, program optimization, data flow analysis, bitvector problems, definition-use chains, partial redundancy elimination, partial dead code elimination.
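As a reminder of what a sequential bitvector analysis looks like, here is a minimal round-robin liveness solver iterated to the fixed point. The three-block CFG and its gen/kill sets are invented for illustration; the paper's point is that such analyses carry over to parallel statements at no extra cost:

```python
# Round-robin iterative solver for liveness, a classic bitvector
# (gen/kill) backward data flow problem. CFG and sets are illustrative.

def liveness(succ, gen, kill):
    live_in = {n: set() for n in succ}
    changed = True
    while changed:
        changed = False
        for n in succ:
            # live-out = union of live-in over all successors
            live_out = set().union(*(live_in[s] for s in succ[n]))
            new_in = gen[n] | (live_out - kill[n])
            if new_in != live_in[n]:
                live_in[n] = new_in
                changed = True
    return live_in

# Toy CFG: 1 -> 2 -> 3; block 2 uses x, block 3 defines x and uses y.
succ = {1: [2], 2: [3], 3: []}
gen  = {1: set(), 2: {"x"}, 3: {"y"}}
kill = {1: set(), 2: set(), 3: {"x"}}
result = liveness(succ, gen, kill)
```

The coincidence theorem mentioned in the abstract is about when this iterative (maximal fixed point) solution equals the meet-over-all-paths solution, even with interleaved parallel execution.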
Semantic Analysis of Shared-Memory Concurrent Languages using Abstract Model-Checking
1995
Abstract

Cited by 21 (1 self)
Régis Cridlig, Laboratoire d'Informatique de l'École Normale Supérieure (URA CNRS 1327). In this article we present a true-concurrent operational semantics of a Pascal-like language with a parallel operator and shared memory. This semantics is based on a higher-dimensional transition system that is able to model the asynchronous execution of concurrent operations. We show how it can be usefully abstracted to finite automata via abstract interpretation, using folding of states and appropriate widening operators. Then we compute static properties relevant to the standard concurrent execution of the program by means of model-checking on the abstract automata that were previously derived; for instance, approximations of the values of shared variables and temporal properties about standard execution paths can be obtained effectively with a high degree of accuracy. In the area of static analysis of concurrent programs, that is the effective computat...
Data Flow Analysis of Parallel Programs
1995
Abstract

Cited by 5 (2 self)
Data flow analysis is the prerequisite for performing optimizations such as code motion of partially redundant expressions on imperative sequential programs. To apply these transformations to parallel imperative programs, the notion of data flow must be extended to concurrent programs. The additional parallel source language features are: shared memory and nested parallel statements (PAR). The underlying interleaving semantics of the concurrently executed processes results in the so-called state space explosion, which on first appearance prevents the computation of the meet-over-all-paths solution needed for data flow analysis. For the class of bitvector data flow problems, we can show that not all interleavings are needed to compute the meet-over-all-paths solution. Based on that, we can give simple data flow equations representing the data flow effects of the PAR statement. The definition of a parallel control flow graph leads to an efficient extension of Kildall's algorithm ...
Constant Propagation in Explicitly Parallel Programs
 In Proceedings of Euro-Par, LNCS 1470, 1998
Abstract

Cited by 2 (0 self)
Constant propagation (CP) is a powerful, practically relevant optimization of sequential programs. However, only few approaches have been proposed that aim at making CP available for parallel programs. In fact, because of the computational complexity paraphrased by the catchphrase "state explosion problem," the successful transfer of sequential techniques is currently essentially restricted to bitvector-based optimizations. Because of their structural simplicity, these can be extended to parallel programs at almost no cost on the implementation and computation side. CP, however, is beyond this class. In this paper we present a powerful algorithm for constant propagation in parallel programs, which is based on an extension of the framework underlying the successful transfer of bitvector problems, and which can be implemented as easily and as efficiently as its sequential counterpart for the simple constants computed by state-of-the-art sequential optimizers.
Keywords: (Explicit) parallelism...
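The "simple constants" that sequential optimizers compute can be sketched with a flat constant lattice over straight-line code. The statement encoding and the NAC ("not a constant") marker below are illustrative conventions, not the paper's parallel algorithm:

```python
# Sketch of simple-constants propagation with a flat lattice:
# a variable is either bound to a known integer or to NAC.
# Statement encoding is illustrative: (var, 2) binds a literal,
# (var, ('+', x, y)) adds, (var, 'x') copies.

NAC = object()  # "not a constant": unknown or conflicting value

def propagate(stmts, env=None):
    env = dict(env or {})
    for var, expr in stmts:
        if isinstance(expr, int):              # literal assignment
            env[var] = expr
        elif isinstance(expr, tuple):          # ('+', x, y)
            _, x, y = expr
            a, b = env.get(x, NAC), env.get(y, NAC)
            env[var] = a + b if NAC not in (a, b) else NAC
        else:                                  # copy from another variable
            env[var] = env.get(expr, NAC)
    return env

env = propagate([("a", 2), ("b", 3), ("c", ("+", "a", "b"))])
# c folds to the constant 5
```

At control flow joins (and, in the parallel setting, across interference from sibling threads) a real analysis would additionally meet environments, mapping disagreeing bindings to NAC; that meet is where the state-explosion difficulty the abstract mentions arises.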
Optimal Code Motion for Parallel Programs
1995
Abstract

Cited by 1 (1 self)
Code motion is well-known as a powerful technique for the optimization of sequential programs. It improves run-time efficiency by avoiding unnecessary recomputations of values, and it is even possible to obtain computationally optimal results, i.e., results where no program path can be improved any further by means of semantics-preserving code motion. In this paper we present a code motion algorithm that for the first time achieves this optimality result for parallel programs. Fundamental is the framework of [KSV1], which shows how to perform optimal bitvector analyses for parallel programs as easily and as efficiently as for sequential ones. Moreover, the analyses can easily be adapted from their sequential counterparts. This is demonstrated here by constructing a computationally optimal code motion algorithm for parallel programs by systematically extending its counterpart for sequential programs, the busy code motion transformation of [KRS1, KRS2].
Keywords: Parallelism, interleaving...
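The payoff the abstract describes, avoiding recomputation of a value, can be shown in its simplest sequential form. This toy pass merely reuses an earlier identical computation in straight-line code; it illustrates the effect only, and is not the busy code motion transformation (which hoists computations across the control flow graph under down-safety and earliestness conditions):

```python
# Toy redundancy elimination over a straight-line, tuple-encoded IR:
# if an expression was already computed with unchanged operands,
# replace the recomputation with a reuse of the earlier result.
# The IR encoding is illustrative only.

def eliminate_redundant(stmts):
    seen = {}          # expression -> variable already holding its value
    out = []
    for lhs, expr in stmts:
        if isinstance(expr, tuple) and expr in seen:
            out.append((lhs, seen[expr]))      # reuse earlier computation
        else:
            if isinstance(expr, tuple):
                seen[expr] = lhs
            out.append((lhs, expr))
    return out

prog = [("t1", ("+", "a", "b")), ("t2", ("+", "a", "b"))]
opt = eliminate_redundant(prog)
# t2 now copies t1 instead of recomputing a + b
```

Computational optimality in the paper's sense means no semantics-preserving placement of the computations can reduce the number of evaluations on any program path any further.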
Semantic Analysis of a Concurrent Pascal (Extended Abstract)
Abstract
In this article we present a true-concurrent operational semantics of a Pascal-like language with a parallel operator and shared memory. This semantics is based on a higher-dimensional transition system that can model the asynchronous execution of concurrent operations. We show how it can be usefully abstracted to finite automata via abstract interpretation, using folding of states and appropriate widening operators. Then we compute static properties about the standard concurrent execution of the program by means of model-checking on the abstract automata that were previously derived: for instance, approximations of the values of shared variables and temporal properties about standard execution paths can be computed effectively. In the area of static analysis of concurrent programs, that is, effectively computing an approximate description of their execution for the purpose of verification or optimization, much work remains to be done: there is not (not yet?) any standard m...