Results 1–10 of 21
Closure Analysis in Constraint Form
 ACM Transactions on Programming Languages and Systems
, 1995
Abstract

Cited by 57 (5 self)
Interpretation Bondorf's definition can be simplified considerably. To see why, consider the second component of CMap(E) × CEnv(E). This component is updated only in b(E₁ @ᵢ E₂)μρ and read only in b(xˡ)μρ. The key observation is that both these operations can be done on the first component instead. Thus, we can omit the use of CEnv(E). By rewriting Bondorf's definition according to this observation, we arrive at the following definition. As with Bondorf's definition, we assume that all labels are distinct. Definition 2.3.1. We define m : (E : Λ) → CMap(E) → CMap(E)

  m(xˡ)μ = μ
  m(λˡx.E)μ = (m(E)μ) ⊔ ⟨[[λˡ]] ↦ {l}⟩
  m(E₁ @ᵢ E₂)μ = (m(E₁)μ) ⊔ (m(E₂)μ) ⊔ ⊔_{l ∈ μ(var(E₁))} (⟨[[xₗ]] ↦ μ(var(E₂))⟩ ⊔ ⟨[[@ᵢ]] ↦ μ(var(body(l)))⟩)

We can now do closure analysis of E by computing fix(m(E)). A key question is: is the simpler abstract interpretation equivalent to Bondorf's? We might attempt to prove this u...
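The equations above can be read as a monotone map over finite sets of lambda labels, so fix(m(E)) is computable by naive iteration. Below is a minimal sketch of that fixpoint computation in the style of 0CFA; the term encoding and the helper names (`var_of`, `lambdas`, `step`) are ours, not the paper's.

```python
from collections import defaultdict

# Term encoding (ours, not the paper's):
#   ('var', x)           -- variable occurrence
#   ('lam', l, x, body)  -- lambda with label l binding x
#   ('app', i, fun, arg) -- application with label i

def var_of(e):
    # var(E): the map key that stands for E's value
    if e[0] == 'var':
        return e[1]
    if e[0] == 'lam':
        return ('lam', e[1])
    return ('app', e[1])

def lambdas(e, acc=None):
    # collect (bound variable, body) for every lambda label in E
    acc = {} if acc is None else acc
    if e[0] == 'lam':
        acc[e[1]] = (e[2], e[3])
        lambdas(e[3], acc)
    elif e[0] == 'app':
        lambdas(e[2], acc)
        lambdas(e[3], acc)
    return acc

def step(e, mu, lams):
    # one pass of m(E) over the current map mu
    if e[0] == 'lam':
        mu[('lam', e[1])] |= {e[1]}
        step(e[3], mu, lams)
    elif e[0] == 'app':
        step(e[2], mu, lams)
        step(e[3], mu, lams)
        for l in set(mu[var_of(e[2])]):
            x, body = lams[l]
            mu[x] |= mu[var_of(e[3])]              # argument flows to parameter
            mu[('app', e[1])] |= mu[var_of(body)]  # body result flows to call site

def closure_analysis(e):
    # fix(m(E)): iterate until the map stops growing
    lams = lambdas(e)
    mu = defaultdict(set)
    while True:
        before = {k: set(v) for k, v in mu.items()}
        step(e, mu, lams)
        if mu == before:
            return dict(mu)

# (lambda^1 x. x) applied at site 0 to (lambda^2 y. y)
term = ('app', 0, ('lam', 1, 'x', ('var', 'x')), ('lam', 2, 'y', ('var', 'y')))
analysis = closure_analysis(term)
print(analysis[('app', 0)])  # -> {2}: the call can only yield lambda 2
```

Since each iteration only adds labels to finitely many finite sets, the iteration terminates; this is the brute-force reading of fix(m(E)), not an efficient constraint solver.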
A modular, polyvariant, and type-based closure analysis
 In ICFP ’97
Abstract

Cited by 54 (1 self)
We observe that the principal typing property of a type system is the enabling technology for modularity and separate compilation [10]. We use this technology to formulate a modular and polyvariant closure analysis, based on rank 2 intersection types annotated with control-flow information. Modularity manifests itself in a syntax-directed, annotated-type inference algorithm that can analyse program fragments containing free variables: a principal typing property is used to formalise it. Polyvariance manifests itself in the separation of different behaviours of the same function at its different uses: this is formalised via the rank 2 intersection types. As the rank 2 intersection type discipline types at least all (core) ML programs, our analysis can be used in the separate compilation of such programs.
Control-Flow Analysis and Type Systems
, 1995
Abstract

Cited by 48 (1 self)
We establish a series of equivalences between type systems and control-flow analyses. Specifically, we take four type systems from the literature (involving simple types, subtypes and recursion) and conservatively extend them to reason about control-flow information. Similarly, we take four standard control-flow systems and conservatively extend them to reason about type consistency. Our main result is that we can match up the resulting type and control-flow systems such that we obtain pairs of equivalent systems, where the equivalence is with respect to both type and control-flow information. In essence, type systems and control-flow analysis can be viewed as complementary approaches for addressing questions of type consistency and control-flow. Recent and independent work by Palsberg and O'Keefe has addressed the same general question. Our work differs from theirs in two respects. First, they only consider what happens when control-flow systems are used to reason about types. In co...
Linear-time Subtransitive Control Flow Analysis
, 1997
Abstract

Cited by 42 (1 self)
We present a linear-time algorithm for bounded-type programs that builds a directed graph whose transitive closure gives exactly the results of the standard (cubic-time) Control-Flow Analysis (CFA) algorithm. Our algorithm can be used to list all function calls from all call sites in (optimal) quadratic time. More importantly, it can be used to give linear-time algorithms for CFA-consuming applications such as:
• effects analysis: find the side-effecting expressions in a program.
• k-limited CFA: for each call site, list the functions if there are only a few of them (≤ k) and otherwise output "many".
• called-once analysis: identify all functions called from only one call site.
1 Introduction
The control-flow graph of a program plays a central role in compilation: it identifies the block and loop structure in a program, a prerequisite for many code optimizations. For first-order languages, this graph can be directly constructed from a program because information about flow of ...
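The "listing in quadratic time" step is just reachability in the constructed graph: one breadth-first search per call-site node, each O(V + E). A small sketch, with an invented flow graph (the node names and edge list are hypothetical, not the paper's construction):

```python
from collections import defaultdict, deque

def calls_per_site(edges, call_sites, functions):
    # One BFS per call site; the function nodes reachable from a
    # call-site node are the functions that site may call.
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    result = {}
    for site in call_sites:
        seen = {site}
        queue = deque([site])
        while queue:
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        result[site] = sorted(seen & functions)
    return result

# Invented example: site c1 reaches f directly and g through f; c2 reaches g.
flows = [('c1', 'f'), ('f', 'g'), ('c2', 'g')]
print(calls_per_site(flows, ['c1', 'c2'], {'f', 'g'}))
```

With V nodes and E edges this is O(V · (V + E)) for all call sites, i.e. quadratic in the program size when the graph is linear-sized, which is the point of avoiding an explicit transitive closure.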
Safety Analysis versus Type Inference
 INFORMATION AND COMPUTATION
, 1995
Abstract

Cited by 36 (6 self)
Safety analysis is an algorithm for determining if a term in an untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. This ambition is also shared by algorithms for type inference. Safety analysis and type inference are based on rather different perspectives, however. Safety analysis is global in that it can only analyze a complete program. In contrast, type inference is local in that it can analyze pieces of a program in isolation. In this paper we prove that safety analysis is sound, relative to both a strict and a lazy operational semantics. We also prove that safety analysis accepts strictly more safe lambda terms than does type inference for simple types. The latter result demonstrates that global program analyses can be more precise than local ones.
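The classic witness for the "strictly more" claim is self-application, which can be shown directly in Python: the term below has no simple type (x would need a type t satisfying t = t → s), yet it is safe and simply reduces to the identity function.

```python
# (lambda x. x x)(lambda y. y): rejected by simple type inference,
# but safe -- it reduces to the identity function.
term = (lambda x: x(x))(lambda y: y)
print(term(42))  # -> 42
```

Any sound flow-based safety analysis can accept this term, while simple-type inference must reject it.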
On the Cubic Bottleneck in Subtyping and Flow Analysis
, 1997
Abstract

Cited by 32 (6 self)
A variety of program analysis methods have worst-case time complexity that grows cubically in the length of the program being analyzed. Cubic complexity typically arises in control flow analyses and the inference of recursive types (including object types). It is often said that such cubic performance cannot be improved because these analyses require "dynamic transitive closure". Here we prove linear-time reductions from the problem of determining membership for languages defined by 2-way nondeterministic pushdown automata (2NPDA) to problems of flow analysis and typability in the Amadio-Cardelli type system. An O(n³) algorithm was given for 2NPDA acceptability in 1968 and is still the best known. The reductions are factored through the problem of "monotone closure", and we propose linear-time reduction from monotone closure as a method of establishing "monotone closure hardness" for program analysis problems. A subcubic procedure for a monotone closure hard problem would imply a ...
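To see where the cubic cost comes from, consider maintaining reachability while edges arrive online, which is the "dynamic transitive closure" flavor the abstract alludes to (this sketch is our illustration, not the paper's reduction): each of up to O(n²) insertions may propagate new reachable nodes to every predecessor, giving O(n³) overall.

```python
from collections import defaultdict

class DynamicClosure:
    # Maintains reach[u] (the set of nodes reachable from u) under
    # online edge insertions. A single insertion can cost O(n^2) in
    # the worst case, which is the usual source of the cubic total.
    def __init__(self, nodes):
        self.reach = {u: {u} for u in nodes}
        self.preds = defaultdict(set)

    def add_edge(self, u, v):
        self.preds[v].add(u)
        self._propagate(u, self.reach[v])

    def _propagate(self, u, new):
        # push newly reachable nodes up through all predecessors
        added = new - self.reach[u]
        if not added:
            return
        self.reach[u] |= added
        for p in self.preds[u]:
            self._propagate(p, added)

dc = DynamicClosure(['a', 'b', 'c'])
dc.add_edge('a', 'b')
dc.add_edge('b', 'c')
print(sorted(dc.reach['a']))  # -> ['a', 'b', 'c']
```

A subcubic algorithm for problems of this shape is exactly what the paper's hardness results suggest would be surprising.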
Safety Analysis versus Type Inference for Partial Types
 Information Processing Letters
, 1992
Abstract

Cited by 30 (11 self)
Safety analysis is an algorithm for determining if a term in an untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. We prove that safety analysis accepts strictly more safe lambda terms than does type inference for Thatte's partial types.
1 Introduction
We will compare two techniques for analyzing the safety of terms in an untyped lambda calculus with constants, see figure 1. The safety we are concerned with is the absence of those runtime errors that arise from the misuse of constants. In this paper we consider just the two constants 0 and succ. They can be misused either by applying a number to an argument, or by applying succ to an abstraction. Safety is undecidable, so any analysis algorithm must reject some safe programs.

E ::= x | λx.E | E₁ E₂ | 0 | succ E
Figure 1: The lambda calculus.

One way of achieving a safety guarantee is to perform type inference (TI), because "well-typed programs cannot go wrong". Two examples of type ...
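The two misuses can be made concrete with a toy evaluator for the calculus of Figure 1 (the term encoding below is ours): the only runtime errors it can raise are applying a number to an argument and applying succ to an abstraction.

```python
def evaluate(e, env=None):
    # Terms: ('zero',), ('succ', e), ('var', x),
    #        ('lam', x, body), ('app', e1, e2)
    env = env or {}
    if e[0] == 'zero':
        return 0
    if e[0] == 'var':
        return env[e[1]]
    if e[0] == 'lam':
        return ('closure', e[1], e[2], env)
    if e[0] == 'succ':
        v = evaluate(e[1], env)
        if not isinstance(v, int):
            raise TypeError('succ applied to an abstraction')
        return v + 1
    # application E1 E2
    f = evaluate(e[1], env)
    a = evaluate(e[2], env)
    if isinstance(f, int):
        raise TypeError('a number applied to an argument')
    _, x, body, cenv = f
    return evaluate(body, {**cenv, x: a})

# succ ((lambda x. x) 0) is safe and evaluates to 1:
safe = ('succ', ('app', ('lam', 'x', ('var', 'x')), ('zero',)))
print(evaluate(safe))  # -> 1
```

By contrast, the term 0 0 hits the first error case; a safety analysis must reject it while accepting as many terms like `safe` as possible.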
Constraint Systems for Useless Variable Elimination
, 1998
Abstract

Cited by 21 (1 self)
A useless variable is one whose value contributes nothing to the final outcome of a computation. Such variables are unlikely to occur in human-produced code, but may be introduced by various program transformations. We would like to eliminate useless parameters from procedures and eliminate the corresponding actual parameters from their call sites. This transformation is the extension to higher-order programming of a variety of dead-code elimination optimizations that are important in compilers for first-order imperative languages. Shivers has presented such a transformation. We reformulate the transformation and prove its correctness. We believe that this correctness proof can be a model for proofs of other analysis-based transformations. We proceed as follows:
• We reformulate Shivers' analysis as a set of constraints; since the constraints are conditional inclusions, they can be solved using standard algorithms.
• We prove that any solution to the constraints is sound: that tw...
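Conditional inclusions of this general shape ("if v is useful then w is useful") are solvable by a standard worklist propagation; variables never marked useful are candidates for elimination. A minimal sketch with invented constraints (this illustrates the solving technique, not Shivers' analysis itself):

```python
def solve(seeds, implications):
    # seeds: variables known to be useful (e.g. the observable result);
    # implications: (premise, conclusion) pairs meaning
    # "if premise is useful then conclusion is useful".
    useful = set(seeds)
    worklist = list(seeds)
    while worklist:
        v = worklist.pop()
        for premise, conclusion in implications:
            if premise == v and conclusion not in useful:
                useful.add(conclusion)
                worklist.append(conclusion)
    return useful

# 'result' is observable; y feeds result, x feeds only the dead z.
useful = solve({'result'}, [('result', 'y'), ('z', 'x')])
print(sorted(useful))  # -> ['result', 'y']; x is a useless-variable candidate
```

Each implication fires at most once, so the solver runs in time linear in the number of constraints once premises are indexed, which is why reduction to conditional inclusions makes the analysis routine to implement.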
A New Approach to Control Flow Analysis
 Lecture
, 1998
Abstract

Cited by 16 (3 self)
We develop a control flow analysis algorithm for PCF based on game semantics. The analysis is closely related to Shivers' 0CFA analysis and the algorithm is shown to be cubic. The game semantics basis for the algorithm means that it can be naturally extended to handle strict languages and languages with imperative features. These extensions are discussed in the paper. We sketch the correctness proof for the algorithm. We also illustrate an algorithm for computing k-limited CFA.
Set Constraints for Destructive Array Update Optimization
, 1999
Abstract

Cited by 14 (1 self)
Destructive array update optimization is critical for writing scientific codes in functional languages. We present set constraints for an interprocedural update optimization that runs in polynomial time. This is a multi-pass optimization, involving interprocedural flow analyses for aliasing and liveness. We characterize and prove the soundness of these analyses using small-step operational semantics. We also prove that any sound liveness analysis induces a correct program transformation.
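The payoff of such an analysis can be sketched in a few lines (the `last_use` flag stands in for a fact the aliasing and liveness analyses would prove; the function is hypothetical): a functional array update must copy unless the old version is provably dead and unaliased, in which case it may mutate in place.

```python
def update(arr, i, v, last_use=False):
    # last_use=True models the analysis proving arr is dead and
    # unaliased after this point, licensing a destructive update.
    if last_use:
        arr[i] = v        # in place, O(1) extra space
        return arr
    out = list(arr)       # otherwise copy to preserve the old version
    out[i] = v
    return out

a = [1, 2, 3]
b = update(a, 0, 9)                  # copies: a is still live
c = update(b, 1, 9, last_use=True)   # in place: this is b's last use
print(a, c, b is c)  # -> [1, 2, 3] [9, 9, 3] True
```

Soundness of the liveness analysis is exactly what guarantees that switching from the copying branch to the destructive branch cannot be observed.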