Results 1–10 of 81
Interprocedural Pointer Alias Analysis
 ACM Transactions on Programming Languages and Systems, 1999
Cited by 100 (8 self)
Abstract: In this article, we describe approximation methods for computing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the following contributions: …
A Partial Evaluator for the Untyped Lambda Calculus
 Journal of Functional Programming, 1991
Cited by 93 (3 self)
Abstract: This article describes theoretical and practical aspects of an implemented self-applicable partial evaluator for the untyped …
Flow-Insensitive Interprocedural Alias Analysis in the Presence of Pointers
Cited by 67 (17 self)
Abstract: Dataflow analysis algorithms can be classified into two categories: flow-sensitive and flow-insensitive. To improve efficiency, flow-insensitive interprocedural analyses do not make use of the intraprocedural control-flow information associated with individual procedures. Since pointer-induced aliases can change within a procedure, applying known flow-insensitive analyses can result in either incorrect or overly conservative solutions. In this paper, we present a flow-insensitive dataflow analysis algorithm that computes interprocedural pointer-induced aliases. We improve the precision of our analysis by (1) making use of certain types of kill information that can be precomputed efficiently, and (2) computing aliases generated in each procedure instead of aliases holding at the exit of each procedure. We improve the efficiency of our algorithm by introducing a technique called deferred evaluation. Interprocedural analyses, including alias analysis, rely upon the program call graph (PCG) fo…
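The flow-insensitive idea in this abstract can be made concrete with a small sketch. This is not the paper's algorithm (it uses no kill information and no deferred evaluation); it is the generic Andersen-style baseline such analyses improve on: statements are treated as an unordered set of constraints and iterated to a fixpoint, so the result is safe but conservative.

```python
# Illustrative flow-insensitive points-to sketch -- NOT the paper's
# algorithm. Statement order never matters: the loop just re-applies
# every constraint until nothing changes.

def points_to(stmts):
    """stmts: list of (op, lhs, rhs) with op in {'addr', 'copy', 'load'}."""
    pts = {}  # variable -> set of abstract locations it may point to
    changed = True
    while changed:
        changed = False
        for op, lhs, rhs in stmts:
            if op == "addr":            # lhs = &rhs
                new = {rhs}
            elif op == "copy":          # lhs = rhs
                new = set(pts.get(rhs, set()))
            else:                       # 'load': lhs = *rhs
                new = set()
                for loc in pts.get(rhs, set()):
                    new |= pts.get(loc, set())
            cur = pts.setdefault(lhs, set())
            if not new <= cur:
                cur |= new
                changed = True
    return pts

def may_alias(pts, a, b):
    """Two pointers may alias if their points-to sets overlap."""
    return bool(pts.get(a, set()) & pts.get(b, set()))

# p and q both take the address of x; r copies p; s points only to y.
stmts = [("addr", "p", "x"), ("addr", "q", "x"),
         ("copy", "r", "p"), ("addr", "s", "y")]
pts = points_to(stmts)
```

Because statement order is ignored, `r` and `q` are reported as possible aliases even on paths where they could never overlap; recovering such precision is exactly what the paper's kill information is for.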
From ML to Ada: Strongly-Typed Language Interoperability via Source Translation
 1993
Cited by 65 (3 self)
Abstract: We describe a system that supports source-level integration of ML-like functional language code with ANSI C or Ada83 code. The system works by translating the functional code into type-correct, "vanilla" C or Ada; it offers simple, efficient, type-safe interoperation between new functional code components and "legacy" third-generation-language components. Our translator represents a novel synthesis of techniques including user-parameterized specification of primitive types and operators; removal of polymorphism by code specialization; removal of higher-order functions using closure datatypes and interpretation; and aggressive optimization of the resulting first-order code, which can be viewed as encoding the result of a closure analysis. Programs remain fully typed at every stage of the translation process, using only simple, standard type systems. Target code runs at speeds comparable to the output of current optimizing ML compilers, even though handicapped by a conservative garbage collector.
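The "removal of higher-order functions using closure datatypes and interpretation" mentioned here is classic defunctionalization. A minimal sketch of the technique, in Python rather than the paper's ML-to-Ada setting (all names are illustrative, not from the paper):

```python
# Defunctionalization sketch: each lambda in the source becomes a variant
# of a first-order closure datatype, and a single `apply` interpreter
# dispatches on the variant. Illustrative names only.
from dataclasses import dataclass

@dataclass
class AddN:          # stands for: lambda x: x + n   (captures n)
    n: int

@dataclass
class Compose:       # stands for: lambda x: f(g(x)) (captures f and g)
    f: object
    g: object

def apply(clo, arg):
    """First-order interpreter replacing every higher-order call site."""
    if isinstance(clo, AddN):
        return arg + clo.n
    if isinstance(clo, Compose):
        return apply(clo.f, apply(clo.g, arg))
    raise TypeError("unknown closure variant")

# compose(+2, +1) applied to 10 yields 13
inc_then_add2 = Compose(AddN(2), AddN(1))
```

After this transformation the program is entirely first-order, which is what lets the translator target "vanilla" C or Ada and optimize the result as ordinary first-order code.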
Closure Analysis in Constraint Form
 ACM Transactions on Programming Languages and Systems, 1995
Cited by 57 (5 self)
Abstract: … Bondorf's definition can be simplified considerably. To see why, consider the second component of CMap(E) × CEnv(E). This component is updated only in b(E₁ @ⁱ E₂)μρ and read only in b(xˡ)μρ. The key observation is that both these operations can be done on the first component instead. Thus, we can omit the use of CEnv(E). By rewriting Bondorf's definition according to this observation, we arrive at the following definition. As with Bondorf's definition, we assume that all labels are distinct.
Definition 2.3.1. We define m : E → CMap(E) → CMap(E):
  m(xˡ)μ = μ
  m(λˡx.E)μ = (m(E)μ) ⊔ ⟨[[l]] ↦ {l}⟩
  m(E₁ @ⁱ E₂)μ = (m(E₁)μ) ⊔ (m(E₂)μ) ⊔ ⨆_{l ∈ μ(var(E₁))} (⟨[[l]] ↦ μ(var(E₂))⟩ ⊔ ⟨[[@ⁱ]] ↦ μ(var(body(l)))⟩)
We can now do closure analysis of E by computing fix(m(E)). A key question is: is the simpler abstract interpretation equivalent to Bondorf's? We might attempt to prove this u…
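A runnable sketch of the fixpoint computation fix(m(E)) in this style of closure analysis. This is a generic constraint-based 0-CFA in the same spirit, not a transcription of the definition quoted above; the term encoding is an assumption made for the sketch:

```python
# Constraint-based closure analysis sketch: a cache maps each labelled
# subexpression (int label) and each variable (str) to the set of lambda
# labels that may flow there; the transfer function is iterated to a
# fixpoint. Assumed term encoding:
#   ("var", name, label) | ("lam", label, param, body) | ("app", label, fun, arg)
from collections import defaultdict

def label_of(e):
    return e[2] if e[0] == "var" else e[1]

def analyze(program):
    cache = defaultdict(set)   # label-or-variable -> set of lambda labels
    lams = {}                  # lambda label -> (param, body)
    changed = [True]

    def collect(e):            # record every lambda in the program
        if e[0] == "lam":
            lams[e[1]] = (e[2], e[3])
            collect(e[3])
        elif e[0] == "app":
            collect(e[2]); collect(e[3])

    def flow(dst, src):        # join src into cache[dst], noting changes
        if not src <= cache[dst]:
            cache[dst] |= src
            changed[0] = True

    def walk(e):
        if e[0] == "var":
            flow(label_of(e), cache[e[1]])
        elif e[0] == "lam":
            flow(e[1], {e[1]})              # a lambda flows to itself
            walk(e[3])
        else:                               # application
            walk(e[2]); walk(e[3])
            for l in set(cache[label_of(e[2])]):
                param, body = lams[l]
                flow(param, cache[label_of(e[3])])        # argument -> parameter
                flow(label_of(e), cache[label_of(body)])  # body result -> call site

    collect(program)
    while changed[0]:          # fix(m(E))
        changed[0] = False
        walk(program)
    return cache

# (lam^1 x. x^2) @^3 (lam^4 y. y^5): the application returns lambda 4
E = ("app", 3, ("lam", 1, "x", ("var", "x", 2)),
               ("lam", 4, "y", ("var", "y", 5)))
result = analyze(E)
```

The analysis concludes that only lambda 4 can reach the application labelled 3, mirroring how computing fix(m(E)) answers "which closures can this expression evaluate to?".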
The semantics of Scheme control-flow analysis
 School of Computer Science, Pittsburgh, 1991
Cited by 55 (3 self)
Abstract: This is a follow-on to my 1988 PLDI paper, “Control-Flow Analysis in Scheme” [9]. I use the method of abstract semantic interpretations to explicate the control-flow analysis technique presented in that paper. I begin with a denotational semantics for CPS Scheme. I then present an alternate semantics that precisely expresses the control-flow analysis problem. I abstract this semantics in a natural way, arriving at two different semantic interpretations giving approximate solutions to the flow analysis problem, each computable at compile time. The development of the final abstract semantics provides a clear, formal description of the analysis technique presented in “Control-Flow Analysis in Scheme.”
Global Tagging Optimization by Type Inference
 Proc. 1992 ACM Symposium on Lisp and Functional Programming
Cited by 50 (1 self)
Abstract: Tag handling accounts for a substantial amount of execution cost in latently typed languages such as Common LISP and Scheme, especially on architectures that provide no special hardware support. We present a tagging optimization algorithm based on type inference that is:
• global: it traces tag information across procedure boundaries, not only within procedures;
• efficient: it runs asymptotically in almost-linear time with excellent practical runtime behavior (e.g. 5,000-line Scheme programs are processed in a matter of seconds);
• useful: it eliminates at compile time between 60 and 95% of tag handling operations in non-numerical Scheme code (based on preliminary data);
• structural: it traces tag information in higher-order (procedure) values and especially in structured (e.g. list) values, where reportedly 80% of tag handling operations take place;
• well-founded: it is based on a formal static typing discipline with a special type Dynamic that has a robust and semantically sound “minimal typing” property;
• implementation-independent: no tag implementation technology is presupposed; the results are displayed as an explicitly typed source program and can be interfaced with compiler back ends of statically typed languages such as Standard ML;
• user-friendly: no annotations by the programmer are necessary; it operates on the program source, provides useful type information to the programmer in the spirit of ML’s type system, and makes explicit all tag handling operations necessary at runtime (and thus shows which ones can be eliminated without endangering correctness of program execution).
This agenda is accomplished by:
• maintaining and tracing only a minimum of information: no sets of abstract closures or cons points etc. that may reach a program point are kept, only their collective tagging information; no repeated analysis of program points is performed; …
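What "eliminating tag handling operations" means can be sketched concretely. In a latently typed implementation every primitive first tests a run-time tag; once inference proves an operand always carries a given tag, the test (and often the boxing) can be dropped. A toy model, not the paper's algorithm or representation (all names illustrative):

```python
# Toy model of tag handling in a latently typed runtime: values carry an
# explicit (tag, payload) pair, and every primitive checks tags first.

def make(tag, payload):
    return (tag, payload)

def checked_add(a, b):
    """Generic '+': must test both tags at run time."""
    if a[0] != "int" or b[0] != "int":
        raise TypeError("+ applied to non-integers")
    return make("int", a[1] + b[1])

def unchecked_add(a, b):
    """The same '+' after inference proved both operands are ints:
    the tag tests (and the boxing) are gone."""
    return a + b

# Before optimization: two tag tests and a boxing step per addition.
boxed = checked_add(make("int", 2), make("int", 3))

# After optimization: the compiler can emit the raw machine operation.
raw = unchecked_add(2, 3)
```

A global analysis matters because the proof that an operand "is always an int" usually depends on call sites in other procedures, which is exactly the tag information the algorithm traces across procedure boundaries.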
Efficient analyses for realistic offline partial evaluation
 Journal of Functional Programming, 1993
Cited by 48 (1 self)
Abstract: Based on Henglein’s efficient binding-time analysis for the lambda calculus (with constants and “fix”) [Hen91], we develop four efficient analyses for use in the preprocessing phase of Similix, a self-applicable partial evaluator for a higher-order subset of Scheme. The analyses developed in this paper are almost-linear in the size of the analysed program. (1) A flow analysis determines possible value flow between lambda-abstractions and function applications and between constructor applications and selector/predicate applications. The flow analysis is not particularly biased towards partial evaluation; the analysis corresponds to the closure analysis of [Bon91b]. (2) A (monovariant) binding-time analysis distinguishes static from dynamic values; the analysis treats both higher-order functions and partially static data structures. (3) A new is-used analysis, not present in [Bon91b], finds a non-minimal binding-time annotation which is “safe” in a certain way: a first-order value may only become static if its result is “needed” during specialization; this “poor man’s generalization” [Hol88] increases termination of specialization. (4) Finally, an evaluation-order dependency analysis ensures that the order of side-effects is preserved in the residual program. The four analyses are performed …
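The binding-time distinction of analysis (2) can be sketched for a first-order expression language: an expression is static (S) if it depends only on static inputs, otherwise dynamic (D). A minimal monovariant sketch, leaving out the higher-order functions and partially static structures the paper handles (the term encoding is an assumption):

```python
# Minimal monovariant binding-time analysis over first-order expressions:
#   ("const", n) | ("var", name) | ("op", name, e1, e2)
# S = known at specialization time, D = known only at run time.
# D is "contagious": an operation with any dynamic operand is dynamic.

def lub(a, b):
    """Least upper bound in the two-point lattice S < D."""
    return "D" if "D" in (a, b) else "S"

def bta(e, static_vars):
    if e[0] == "const":
        return "S"
    if e[0] == "var":
        return "S" if e[1] in static_vars else "D"
    # ("op", name, e1, e2): static only if both operands are static
    return lub(bta(e[2], static_vars), bta(e[3], static_vars))

# x + (y * 2): static when both x and y are static inputs
expr = ("op", "add", ("var", "x"),
                     ("op", "mul", ("var", "y"), ("const", 2)))
```

The specializer then evaluates S-annotated subexpressions at specialization time and residualizes the D-annotated ones; the is-used analysis (3) deliberately demotes some S annotations to D to improve termination.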
Constraint-Based Type Inference and Parametric Polymorphism
 Proc. of the 1st International Static Analysis Symposium, volume 864 of LNCS, 1994
Cited by 43 (5 self)
Abstract: Constraint-based analysis is a technique for inferring implementation types. Traditionally it has been described using mathematical formalisms. We explain it in a different and more intuitive way, as a flow problem. The intuition is facilitated by a direct correspondence between run-time and analysis-time concepts. Precise analysis of polymorphism is hard; several algorithms have been developed to cope with it. Focusing on parametric polymorphism and using the flow perspective, we analyze and compare these algorithms, for the first time directly characterizing when they succeed and fail. Our study of the algorithms led us to two conclusions. First, designing an algorithm that is either efficient or precise is easy, but designing an algorithm that is both efficient and precise is hard. Second, to achieve efficiency and precision simultaneously, the analysis effort must be actively guided towards the areas of the program with the highest payoff. We define a general class of algorithms that do this: the adaptive algorithms. The two most powerful of the five algorithms we study fall in this class.
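The "flow problem" reading of constraint-based analysis can be made concrete: each allocation seeds an implementation type at a variable, each assignment x = y contributes a flow edge y → x, and type sets propagate along edges to a fixpoint. A small monomorphic sketch, without any of the polymorphism handling the paper compares (variable names are illustrative):

```python
# Constraint-based analysis as a flow problem (monomorphic sketch):
# allocations seed types, assignments are flow edges, and a worklist
# propagates type sets to a fixpoint.
from collections import defaultdict

def infer(allocs, assigns):
    """allocs: list of (var, type); assigns: list of (src, dst) for dst = src."""
    types = defaultdict(set)
    for var, ty in allocs:           # run-time allocation <-> analysis-time seed
        types[var].add(ty)
    edges = defaultdict(list)
    for src, dst in assigns:         # run-time assignment <-> flow edge
        edges[src].append(dst)
    work = [v for v, _ in allocs]
    while work:                      # propagate until nothing changes
        v = work.pop()
        for d in edges[v]:
            if not types[v] <= types[d]:
                types[d] |= types[v]
                work.append(d)
    return types

# s is assigned both a Circle and a Square; t copies s.
allocs = [("a", "Circle"), ("b", "Square")]
assigns = [("a", "s"), ("b", "s"), ("s", "t")]   # s = a; s = b; t = s
result = infer(allocs, assigns)
```

In this monomorphic setting every call site of a polymorphic function would merge into one node, losing precision; the algorithms the paper compares differ precisely in how they split such nodes per instantiation.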