Results 1-10 of 24
Static Prediction of Heap Space Usage for First-order Functional Programs
in Symposium on Principles of Programming Languages (POPL’03), 2003
"... Categories and Subject Descriptors We show how to efficiently obtain linear a priori bounds on the heap space consumption of firstorder functional programs. The analysis takes space reuse by explicit deallocation into account and also furnishes an upper bound on the heap usage in the presence of ga ..."
Abstract

Cited by 175 (31 self)
Categories and Subject Descriptors We show how to efficiently obtain linear a priori bounds on the heap space consumption of first-order functional programs. The analysis takes space reuse by explicit deallocation into account and also furnishes an upper bound on the heap usage in the presence of garbage collection. It covers a wide variety of examples including, for instance, the familiar sorting algorithms for lists, including quicksort. The analysis relies on a type system with resource annotations. Linear programming (LP) is used to automatically infer derivations in this enriched type system. We also show that integral solutions to the linear programs derived correspond to programs that can be evaluated without any operating system support for memory management. The particular integer linear programs arising in this way are shown to be feasibly solvable under mild assumptions.
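The analysis above assigns each program a linear heap bound that accounts for cell reuse. As a rough illustration (not the paper's type system; the `Heap` class and unit-cost model below are invented), one can simulate cell-level accounting and observe that a list reversal with explicit deallocation never exceeds a linear high-water mark:

```python
# Minimal sketch of heap-cell accounting with explicit deallocation.
# The Heap class and unit-cost model are illustrative assumptions,
# not the paper's formal system.

class Heap:
    def __init__(self):
        self.live = 0   # currently allocated cells
        self.peak = 0   # high-water mark of live cells

    def alloc(self):
        self.live += 1
        self.peak = max(self.peak, self.live)

    def free(self):
        self.live -= 1

def reverse_dealloc(xs, heap):
    """Reverse a list, freeing each input cell as it is consumed.
    Every allocation is paid for by a matching free, so the function
    needs no fresh heap beyond the input's own cells."""
    acc = []
    while xs:
        heap.free()              # deallocate the consumed input cell
        x, xs = xs[0], xs[1:]
        heap.alloc()             # allocate the new output cell
        acc = [x] + acc
    return acc

heap = Heap()
n = 10
for _ in range(n):               # building the input costs n cells
    heap.alloc()
rev = reverse_dealloc(list(range(n)), heap)
# heap.peak == n: the peak never exceeds the linear bound of n cells.
```

An LP-based inference in the style described would derive the slope (here one cell per input element) automatically from per-constructor constraints, rather than by running the program.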
Cost analysis of Java bytecode
16th European Symposium on Programming, ESOP’07, Lecture Notes in Computer Science, 2007
"... Abstract. Cost analysis of Java bytecode is complicated by its unstructured control flow, the use of an operand stack and its objectoriented programming features (like dynamic dispatching). This paper addresses these problems and develops a generic framework for the automatic cost analysis of sequ ..."
Abstract

Cited by 77 (33 self)
Abstract. Cost analysis of Java bytecode is complicated by its unstructured control flow, the use of an operand stack and its object-oriented programming features (like dynamic dispatching). This paper addresses these problems and develops a generic framework for the automatic cost analysis of sequential Java bytecode. Our method generates cost relations which define at compile-time the cost of programs as a function of their input data size. To the best of our knowledge, this is the first approach to the automatic cost analysis of Java bytecode.
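To make the notion of a cost relation concrete, here is a hedged toy example (the per-block constants `C0` and `C1` are invented, and this is not the paper's framework): the cost of a simple counting loop written as a recurrence over the input size, alongside the closed form a cost analyser would aim to produce:

```python
# Toy cost relation for a counting loop. C0 and C1 are hypothetical
# per-block costs, not figures from the paper.

C0 = 2   # assumed cost of the loop-exit block
C1 = 5   # assumed cost of one loop iteration

def cost(n):
    """Cost relation: C(0) = C0, C(n) = C1 + C(n-1)."""
    if n == 0:
        return C0
    return C1 + cost(n - 1)

def closed_form(n):
    """Closed-form solution: C(n) = C0 + C1 * n."""
    return C0 + C1 * n

# The recurrence and the closed form agree on every input size.
assert all(cost(n) == closed_form(n) for n in range(100))
```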
Automatic Inference of Upper Bounds for Recurrence Relations in Cost Analysis
 In SAS, LNCS
"... Abstract. The classical approach to automatic cost analysis consists of two phases. Given a program and some measure of cost, we first produce recurrence relations (RRs) which capture the cost of our program in terms of the size of its input data. Second, we convert such RRs into closed form (i.e., ..."
Abstract

Cited by 42 (11 self)
Abstract. The classical approach to automatic cost analysis consists of two phases. Given a program and some measure of cost, we first produce recurrence relations (RRs) which capture the cost of our program in terms of the size of its input data. Second, we convert such RRs into closed form (i.e., without recurrences). Whereas the first phase has received considerable attention, with a number of cost analyses available for a variety of programming languages, the second phase has received comparatively little attention. In this paper we first study the features of RRs generated by automatic cost analysis and discuss why existing computer algebra systems are not appropriate for automatically obtaining closed form solutions nor upper bounds of them. Then we present, to our knowledge, the first practical framework for the fully automatic generation of reasonably accurate upper bounds of RRs originating from cost analysis of a wide range of programs. It is based on the inference of ranking functions and loop invariants and on partial evaluation.
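A minimal worked instance of the two phases (the recurrence is invented for illustration): a program summing over a shrinking range might yield the RR T(0) = 0, T(n) = n + T(n-1), whose closed-form upper bound n(n+1)/2 is exactly what such a framework would infer, with n itself serving as the ranking function:

```python
# Toy recurrence of the kind produced by cost analysis, with the
# closed-form bound an automatic solver would aim to infer.

def T(n):
    """Evaluate the RR T(0) = 0, T(n) = n + T(n-1) iteratively."""
    total = 0
    while n > 0:        # n decreases each step: a ranking function
        total += n
        n -= 1
    return total

def upper_bound(n):
    """Closed form n(n+1)/2; for this particular RR it is exact."""
    return n * (n + 1) // 2

# The closed form bounds (and here matches) the recurrence everywhere.
assert all(T(n) <= upper_bound(n) for n in range(200))
```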
Type-Based Amortised Heap-Space Analysis
In ESOP 2006, LNCS 3924, 2006
"... Abstract. We present a type system for a compiletime analysis of heapspace requirements of Java style objectoriented programs with explicit deallocation. Our system is based on an amortised complexity analysis: the data is arbitrarily assigned a potential related to its size and layout; allocation ..."
Abstract

Cited by 33 (9 self)
Abstract. We present a type system for a compile-time analysis of heap-space requirements of Java-style object-oriented programs with explicit deallocation. Our system is based on an amortised complexity analysis: the data is arbitrarily assigned a potential related to its size and layout; allocations must be "paid for" from this potential. The potential of each input then furnishes an upper bound on the heap space usage for the computation on this input. We successfully treat inheritance, downcast, update and aliasing. Example applications for the analysis include destination-passing style and doubly-linked lists. Type inference is explicitly not included; the contribution lies in the system itself. The paper elides most technical lemmas and proofs, even non-trivial ones, due to space limitations. A full version is available at the authors’ web pages.
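The amortised idea can be illustrated with a deliberately simplified sketch (the potential constant and `copy_list` function are invented, and the paper works with a static type system, not runtime checks): input data carries a numeric potential, every allocation is paid for out of it, and so the initial potential bounds heap usage:

```python
# Runtime stand-in for the amortised accounting the type system
# performs statically. The constant and example are assumptions.

POTENTIAL_PER_NODE = 1   # assumed annotation: one credit per input node

def copy_list(xs):
    """Copy a list of nodes; each fresh node costs one credit, paid
    from the potential of the matching input node."""
    potential = POTENTIAL_PER_NODE * len(xs)
    out = []
    for x in xs:
        assert potential >= 1, "allocation not covered by potential"
        potential -= 1       # pay for the freshly allocated node
        out.append(x)
    return out, potential

copied, leftover = copy_list([1, 2, 3, 4])
# leftover == 0: the inferred bound of n heap cells is tight here.
```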
Static Determination of Quantitative Resource Usage for Higher-Order Programs
In: 37th ACM Symp. on Principles of Prog. Langs., 2010
"... We describe a new automatic static analysis for determining upperbound functions on the use of quantitative resources for strict, higherorder, polymorphic, recursive programs dealing with possiblyaliased data. Our analysis is a variant of Tarjan’s manual amortised cost analysis technique. We use ..."
Abstract

Cited by 25 (5 self)
We describe a new automatic static analysis for determining upper-bound functions on the use of quantitative resources for strict, higher-order, polymorphic, recursive programs dealing with possibly-aliased data. Our analysis is a variant of Tarjan’s manual amortised cost analysis technique. We use a type-based approach, exploiting linearity to allow inference, and place a new emphasis on the number of references to a data object. The bounds we infer depend on the sizes of the various inputs to a program. They thus expose the impact of specific inputs on the overall cost behaviour. The key novel aspect of our work is that it deals directly with polymorphic higher-order functions without requiring source-level transformations that could alter resource usage. We thus obtain safe and accurate compile-time bounds. Our work is generic in that it deals with a variety of quantitative resources. We illustrate our approach with reference to dynamic memory allocations/deallocations, stack usage, and worst-case execution time, using metrics taken from a real implementation on a simple microcontroller platform that is used in safety-critical automotive applications.
Optimized Live Heap Bound Analysis
In VMCAI 03, volume 2575 of LNCS, 2001
"... This paper describes a general approach for optimized live heap space and live heap spacebound analyses for garbagecollected languages. ..."
Abstract

Cited by 19 (2 self)
This paper describes a general approach for optimized live heap space and live heap space-bound analyses for garbage-collected languages.
Automatic accurate stack space and heap space analysis for high-level languages
, 2000
"... This paper describes a general approach for automatic and accurate space and spacebound analyses for highlevel languages, considering stack space, heap allocation and live heap space usage of programs. The approach is based on program analysis and transformations and is fully automatic. The analys ..."
Abstract

Cited by 15 (7 self)
This paper describes a general approach for automatic and accurate space and space-bound analyses for high-level languages, considering stack space, heap allocation and live heap space usage of programs. The approach is based on program analysis and transformations and is fully automatic. The analyses produce accurate upper bounds in the presence of partially known input structures. The analyses have been implemented, and experimental results confirm the accuracy.
User-Definable Resource Usage Bounds Analysis for Java Bytecode
BYTECODE 2009, 2009
"... Automatic cost analysis of programs has been traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of userlevel properties (including for mobile c ..."
Abstract

Cited by 10 (2 self)
Automatic cost analysis of programs has traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.
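A hedged sketch of the annotation idea (the decorator name, resource names, and costs are all invented, and the actual system works statically on Java bytecode rather than by runtime counting): the programmer declares the basic consumption of certain operations, and the analysis bounds whole-program usage in terms of input size:

```python
# Runtime stand-in for resource annotations. The static analysis
# would derive the bound symbolically; here we only count.

COST = {}   # resource name -> accumulated usage

def consumes(resource, amount):
    """Hypothetical annotation: each call spends `amount` units
    of `resource`."""
    def wrap(fn):
        def inner(*args, **kwargs):
            COST[resource] = COST.get(resource, 0) + amount
            return fn(*args, **kwargs)
        return inner
    return wrap

@consumes("sms_sent", 1)
def send_sms(msg):
    pass

def notify_all(users):
    for u in users:
        send_sms("hello " + u)

# An analysis in the style above would infer: sms_sent <= len(users).
notify_all(["ann", "bob", "cy"])
```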
Strengthening invariants for efficient computation
in Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages, 2001
"... This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with p ..."
Abstract

Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
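A classic finite-differencing instance in the spirit of the abstract (a textbook example, not taken from the paper): rather than recomputing sum(xs) after every append at O(n) cost, the invariant total == sum(xs) is turned into maintained state and updated in O(1):

```python
# Finite differencing: maintain an invariant incrementally instead
# of recomputing it from scratch on every update.

xs, total = [], 0        # invariant: total == sum(xs)

def append_incremental(x):
    """Append x while maintaining the running sum incrementally."""
    global total
    xs.append(x)
    total += x           # O(1) update in place of an O(n) sum(xs)

for v in [3, 1, 4, 1, 5]:
    append_incremental(v)

# total tracks sum(xs) without any full recomputation.
```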
Towards Execution Time Estimation in Abstract Machine-Based Languages
, 2008
"... Abstract machines provide a certain separation between platformdependent and platformindependent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation and the bytecode is left largely architecture independent. Taking ..."
Abstract

Cited by 4 (4 self)
Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture-independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
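The two-phase scheme can be sketched as follows (the instruction names and timings are invented; real calibration would measure the abstract machine on the target platform): a one-time calibration yields per-instruction time bounds, and a program's cost is then bounded by summing over its instruction trace:

```python
# Sketch of calibration-table-based timing. Instruction names and
# nanosecond bounds are illustrative assumptions only.

CALIBRATION = {                  # instr -> (lower_ns, upper_ns)
    "call":    (30, 50),
    "unify":   (10, 25),
    "proceed": (5, 10),
}

def time_bounds(trace):
    """Return (lower, upper) execution-time bounds for a trace of
    abstract-machine instructions, using the calibrated constants."""
    lo = sum(CALIBRATION[i][0] for i in trace)
    hi = sum(CALIBRATION[i][1] for i in trace)
    return lo, hi

lo, hi = time_bounds(["call", "unify", "unify", "proceed"])
# Porting to a new platform only re-runs calibration, not the analysis.
```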