Results 1–10 of 42
Multivariate Amortized Resource Analysis
2010
Cited by 47 (8 self)

Abstract
We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions, thus allowing bounds of the form m·n, which previously had to be grossly overestimated as m² + n². Our framework even encompasses bounds like ∑_{i,j≤n} m_i·m_j, where the m_i are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists, and to considerably improve bounds on other superlinear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition, which improves the accuracy of the analysis of composed programs when some or all of the components exhibit superlinear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in the form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones.
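The abstract's m·n example can be made concrete with a small sketch (hypothetical code, not taken from the paper): a two-argument procedure that compares every element of one list against every element of the other performs exactly m·n comparisons, a genuinely multivariate bound that any sum of unary functions f(m) + g(n) can only over-approximate.

```python
def count_common(xs, ys):
    """Count pairs (x, y) with x == y by exhaustive comparison.

    With m = len(xs) and n = len(ys), the comparison count is exactly
    m * n -- a multivariate bound; any bound of the shape f(m) + g(n)
    must over-approximate it, e.g. as m**2 + n**2.
    """
    comparisons = 0
    common = 0
    for x in xs:
        for y in ys:
            comparisons += 1
            if x == y:
                common += 1
    return common, comparisons
```

For instance, count_common([1, 2, 3], [2, 3, 4, 5]) performs 3 · 4 = 12 comparisons and finds 2 common values.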
Control-flow refinement and progress invariants for bound analysis
In PLDI, 2009
Cited by 43 (6 self)

Abstract
Symbolic complexity bounds help programmers understand the performance characteristics of their implementations. Existing work provides techniques for statically determining bounds of procedures with simple control-flow. However, procedures with nested loops or multiple paths through a single loop are challenging. In this paper we describe two techniques, control-flow refinement and progress invariants, that together enable estimation of precise bounds for procedures with nested and multi-path loops. Control-flow refinement transforms a multi-path loop into a semantically equivalent code fragment with simpler loops by making the structure of path interleaving explicit. We show that this enables non-disjunctive invariant generation tools to find a bound on many procedures for which previous techniques were unable to prove termination. Progress invariants characterize relationships between ...
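The refinement idea can be sketched as follows (a hypothetical example, not one from the paper): a loop whose iterations interleave two paths is rewritten into two consecutive single-path loops, each of which admits an obvious linear bound that a non-disjunctive invariant generator can find.

```python
def drain(a, b):
    """Multi-path loop: every iteration takes one of two paths."""
    steps = 0
    while a > 0 or b > 0:
        if a > 0:
            a -= 1      # path 1: consume from a
        else:
            b -= 1      # path 2: consume from b
        steps += 1
    return steps

def drain_refined(a, b):
    """Semantically equivalent refinement: the path interleaving is
    made explicit as two simple loops, bounded by a and b iterations
    respectively, giving the overall bound a + b."""
    steps = 0
    while a > 0:
        a -= 1
        steps += 1
    while b > 0:
        b -= 1
        steps += 1
    return steps
```

Both versions take the same number of steps on the same inputs; only the second exposes the bound to simple per-loop reasoning.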
The reachability-bound problem
In PLDI, 2010
Cited by 43 (9 self)

Abstract
We define the reachability-bound problem to be the problem of finding a symbolic worst-case bound on the number of times a given control location inside a procedure is visited, in terms of the inputs to that procedure. This has applications in bounding resources consumed by a program, such as time, memory, network traffic, and power, as well as in estimating quantitative properties (as opposed to Boolean properties) of data in programs, such as information leakage or uncertainty propagation. Our approach to solving the reachability-bound problem brings together two different techniques for reasoning about loops in an effective manner. One of these techniques is an abstract-interpretation-based iterative technique for computing precise disjunctive invariants (to summarize nested loops). The other is a non-iterative, proof-rules-based technique (for loop bound computation) that takes over the role of inductive reasoning, while deriving its power from the use of SMT solvers to reason about abstract loop-free fragments. Our solution to the reachability-bound problem allows us to compute precise symbolic complexity bounds for several loops in .NET base-class libraries for which earlier techniques fail. We also illustrate the precision of our algorithm for disjunctive invariant computation (which has more general applicability beyond the reachability-bound problem) on a set of benchmark examples.
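A minimal illustration of the problem statement (hypothetical code, assuming the control location is the inner-loop body): the answer sought is a symbolic count of visits to one location as a function of the procedure's input.

```python
def visits_to_L(n):
    """How often is control location L reached, in terms of input n?

    For this nested loop the precise symbolic answer is the
    triangular number n * (n + 1) // 2.
    """
    visits = 0
    i = 0
    while i < n:
        j = i
        while j < n:
            visits += 1  # control location L
            j += 1
        i += 1
    return visits
```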
COSTA: Design and implementation of a cost and termination analyzer for Java bytecode
In Lecture Notes in Computer Science, 2007
Cited by 35 (15 self)

Abstract
... interpretation-based cost and termination analyzer for Java bytecode. The system receives as input a bytecode program and (a choice of) a resource of interest, and tries to obtain an upper bound on the program's consumption of that resource. COSTA provides several non-trivial notions of cost, such as heap consumption, the number of bytecode instructions executed, and the number of calls to a specific method. Additionally, COSTA tries to prove termination of the bytecode program, which implies the boundedness of any resource consumption. Having cost and termination together is interesting, as both analyses share most of the machinery to, respectively, infer cost upper bounds and prove that the execution length is always finite (i.e., the program terminates). We report on experimental results which show that COSTA can deal with programs of realistic size and complexity, including programs which use Java libraries. To the best of our knowledge, this system provides for the first time evidence that resource usage analysis can be applied to a realistic object-oriented, bytecode programming language.
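The different notions of cost can be pictured with a hand-instrumented sketch (hypothetical Python standing in for Java bytecode, not COSTA's actual cost model): the same recursive procedure is measured under a call-count metric and an allocation metric, and its termination implies both counters stay finite.

```python
def build(n, cost):
    """Build a list of n zeros while charging two cost counters."""
    cost["calls"] += 1        # cost notion 1: calls to this method
    if n == 0:
        return []
    cost["allocs"] += 1       # cost notion 2: one cell allocated
    return [0] + build(n - 1, cost)

cost = {"calls": 0, "allocs": 0}
result = build(3, cost)
# symbolic upper bounds in terms of the input: calls <= n + 1, allocs <= n
```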
Amortized Resource Analysis with Polynomial Potential: A Static Inference of Polynomial Bounds for Functional Programs (Extended Version)
Cited by 26 (9 self)

Abstract
In 2003, Hofmann and Jost introduced a type system that uses a potential-based amortized analysis to infer bounds on the resource consumption of (first-order) functional programs. This analysis has been successfully applied to many standard algorithms but is limited to bounds that are linear in the size of the input. Here we extend this system to polynomial resource bounds. An automatic amortized analysis is used to infer these bounds for functional programs without further annotations if a maximal degree for the bounding polynomials is given. The analysis is generic in the resource and can obtain good bounds on heap-space, stack-space and time usage.
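A sketch of the kind of program the polynomial extension targets (hypothetical, assuming comparisons as the counted resource): insertion sort needs at most n(n−1)/2 comparisons, a quadratic bound that linear potential cannot express but a potential proportional to the number of element pairs can pay for.

```python
def insertion_sort(xs):
    """Sort xs, counting comparisons; at most n*(n-1)//2 of them."""
    out = []
    comparisons = 0
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            comparisons += 1
            i += 1
        if i < len(out):
            comparisons += 1  # the failing comparison that ends the scan
        out.insert(i, x)
    return out, comparisons
```

On input of length 3 the worst case is 3 · 2 / 2 = 3 comparisons, reached e.g. by an already-sorted list.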
Size-Change Termination, Monotonicity Constraints and Ranking Functions
Cited by 22 (4 self)

Abstract
Size-change termination involves deducing program termination based on the impossibility of infinite descent. To this end we may use a program abstraction in which transitions are described by monotonicity constraints over (abstract) variables. When only constraints of the form x > y′ and x ≥ y′ are allowed, we have size-change graphs, for which both theory and practice are now more evolved than for general monotonicity constraints. This work shows that it is possible to transfer some theory from the domain of size-change graphs to the general case, complementing and extending previous work on monotonicity constraints. Significantly, we provide a procedure to construct explicit global ranking functions from monotonicity constraints in singly-exponential time, which is better than what has been published so far even for size-change graphs. We also consider the integer domain, where general monotonicity constraints are essential because the domain is not well-founded.
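A hypothetical loop in the spirit of the abstract (not an example from the paper): each transition satisfies a monotonicity constraint — either y strictly decreases with x unchanged, or x strictly decreases while y is reset — and the explicit lexicographic ranking function (x, y) decreases on every transition, witnessing termination.

```python
def mc_loop(x, y):
    """Terminates because (x, y) decreases lexicographically:
    path 1 satisfies x' = x, y' < y; path 2 satisfies x' < x."""
    steps = 0
    while x > 0 and y >= 0:
        if y > 0:
            y -= 1        # constraint: y' < y, x' = x
        else:
            x -= 1        # constraint: x' < x
            y = x         # y may grow again; the ranking still drops
        steps += 1
    return steps
```

Note that y repeatedly grows back, so no single variable decreases on every step; only the lexicographic combination does.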
Amortised memory analysis using the depth of data structures
In Proceedings of ESOP 2009: European Symposium on Programming, LNCS 5502, 2009
Cited by 12 (0 self)

Abstract
Hofmann and Jost have presented a heap space analysis [1] that finds linear space bounds for many functional programs. It uses an amortised analysis: assigning hypothetical amounts of free space (called potential) to data structures in proportion to their sizes using type annotations. Constraints on these annotations in the type system ensure that the total potential assigned to the input is an upper bound on the total memory required to satisfy all allocations. We describe a related system for bounding the stack space requirements which uses the depth of data structures, by expressing potential in terms of maxima as well as sums. This is achieved by adding extra structure to typing contexts (inspired by O'Hearn's bunched typing [2]) to describe the form of the bounds. We also present the extra steps that must be taken to construct a typing during the analysis. Obtaining bounds on the resource requirements of programs can be crucial for ensuring that they enjoy reliability and security properties, particularly for use in ...
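The shift from sums to maxima can be illustrated with a small sketch (hypothetical, assuming binary trees encoded as nested pairs): heap-style measures add up over subtrees, while the stack usage of a depth-first traversal is governed by the maximum over subtrees, i.e. by the tree's depth rather than its size.

```python
# A binary tree is either None or a pair (left, right).

def size(t):
    """Heap-style measure: subtree contributions are summed."""
    if t is None:
        return 0
    left, right = t
    return 1 + size(left) + size(right)

def depth(t):
    """Stack-style measure: subtree contributions combine with max,
    matching a DFS whose deepest recursion follows one path."""
    if t is None:
        return 0
    left, right = t
    return 1 + max(depth(left), depth(right))
```

For a bushy tree the two measures diverge: size grows with the node count while depth can stay logarithmic.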
ABC: Algebraic Bound Computation for Loops
Cited by 11 (0 self)

Abstract
We present ABC, a software tool for automatically computing symbolic upper bounds on the number of iterations of nested program loops. The system combines static analysis of programs with symbolic summation techniques to derive loop invariant relations between program variables. Iteration bounds are obtained from the inferred invariants by replacing variables with bounds on their greatest values. We have successfully applied ABC to a large number of examples. The derived symbolic bounds express non-trivial polynomial relations over loop variables. We also report on results to automatically infer symbolic expressions over harmonic numbers as upper bounds on loop iteration counts.
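The summation step can be mimicked by hand on a toy nested loop (a hypothetical example, not from the tool's benchmark set): the inner loop runs i times on the i-th outer iteration, and summing that symbolically gives the closed form ∑_{i=1}^{n} i = n(n+1)/2.

```python
def nested_iterations(n):
    """Total inner-loop iterations; symbolically, sum_{i=1}^{n} i."""
    count = 0
    for i in range(1, n + 1):
        for _ in range(i):
            count += 1
    return count
```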
SPEED: Symbolic Complexity Bound Analysis (Invited Talk)
Cited by 11 (1 self)

Abstract
The SPEED project addresses the problem of computing symbolic computational complexity bounds of procedures in terms of their inputs. We discuss some of the challenges that arise and present various orthogonal and complementary techniques recently developed in the SPEED project for addressing these challenges.