Results 1–10 of 27
Multivariate Amortized Resource Analysis
, 2010
Abstract

Cited by 45 (5 self)
We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions, thus allowing bounds of the form m·n, which previously had to be grossly overestimated by m² + n². Our framework even encompasses bounds like ∑_{i,j≤n} m_i·m_j, where the m_i are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists, and to considerably improve bounds on other superlinear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition, which improves the accuracy of the analysis of composed programs when some or all of the components exhibit superlinear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in the form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones.
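As a hypothetical illustration of why multivariate bounds matter (the function and step counter below are ours, not from the paper): a two-argument traversal whose true worst-case cost is m·n steps cannot be expressed as a sum f(m) + g(n) of unary functions without gross overestimation, which is exactly the gap the multivariate analysis closes.

```python
def matching_pairs(xs, ys):
    """Count index pairs (i, j) with xs[i] == ys[j].

    The inner loop body runs exactly len(xs) * len(ys) times, so the
    true cost is m*n steps. A bound assembled from unary functions of
    each argument alone must overestimate this, e.g. by m**2 + n**2.
    """
    steps = 0
    count = 0
    for x in xs:
        for y in ys:
            steps += 1  # one unit of cost per comparison
            if x == y:
                count += 1
    return count, steps
```

Running `matching_pairs([1, 2, 3], [2, 3, 4])` performs 9 = 3·3 comparisons and finds 2 matching pairs.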
Inferring cost equations for recursive, polymorphic and higher-order functional programs
 In Proceedings of the 15th International Workshop on Implementation of Functional Languages, IFL’03
, 2004
Abstract

Cited by 42 (5 self)
This paper presents a type-based analysis for inferring size and cost equations for recursive, higher-order and polymorphic functional programs without requiring user annotations or unusual syntax. Our type reconstruction algorithm is capable of inferring cost equations for a subset of recursive programs whose costs can be expressed using primitive recursion. We illustrate the approach with reference to some standard examples of recursive programs.
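As a hypothetical example of the kind of cost equation such an analysis produces (our own illustration, not taken from the paper): for a primitive-recursive list append, the recurrence cost([]) = 1 and cost(x:xs) = 1 + cost(xs) solves to the closed form cost(xs) = len(xs) + 1, which we can check by threading a step counter through the recursion.

```python
def append(xs, ys, cost=0):
    """Primitive-recursive append, threading an explicit step counter.

    Cost recurrence: cost([], ys) = 1; cost(x:xs, ys) = 1 + cost(xs, ys).
    Closed-form cost equation: cost(xs, ys) = len(xs) + 1.
    """
    if not xs:
        return ys, cost + 1          # base case: one step
    rest, c = append(xs[1:], ys, cost)
    return [xs[0]] + rest, c + 1     # recursive case: one extra step
```

For `append([1, 2, 3], [4])` the counter reports 4 steps, matching len(xs) + 1 = 3 + 1.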
Efficient First Order Functional Program Interpreter With Time Bound Certifications
, 2000
Abstract

Cited by 34 (13 self)
We demonstrate that the class of first-order functional programs over lists which terminate by multiset path ordering and admit a polynomial quasi-interpretation is exactly the class of functions computable in polynomial time. The interest of this result lies in (i) the simplicity of the conditions on programs to certify their complexity, (ii) the fact that an important class of natural programs is captured, and (iii) potential applications to program optimizations.

1 Introduction

This paper is part of a general investigation of the implicit complexity of a specification. To illustrate what we mean, we write below the recursive rules that compute the longest common subsequence of two words. More precisely, given two strings u = u_1 … u_m and v = v_1 … v_n of {0, 1}*, a common subsequence of length k is defined by two sequences of indices i_1 < … < i_k and j_1 < … < j_k satisfying u_{i_q} = v_{j_q}.

lcs(ε, y) → 0
lcs(x, ε) → 0
lcs(i(x), i(y)) → lcs(x, y) + 1
lcs(i(...
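The rewrite rules above translate directly into a first-order functional program. The following sketch fills in the cases elided by the truncated abstract with the standard recursive definition of longest common subsequence; that completion is our assumption, not the paper's text.

```python
def lcs(u, v):
    """Length of the longest common subsequence of two sequences.

    Mirrors the rewrite rules: lcs(eps, y) -> 0, lcs(x, eps) -> 0,
    lcs(i(x), i(y)) -> lcs(x, y) + 1 when the head symbols match.
    The non-matching case (max over dropping one head) follows the
    standard definition, assumed for the rules elided in the abstract.
    """
    if not u or not v:
        return 0
    if u[0] == v[0]:
        return lcs(u[1:], v[1:]) + 1
    return max(lcs(u[1:], v), lcs(u, v[1:]))
```

This naive recursion takes exponential time; the paper's point is that such definitions, when they terminate by multiset path ordering and admit a polynomial quasi-interpretation, denote polynomial-time computable functions.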
Analysing the Implicit Complexity of Programs
, 2000
Abstract

Cited by 27 (9 self)
We construct a termination ordering, called the light multiset path ordering (LMPO), which is a restriction of the multiset path ordering. We establish that the class of programs based on rewriting rules on lists that terminate by LMPO characterises exactly the functions computable in polynomial time.
Static Determination of Quantitative Resource Usage for Higher-Order Programs
 In Proceedings of the 37th ACM Symposium on Principles of Programming Languages
, 2010
Abstract

Cited by 26 (6 self)
We describe a new automatic static analysis for determining upper-bound functions on the use of quantitative resources for strict, higher-order, polymorphic, recursive programs dealing with possibly-aliased data. Our analysis is a variant of Tarjan’s manual amortised cost analysis technique. We use a type-based approach, exploiting linearity to allow inference, and place a new emphasis on the number of references to a data object. The bounds we infer depend on the sizes of the various inputs to a program. They thus expose the impact of specific inputs on the overall cost behaviour. The key novel aspect of our work is that it deals directly with polymorphic higher-order functions without requiring source-level transformations that could alter resource usage. We thus obtain safe and accurate compile-time bounds. Our work is generic in that it deals with a variety of quantitative resources. We illustrate our approach with reference to dynamic memory allocations/deallocations, stack usage, and worst-case execution time, using metrics taken from a real implementation on a simple microcontroller platform that is used in safety-critical automotive applications.
A LargeScale Experiment in Executing Extracted Programs
Abstract

Cited by 9 (2 self)
It is a well-known fact that algorithms are often hidden inside mathematical proofs. If these proofs are formalized inside a proof assistant, then a mechanism called extraction can generate the corresponding programs automatically. Previous work has focused on the difficulties in obtaining a program from a formalization of the Fundamental Theorem of Algebra inside the Coq proof assistant. In theory, this program allows one to compute approximations of roots of polynomials. However, as we show in this work, there is currently a big gap between theory and practice. We study the complexity of the extracted program and analyze the reasons for its inefficiency, showing that this is a direct consequence of the approach used throughout the formalization.
Naïve computational type theory
 In Proof and System Reliability, Proceedings of the International Summer School Marktoberdorf, July 24 to August 5, 2001, volume 62 of NATO Science Series III
, 2002