Results 1–10 of 17
Efficient First Order Functional Program Interpreter With Time Bound Certifications
, 2000
"... We demonstrate that the class of rst order functional programs over lists which terminate by multiset path ordering and admit a polynomial quasiinterpretation, is exactly the class of function computable in polynomial time. The interest of this result lies (i) on the simplicity of the conditions on ..."
Abstract

Cited by 25 (10 self)
We demonstrate that the class of first-order functional programs over lists which terminate by multiset path ordering and admit a polynomial quasi-interpretation is exactly the class of functions computable in polynomial time. The interest of this result lies (i) in the simplicity of the conditions on programs to certify their complexity, (ii) in the fact that an important class of natural programs is captured, and (iii) in potential applications to program optimizations. 1 Introduction This paper is part of a general investigation into the implicit complexity of a specification. To illustrate what we mean, we write below the recursive rules that compute the longest common subsequence of two words. More precisely, given two strings u = u_1 … u_m and v = v_1 … v_n over the alphabet {0, 1}, a common subsequence of length k is defined by two sequences of indices i_1 < … < i_k and j_1 < … < j_k satisfying u_{i_q} = v_{j_q}. lcs(ε, y) → 0; lcs(x, ε) → 0; lcs(i(x), i(y)) → lcs(x, y) + 1; lcs(i(...
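The rewrite rules quoted in the abstract can be sketched as an ordinary recursive function. The following is our illustration, not the paper's interpreter; memoization via `lru_cache` stands in for the polynomial bound that the quasi-interpretation certifies, since the recursion only ever visits pairs of suffixes of the two inputs:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lcs(u, v):
    # lcs(eps, y) -> 0 and lcs(x, eps) -> 0
    if not u or not v:
        return 0
    # lcs(i(x), i(y)) -> lcs(x, y) + 1 when both heads are the same letter i
    if u[0] == v[0]:
        return lcs(u[1:], v[1:]) + 1
    # remaining case: drop one head on either side and take the better result
    return max(lcs(u[1:], v), lcs(u, v[1:]))

print(lcs("0110", "1010"))  # -> 3
```

Without memoization the same rules run in exponential time, which is exactly the gap between a naive reading of the specification and its implicit polynomial-time complexity.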
Analysing the Implicit Complexity of Programs
, 2000
"... We construct a termination ordering, called light multiset path ordering (LMPO), which is a restriction of the multiset path ordering. We establish that the class of programs based on rewriting rules on lists which is terminating by LMPO, characterises exactly the functions computable in polynomial ..."
Abstract

Cited by 16 (7 self)
We construct a termination ordering, called the light multiset path ordering (LMPO), which is a restriction of the multiset path ordering. We establish that the class of programs based on rewriting rules over lists that terminate by LMPO characterises exactly the functions computable in polynomial time.
Multivariate Amortized Resource Analysis
, 2010
"... We study the problem of automatically analyzing the worstcase resource usage of procedures with several arguments. Existing automatic analyses based on amortization, or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. ..."
Abstract

Cited by 14 (3 self)
We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions, thus allowing bounds of the form m·n which previously had to be grossly overestimated by m² + n². Our framework even encompasses bounds like ∑_{i,j≤n} m_i·m_j, where the m_i are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists, and to considerably improve bounds on other superlinear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition, which improves the accuracy of the analysis of composed programs when some or all of the components exhibit superlinear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in the form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones.
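To see why a genuinely multivariate bound matters, consider naive duplicate removal from a list of lists, one of the operations the abstract mentions. The sketch below is our own illustration, not code from the paper: each of the O(n²) membership tests compares two inner lists, and that comparison costs up to the length of the entries involved, giving a worst case of the shape ∑_{i,j≤n} m_i·m_j that no sum of unary functions of the individual sizes can express tightly:

```python
def dedup(xs):
    # keep the first occurrence of each entry; `x not in out` compares
    # x against up to n earlier entries, and comparing two inner lists
    # element-wise costs up to the length of the shorter one -- so the
    # worst-case cost has the multivariate shape sum_{i,j} m_i * m_j
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return out

print(dedup([[1, 2], [3], [1, 2]]))  # -> [[1, 2], [3]]
```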
Static Determination of Quantitative Resource Usage for Higher-Order Programs
 In: 37th ACM Symp. on Principles of Prog. Langs.
, 2010
"... We describe a new automatic static analysis for determining upperbound functions on the use of quantitative resources for strict, higherorder, polymorphic, recursive programs dealing with possiblyaliased data. Our analysis is a variant of Tarjan’s manual amortised cost analysis technique. We use ..."
Abstract

Cited by 10 (4 self)
We describe a new automatic static analysis for determining upper-bound functions on the use of quantitative resources for strict, higher-order, polymorphic, recursive programs dealing with possibly-aliased data. Our analysis is a variant of Tarjan's manual amortised cost analysis technique. We use a type-based approach, exploiting linearity to allow inference, and place a new emphasis on the number of references to a data object. The bounds we infer depend on the sizes of the various inputs to a program. They thus expose the impact of specific inputs on the overall cost behaviour. The key novel aspect of our work is that it deals directly with polymorphic higher-order functions without requiring source-level transformations that could alter resource usage. We thus obtain safe and accurate compile-time bounds. Our work is generic in that it deals with a variety of quantitative resources. We illustrate our approach with reference to dynamic memory allocations/deallocations, stack usage, and worst-case execution time, using metrics taken from a real implementation on a simple microcontroller platform that is used in safety-critical automotive applications.
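For readers unfamiliar with Tarjan-style amortised cost analysis, the textbook two-list queue illustrates the pattern the abstract refers to (this is our generic example, not the paper's analysis): cheap operations bank credit that pays for the occasional expensive one.

```python
class Queue:
    # Two-list queue: enqueue pushes on `back`; dequeue pops from
    # `front`, reversing `back` only when `front` is empty.  Each
    # element is moved from back to front at most once, so although a
    # single dequeue can take O(n), the amortised cost per operation
    # is O(1) -- the kind of guarantee a credit/potential argument in
    # Tarjan's style makes precise.
    def __init__(self):
        self.front, self.back = [], []

    def enqueue(self, x):
        self.back.append(x)

    def dequeue(self):
        if not self.front:
            self.front, self.back = self.back[::-1], []
        return self.front.pop()
```

A type system that tracks such credits in types is one way to turn this manual argument into automatic inference.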
A LargeScale Experiment in Executing Extracted Programs
"... It is a wellknown fact that algorithms are often hidden inside mathematical proofs. If these proofs are formalized inside a proof assistant, then a mechanism called extraction can generate the corresponding programs automatically. Previous work has focused on the difficulties in obtaining a program ..."
Abstract

Cited by 7 (2 self)
It is a well-known fact that algorithms are often hidden inside mathematical proofs. If these proofs are formalized inside a proof assistant, then a mechanism called extraction can generate the corresponding programs automatically. Previous work has focused on the difficulties in obtaining a program from a formalization of the Fundamental Theorem of Algebra inside the Coq proof assistant. In theory, this program allows one to compute approximations of the roots of polynomials. However, as we show in this work, there is currently a big gap between theory and practice. We study the complexity of the extracted program and analyze the reasons for its inefficiency, showing that it is a direct consequence of the approach used throughout the formalization.
Writing Constructive Proofs Yielding Efficient Extracted Programs
"... The NuPRL system [3] was designed for interactive writing of machinechecked constructive proofs and for extracting algorithms from the proofs. The extracted algorithms are guaranteed to be correct * which makes it possible to use NuPRL as a programming language with builtin verification[1,5,7,8,9, ..."
Abstract

Cited by 6 (0 self)
The NuPRL system [3] was designed for interactive writing of machine-checked constructive proofs and for extracting algorithms from the proofs. The extracted algorithms are guaranteed to be correct, which makes it possible to use NuPRL as a programming language with built-in verification [1,5,7,8,9,10]. However, it turned out that proofs written without algorithmic efficiency in mind often produce very inefficient algorithms: exponential and double-exponential ones for problems that can be solved in polynomial time. In this paper we present...
Naïve computational type theory
 Proof and System-Reliability, Proceedings of International Summer School Marktoberdorf, July 24 to August 5, 2001, volume 62 of NATO Science Series III
, 2002
"... The basic concepts of type theory are fundamental to computer science, logic and mathematics. Indeed, the language of type theory connects these regions of science. It plays a role in computing and information science akin to that of set theory in pure mathematics. There are many excellent accounts ..."
Abstract

Cited by 5 (1 self)
The basic concepts of type theory are fundamental to computer science, logic and mathematics. Indeed, the language of type theory connects these regions of science. It plays a role in computing and information science akin to that of set theory in pure mathematics. There are many excellent accounts of the basic ideas of type theory, especially at the interface of computer science and logic — specifically, in the literature of programming languages, semantics, formal methods and automated reasoning. Most of these are very technical, dense with formulas, inference rules, and computation rules. Here we follow the example of the mathematician Paul Halmos, who in 1960 wrote a 104-page book called Naïve Set Theory intended to make the subject accessible to practicing mathematicians. His book served many generations well. This article follows the spirit of Halmos' book and introduces type theory without recourse to precise axioms and inference rules, and with a minimum of formalism. I start by paraphrasing the preface to Halmos' book. The sections of this article follow his chapters closely. Every computer scientist agrees that every computer scientist must know some type theory; the disagreement begins in trying to decide how much is some. This article contains my partial answer to that question. The purpose of the article is to tell the beginning student of advanced computer science the basic type-theoretic facts of life, and to do so with a minimum of philosophical discourse and logical formalism. The point throughout is that of a prospective computer scientist eager to study programming languages, or database systems, or computational complexity theory, or distributed systems or information discovery. In type theory, “naïve” and “formal” are contrasting words. The present treatment might best be described as informal type theory from a naïve point of view. The concepts are very general and very abstract; therefore they may
PURRS: Towards Computer Algebra Support for Fully Automatic Worst-Case Complexity Analysis
, 2005
"... Abstract. Fully automatic worstcase complexity analysis has a number of applications in computerassisted program manipulation. A classical and powerful approach to complexity analysis consists in formally deriving, from the program syntax, a set of constraints expressing bounds on the resources re ..."
Abstract

Cited by 3 (0 self)
Fully automatic worst-case complexity analysis has a number of applications in computer-assisted program manipulation. A classical and powerful approach to complexity analysis consists in formally deriving, from the program syntax, a set of constraints expressing bounds on the resources required by the program, which are then solved, possibly applying safe approximations. In several interesting cases, these constraints take the form of recurrence relations. While techniques for solving recurrences are known and implemented in several computer algebra systems, these do not completely fulfill the needs of fully automatic complexity analysis: they only deal with a somewhat restricted class of recurrence relations, sometimes require user intervention, or are restricted to the computation of exact solutions that are often so complex as to be unmanageable, and thus useless in practice. In this paper we briefly describe PURRS, a system and software library aimed at providing all the computer algebra services needed by applications performing or exploiting the results of worst-case complexity analyses. The capabilities of the system are illustrated by means of examples derived from the analysis of programs written in a domain-specific functional programming language for real-time embedded systems.
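The kind of service such a library provides can be sketched with SymPy's recurrence solver (a stand-in for illustration only; PURRS itself is a separate C++ library). The cost of a loop whose n-th iteration does n units of work satisfies T(n) = T(n-1) + n, and the solver turns that constraint into a closed-form worst-case bound:

```python
from sympy import Function, rsolve, simplify, symbols

n = symbols('n', integer=True)
T = Function('T')

# Cost recurrence derived from the program: T(n) = T(n-1) + n, T(0) = 0
closed_form = rsolve(T(n) - T(n - 1) - n, T(n), {T(0): 0})
print(simplify(closed_form))  # equivalent to n*(n+1)/2, i.e. a Theta(n^2) bound
```

As the abstract notes, exact solutions like this one can quickly become unmanageable for richer recurrence classes, which is why safe approximation is part of the design.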