Results 1 - 10 of 13
Static Prediction of Heap Space Usage for First-order Functional Programs
 in Symposium on Principles of Programming Languages (POPL'03)
, 2003
Abstract

Cited by 130 (24 self)
We show how to efficiently obtain linear a priori bounds on the heap space consumption of first-order functional programs. The analysis takes space reuse by explicit deallocation into account and also furnishes an upper bound on the heap usage in the presence of garbage collection. It covers a wide variety of examples including, for instance, the familiar sorting algorithms for lists, including quicksort. The analysis relies on a type system with resource annotations. Linear programming (LP) is used to automatically infer derivations in this enriched type system. We also show that integral solutions to the linear programs derived correspond to programs that can be evaluated without any operating system support for memory management. The particular integer linear programs arising in this way are shown to be feasibly solvable under mild assumptions.
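The space-reuse claim in this abstract can be made concrete with a toy allocator (a Python sketch of the idea only, not the paper's type-and-LP analysis): when a destructive operation frees each cell before allocating its replacement, a free list serves every new allocation, so the bound stays linear in the input size.

```python
# Toy cell allocator: explicit deallocation feeds a free list,
# so reused cells never count against the heap bound.
class Heap:
    def __init__(self):
        self.from_os = 0      # cells ever requested from the runtime
        self.free = 0         # cells currently on the free list

    def alloc(self):
        if self.free > 0:
            self.free -= 1    # reuse a previously freed cell
        else:
            self.from_os += 1 # genuinely fresh cell

    def dealloc(self):
        self.free += 1

def build(heap, n):
    for _ in range(n):
        heap.alloc()          # one cons cell per list element

def reverse_destructive(heap, n):
    for _ in range(n):
        heap.dealloc()        # free the old cons cell ...
        heap.alloc()          # ... and build the reversed one from the free list

h = Heap()
build(h, 1000)
reverse_destructive(h, 1000)
print(h.from_os)              # -> 1000: reversal needs no fresh cells
```

Without the free-list reuse the same program would request 2000 cells, which is the kind of difference the resource annotations track.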
Speeding Up The Computations On An Elliptic Curve Using Addition-Subtraction Chains
 Theoretical Informatics and Applications
, 1990
Abstract

Cited by 100 (4 self)
We show how to compute x^k using multiplications and divisions. We use this method in the context of elliptic curves, for which a law exists with the property that division has the same cost as multiplication. Our best algorithm is 11.11% faster than the ordinary binary algorithm and speeds up accordingly the factorization and primality testing algorithms using elliptic curves.

1. Introduction. Recent algorithms used in primality testing and integer factorization make use of elliptic curves defined over finite fields or Artinian rings (cf. Section 2). One can define over these sets an abelian law. As a consequence, one can transpose over the corresponding groups all the classical algorithms that were designed over Z/NZ. In particular, one has the analogue of the p − 1 factorization algorithm of Pollard [29, 5, 20, 22], the Fermat-like primality testing algorithms [1, 14, 21, 26] and the public key cryptosystems based on RSA [30, 17, 19]. The basic operation performed on an elli...
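The core idea can be sketched with the well-known non-adjacent form (NAF), one standard addition-subtraction chain (this is an illustration of the technique, not necessarily the paper's exact chain construction). When division costs the same as multiplication, rewriting the exponent with signed digits cuts the expected number of nonzero digits from 1/2 to 1/3 of the bit length, which accounts for a speedup of the order the abstract quotes.

```python
def naf(k):
    """Non-adjacent form of k: digits in {-1, 0, 1}, least significant first."""
    digits = []
    while k > 0:
        if k % 2 == 1:
            d = 2 - (k % 4)   # +1 or -1, chosen so the next two bits are 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def pow_naf(x, k, mul, inv):
    """Compute x^k left-to-right using multiplications and divisions (via inv)."""
    result = None             # None stands for the group identity
    xinv = inv(x)
    for d in reversed(naf(k)):
        if result is not None:
            result = mul(result, result)          # square
        if d == 1:
            result = x if result is None else mul(result, x)
        elif d == -1:
            result = xinv if result is None else mul(result, xinv)
    return result

# Usage: modular arithmetic stands in for the elliptic-curve group law,
# with the modular inverse playing the role of cheap division.
p = 1_000_003
print(pow_naf(5, 842, lambda a, b: a * b % p, lambda a: pow(a, p - 2, p)))
print(pow(5, 842, p))  # same value
```

On an actual curve `inv` would be point negation, which is essentially free, so the cost model of the paper applies directly.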
Random Mapping Statistics
 In Advances in Cryptology
, 1990
Abstract

Cited by 78 (6 self)
Random mappings from a finite set into itself are either a heuristic or an exact model for a variety of applications in random number generation, computational number theory, cryptography, and the analysis of algorithms at large. This paper introduces a general framework in which the analysis of about twenty characteristic parameters of random mappings is carried out: These parameters are studied systematically through the use of generating functions and singularity analysis. In particular, an open problem of Knuth is solved, namely that of finding the expected diameter of a random mapping. The same approach is applicable to a larger class of discrete combinatorial models and possibilities of automated analysis using symbolic manipulation systems ("computer algebra") are also briefly discussed.
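One of the characteristic parameters analysed in the paper can be checked empirically (a simulation sketch only; the paper derives the constants analytically by singularity analysis). For a uniformly random function on n points, the expected "rho length" (tail plus cycle reached by iterating from a random start) grows like sqrt(pi·n/2).

```python
import math
import random

def rho_length(f, x0):
    """Number of distinct points on the path x0, f(x0), f(f(x0)), ... before it repeats."""
    seen = set()
    x = x0
    while x not in seen:
        seen.add(x)
        x = f[x]
    return len(seen)

random.seed(0)
n, trials = 10_000, 200
total = 0
for _ in range(trials):
    f = [random.randrange(n) for _ in range(n)]   # a uniformly random mapping
    total += rho_length(f, random.randrange(n))

print(total / trials, math.sqrt(math.pi * n / 2))  # empirical mean vs. asymptotic value
```

The same iteration structure underlies Pollard-style algorithms, which is why these parameters matter for the cryptographic applications the abstract mentions.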
Automatic Accurate Time-Bound Analysis for High-Level Languages
 In Proceedings of the ACM SIGPLAN 1998 Workshop on Languages, Compilers, and Tools for Embedded Systems, volume 1474 of Lecture Notes in Computer Science
, 1998
Abstract

Cited by 38 (9 self)
This paper describes a general approach for automatic and accurate time-bound analysis. The approach consists of transformations for building time-bound functions in the presence of partially known input structures, symbolic evaluation of the time-bound function based on input parameters, optimizations to make the overall analysis efficient as well as accurate, and measurements of primitive parameters, all at the source-language level. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The measured worst-case times are closely bounded by the calculated bounds.

1 Introduction
Analysis of program running time is important for real-time systems, interactive environments, compiler optimizations, performance evaluation, and many other computer applications. It has been extensively studied in many fields of computer science: algorithms [20, 12, 13, 41], programming languages [38, 21, 30, 33], and systems [35, 28, 32, 31]. It is particularl...
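The transformation idea can be sketched in miniature (a hypothetical illustration in Python, not the paper's Scheme implementation): each construct of the source program is replaced by its cost, and unknown data by its size, so the transformed program computes a worst-case step count instead of a value.

```python
# Original program: linear search over a list.
def search(xs, key):
    for x in xs:
        if x == key:
            return True
    return False

# Time-bound version: the list contents are unknown, only the length n is
# (a partially known input structure); every branch takes the worst case,
# i.e. the loop never exits early.
C_LOOP, C_CMP, C_RET = 1, 1, 1   # primitive cost parameters, measured separately

def search_bound(n):
    return n * (C_LOOP + C_CMP) + C_RET

print(search_bound(100))  # -> 201: worst case is the key being absent
```

Replacing the unit costs with measured timings of the primitives yields the concrete bounds against which the measured worst-case times are compared.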
Automatic time-bound analysis for a higher-order language
 In Proceedings of the ACM SIGPLAN 2002 Workshop on Partial Evaluation and Semantics-Based Program Manipulation
, 2002
Abstract

Cited by 24 (5 self)
Analysis of program running time is important for reactive systems, interactive environments, compiler optimizations, performance evaluation, and many other computer applications. It has been extensively studied in many fields of computer science: algorithms [21, 12, 13, 40], programming languages [38, 22, 31, 35, 34], and systems [36, 29, 33, 32]. Being able to predict accurate time bounds automatically and efficiently ...
Automatic Accurate Cost-Bound Analysis for High-Level Languages
, 2001
Abstract

Cited by 15 (4 self)
This paper describes a language-based approach for automatic and accurate cost-bound analysis. The approach consists of transformations for building cost-bound functions in the presence of partially known input structures, symbolic evaluation of the cost-bound function based on input size parameters, and optimizations to make the overall analysis efficient as well as accurate, all at the source-language level. The calculated cost bounds are expressed in terms of primitive cost parameters. These parameters can be obtained based on the language implementation or be measured conservatively or approximately, yielding accurate, conservative, or approximate time or space bounds. We have implemented this approach and performed a number of experiments for analyzing Scheme programs. The results helped confirm the accuracy of the analysis.
Automatic accurate stack space and heap space analysis for high-level languages
, 2000
Abstract

Cited by 12 (7 self)
This paper describes a general approach for automatic and accurate space and space-bound analyses for high-level languages, considering stack space, heap allocation, and live heap space usage of programs. The approach is based on program analysis and transformations and is fully automatic. The analyses produce accurate upper bounds in the presence of partially known input structures. The analyses have been implemented, and experimental results confirm the accuracy.
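The stack-space side of such an analysis can be illustrated with a small metered example (an illustration only, not the paper's analyser): a derived bound in terms of the input size n is checked against the measured maximum recursion depth, with only the length of the input known.

```python
max_depth = 0

def length(xs, depth=1):
    """Naive recursive list length, metered for stack depth."""
    global max_depth
    max_depth = max(max_depth, depth)
    return 0 if not xs else 1 + length(xs[1:], depth + 1)

def stack_bound(n):
    return n + 1          # one frame per recursive call, plus the initial call

n = 50
assert length(list(range(n))) == n
print(max_depth, "<=", stack_bound(n))   # 51 <= 51: the bound is tight here
```

For this function the bound is exact; in general the analyses of the paper aim at accurate upper bounds even when only part of the input structure is known.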
A Complexity Calculus for Object-Oriented Programs
 Journal of Object-Oriented Systems
, 1994
Abstract

Cited by 11 (6 self)
Modern imperative object-oriented design methods and languages take a rigorous approach to compatibility and reusability mainly from an interface and specification point of view, if at all. Besides functional specification, however, users select classes from libraries based on performance characteristics, too. This article develops an appropriate fundamental approach towards performance estimation, measurement, and metering in OO approaches. We use examples written in the Sather language to demonstrate the concepts of so-called OO-machines, which lend themselves to performance metrics, and a calculus for reasoning about performance. A language binding of these concepts is then sketched in the form of cost annotations that allow programmers to file classes in libraries, well-documented with cost-related specifications. These annotations can optionally be used for instrumenting code that meters cost and checks whether the taken measurements are consistent with the given specification. In...
Reasoning about Complexity of Object-Oriented Programs
, 1994
Abstract

Cited by 4 (4 self)
This report develops an appropriate fundamental approach towards performance estimation, measurement, and metering in OO approaches. We use examples written in the Sather language to demonstrate the concepts of so-called OO-machines, which lend themselves to performance metrics, and a calculus for reasoning about performance. A language binding of these concepts is then sketched in the form of cost annotations that allow programmers to file classes in libraries, well-documented with cost-related specifications. These annotations can optionally be used for instrumenting code that meters cost and checks whether the taken measurements are consistent with the given specification. In this way programmers can benefit from cost annotations by means of documentation and rigorous testing without requiring a deep familiarity with the theoretical underpinnings. Keywords: Object-Oriented Languages, Complexity, Amortized Complexity
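The annotate-meter-check cycle described here can be sketched as a decorator (a hypothetical binding in Python, not the Sather one from the report): a function declares a cost bound in terms of its input size, and an instrumented version counts primitive operations and checks the measurement against the declared specification.

```python
OPS = 0
def tick():
    """Count one primitive operation (here: a comparison)."""
    global OPS
    OPS += 1

def cost(bound):
    """Cost annotation: meter the function and check it against the declared bound."""
    def wrap(fn):
        def metered(xs, *args):
            global OPS
            OPS = 0
            result = fn(xs, *args)
            assert OPS <= bound(len(xs)), f"cost spec violated: {OPS} > {bound(len(xs))}"
            return result
        return metered
    return wrap

@cost(lambda n: n * n)           # declared specification: at most n^2 comparisons
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and (tick() or xs[j - 1] > xs[j]):   # tick() meters each comparison
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```

As in the report, the annotation serves both as documentation and as a rigorously testable specification: a violation of the declared bound fails loudly at run time.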
Probabilistic Recurrence Relations for Parallel Divide-and-Conquer Algorithms
, 1991
Abstract

Cited by 1 (0 self)
We study two probabilistic recurrence relations that arise frequently in the analysis of parallel and sequential divide-and-conquer algorithms (cf. [Karp 91]). Suppose a problem of size x has to be solved. In order to solve it, we divide it into subproblems of size h_1(x), ..., h_k(x), and these subproblems are solved recursively. We assume that the sizes h_i(z) are random variables. This occurs if either the break-up step is randomized or the instances to be solved are drawn from a probability distribution. The running time T(z) of a parallel algorithm is therefore determined by the maximum of the running times T(h_i(z)) of the subproblems, while the sequential running time is determined by the sum of the running times of the subproblems. We give a method for estimating (tight) upper bounds on the probability distribution of T(x) for these two kinds of recurrence relations, answering the open questions in [Karp 91]. 1 Dept. of Computer Science, University of Bonn, 5300 B...
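The two recurrences can be contrasted by simulation (an illustrative Monte Carlo sketch only; the paper derives analytic tail bounds, not simulations). A problem of size x splits into two subproblems of random sizes; the sequential time sums the subproblem times while the parallel time takes their maximum.

```python
import random

def solve(x):
    """One random run of size x: return (sequential time, parallel time)."""
    if x <= 1:
        return (0, 0)
    split = random.randint(0, x - 1)   # randomized break-up step
    ls, lp = solve(split)
    rs, rp = solve(x - 1 - split)
    return (x + ls + rs,               # sequential: sum of subproblem times
            x + max(lp, rp))           # parallel: max of subproblem times

random.seed(1)
runs = [solve(1000) for _ in range(100)]
print(max(s for s, _ in runs), max(p for _, p in runs))  # empirical worst cases
```

On every run the parallel time is at most the sequential one, and the gap between the two empirical distributions is exactly what the paper's tail bounds quantify.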