Results 1–10 of 28
Dynamic programming via static incrementalization
 In Proceedings of the 8th European Symposium on Programming
, 1999
Abstract

Cited by 27 (13 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share subsubproblems. While a straightforward recursive program solves common subsubproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every subsubproblem just once, saves the result, reuses it when the subsubproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
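The contrast the abstract draws can be sketched in Python (a minimal hand-written illustration of the idea, not the paper's transformation system): the straightforward recursion recomputes shared subsubproblems, while the incrementalized version maintains exactly the cached values needed under the input increment n → n+1.

```python
def fib_naive(n):
    # Straightforward recursion: shared subsubproblems are recomputed,
    # so the running time is exponential in n.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_incremental(n):
    # Incrementalized version: after pruning, the only cached results the
    # increment step k -> k+1 needs are the pair (fib(k), fib(k+1)).
    prev, cur = 0, 1  # fib(0), fib(1)
    for _ in range(n):
        prev, cur = cur, prev + cur  # O(1) incremental update per step
    return prev

assert all(fib_naive(k) == fib_incremental(k) for k in range(15))
```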
There and back again
 In ICFP ’02: Proceedings of the seventh ACM SIGPLAN international conference on Functional programming
, 2002
Program Optimization Using Indexed and Recursive Data Structures
, 2002
Abstract

Cited by 7 (6 self)
This paper describes a systematic method for optimizing recursive functions using both indexed and recursive data structures. The method is based on two critical ideas: first, determining a minimal input increment operation so as to compute a function on repeatedly incremented input; second, determining appropriate additional values to maintain in appropriate data structures, based on what values are needed in computation on an incremented input and how these values can be established and accessed. Once these two are determined, the method extends the original program to return the additional values, derives an incremental version of the extended program, and forms an optimized program that repeatedly calls the incremental program. The method can derive all dynamic programming algorithms found in standard algorithm textbooks. There are many previous methods for deriving efficient algorithms, but none is as simple, general, and systematic as ours.
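The two ideas can be illustrated on a concrete problem (maximum segment sum, chosen here for brevity; it is only an example, not the paper's derivation): the minimal input increment is prepending one element, and the additional value maintained is the best sum of a segment starting at the head.

```python
def mss_naive(xs):
    # Straightforward specification: try every segment.
    return max((sum(xs[i:j]) for i in range(len(xs))
                for j in range(i, len(xs) + 1)), default=0)

def mss_incremental(xs):
    best, best_prefix = 0, 0  # result, plus the additional maintained value
    for x in reversed(xs):    # repeatedly apply the increment: x prepended
        best_prefix = max(0, x + best_prefix)  # best segment starting at head
        best = max(best, best_prefix)          # overall best so far
    return best

assert mss_naive([3, -4, 5, -1, 2]) == mss_incremental([3, -4, 5, -1, 2]) == 6
```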
Optimizing Ackermann's Function by Incrementalization
, 2001
Abstract

Cited by 7 (3 self)
This paper describes a formal derivation of an optimized Ackermann's function following a general and systematic method based on incrementalization. The method identifies an appropriate input increment operation and computes the function by repeatedly performing an incremental computation at the step of the increment. This eliminates repeated subcomputations in executions that follow the straightforward recursive definition of Ackermann's function, yielding an optimized program that is drastically faster and takes extremely little space. This case study uniquely shows the power and limitation of the incrementalization method, as well as both the iterative and recursive nature of computation underlying the optimized Ackermann's function.
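The repeated subcomputations the paper eliminates are easy to see in Python. The sketch below uses plain memoization to make them visible; it is only a hedged illustration, not the incrementalized program the paper derives (which also achieves the space savings the abstract mentions).

```python
from functools import lru_cache

def ack_naive(m, n):
    # Straightforward recursive definition of Ackermann's function.
    if m == 0:
        return n + 1
    if n == 0:
        return ack_naive(m - 1, 1)
    return ack_naive(m - 1, ack_naive(m, n - 1))

@lru_cache(maxsize=None)
def ack_cached(m, n):
    # Same definition, but each (m, n) pair is computed only once.
    if m == 0:
        return n + 1
    if n == 0:
        return ack_cached(m - 1, 1)
    return ack_cached(m - 1, ack_cached(m, n - 1))

assert ack_naive(2, 3) == ack_cached(2, 3) == 9
assert ack_cached(3, 3) == 61  # 2**(3+3) - 3
```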
Strengthening invariants for efficient computation
 in Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages
, 2001
Abstract

Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
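Strength reduction, one application the abstract names, can be sketched in a few lines (a textbook example, not the paper's analyses): the strengthened invariants sq == i*i and odd == 2*i + 1 let the multiplication in the loop body be maintained by additions alone.

```python
def squares_naive(n):
    out = []
    for i in range(n):
        out.append(i * i)    # multiplication recomputed from scratch
    return out

def squares_reduced(n):
    out, sq, odd = [], 0, 1  # invariants: sq == i*i, odd == 2*i + 1
    for i in range(n):
        out.append(sq)
        sq += odd            # (i+1)**2 == i**2 + (2*i + 1): addition only
        odd += 2
    return out

assert squares_naive(8) == squares_reduced(8) == [0, 1, 4, 9, 16, 25, 36, 49]
```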
Recursive Function Data Allocation to Scratch-Pad Memory
Abstract

Cited by 4 (0 self)
This paper presents the first automatic scheme to allocate local (stack) data in recursive functions to scratch-pad memory (SPM) in embedded systems. A scratchpad is a fast directly addressed compiler-managed SRAM memory that replaces the hardware-managed cache. It is motivated by its significantly lower access time, energy consumption, real-time bounds, area and overall runtime. Existing compiler methods for allocating data to scratchpad are able to place only code, global, heap and non-recursive stack data in scratchpad memory; stack data for recursive functions is allocated entirely in DRAM, resulting in poor performance. In this paper we present a dynamic yet compiler-directed allocation method for recursive function stack data that, for the first time, is able to place a portion of recursive stack data in scratchpad. It has almost no software-caching overhead, and is able to move recursive function data back and forth between scratchpad and DRAM to better track the program’s locality characteristics. With our method, all code, global, stack and heap variables can share the same scratchpad. When compared to placing all recursive function data in DRAM and all other variables in scratchpad, our results show that our method reduces the average runtime of our benchmarks by 29.3%, and the average power consumption by 31.1%, for the same size of scratchpad fixed at 5% of total data size. Furthermore, significant savings were observed when comparing our method against cache-based alternatives for SPM allocation. Finally, we show results that analyze the effects of profile variation on our allocation approach and present a modified version of our method which minimizes variation for profile-based allocations.
A Monadic Approach for Avoiding Code Duplication when Staging Memoized Functions
, 2006
Abstract

Cited by 3 (1 self)
Building program generators that do not duplicate generated code can be challenging. At the same time, code duplication can easily increase both generation time and runtime of generated programs by an exponential factor. We identify an instance of this problem that can arise when memoized functions are staged. Without addressing this problem, it would be impossible to effectively stage dynamic programming algorithms. Intuitively, direct staging undoes the effect of memoization. To solve this problem once and for all, and for any function that uses memoization, we propose a staged monadic combinator library. Experimental results confirm that the library works as expected. Preliminary results also indicate that the library is useful even when memoization is not used.
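The paper's library is a staged monadic combinator library; as an unstaged analogue (a hedged Python sketch, not the paper's code), a memoizing fixpoint combinator separates the recursion pattern from the memo table, so any open-recursive function can be memoized without rewriting it.

```python
def memo_fix(f):
    # Ties the recursive knot through a memo table: f receives a `self`
    # argument through which all recursive calls are routed and cached.
    table = {}
    def self_(x):
        if x not in table:
            table[x] = f(self_, x)
        return table[x]
    return self_

# Open-recursive definition: recursive calls go through the supplied `self`.
fib = memo_fix(lambda self, n: n if n < 2 else self(n - 1) + self(n - 2))

assert fib(40) == 102334155  # linear time, unlike the unmemoized recursion
```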
Input-Covering Schedules for Multithreaded Programs
Abstract

Cited by 2 (2 self)
We propose constraining multithreaded execution to small sets of input-covering schedules, which we define as follows: given a program P, we say that a set of schedules Σ covers all inputs of program P if, when given any valid input, P’s execution can be constrained to some schedule in Σ and still produce a semantically valid result. Our approach is to first compute a small Σ for a given program P, and then, at runtime, constrain P’s execution to always follow some schedule in Σ, and never deviate. We have designed an algorithm that uses symbolic execution to systematically enumerate a set of input-covering schedules, Σ. To deal with programs that run for an unbounded length of time, we partition execution into bounded epochs, find input-covering schedules for each epoch in isolation, and then piece the schedules together at runtime. We have implemented this algorithm and a constrained execution runtime, and we report early results. Our approach has the following advantage: because all possible runtime schedules are known a priori, we can seek to validate the program by thoroughly testing each schedule in Σ, in isolation, without needing to reason about the huge space of thread interleavings that arises due to conventional nondeterministic execution.
Propositional dynamic logic for reasoning about first-class agent interaction protocols
 Computational Intelligence
, 2010
Abstract

Cited by 2 (2 self)
For agents to fulfill their potential of being intelligent and adaptive, it is useful to model their interaction protocols as executable entities that can be referenced, inspected, composed, shared and invoked between agents, all at runtime. We use the term first-class protocol to refer to such protocols. Rather than having hard-coded decision making mechanisms for choosing their next move, agents can inspect the protocol specification at runtime to do so, increasing their flexibility. In this paper, we show that propositional dynamic logic (PDL) can be used to represent and reason about the outcomes of first-class protocols. We define a proof system for PDL that permits reasoning about recursively defined protocols. The proof system is divided into two parts: one for reasoning about terminating protocols, and one for reasoning about non-terminating protocols. We prove that proofs about terminating protocols can be automated, while proofs about non-terminating protocols cannot be automated in some cases. We prove that, for a restricted class of non-terminating protocols, proofs about them can be transformed to proofs about terminating protocols, making them automatable. Key words: multi-agent systems, agent interaction protocols, propositional dynamic logic, first-class protocols.
Program Transformation by Solving Recurrences
Abstract

Cited by 2 (0 self)
Recursive programs may require large numbers of procedure calls and stack operations, and many such recursive programs exhibit exponential time complexity, due to the time spent recalculating already computed subproblems. As a result, methods which transform a given recursive program to an iterative one have been intensively studied. We propose here a new framework for transforming programs by removing recursion. The framework includes a unified method of deriving low time-complexity programs by solving recurrences extracted from the program sources. Our prototype system, ������, is an initial implementation of the framework, automatically finding simpler “closed form” versions of a class of recursive programs. Though in general the solution of recurrences is easier if the functions have only a single recursion parameter, we show a practical technique for solving those with multiple recursion parameters.
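The framework's goal can be illustrated with a textbook recurrence (a hand-worked example, not the prototype's output): the recursive program below induces T(n) = T(n-1) + n with T(0) = 0, and solving it gives the closed form n(n+1)/2, a constant-time replacement.

```python
def triangle_rec(n):
    # Linear-time recursion with a single recursion parameter;
    # its recurrence is T(n) = T(n-1) + n, T(0) = 0.
    return 0 if n == 0 else triangle_rec(n - 1) + n

def triangle_closed(n):
    # "Closed form" version obtained by solving the recurrence.
    return n * (n + 1) // 2

assert all(triangle_rec(k) == triangle_closed(k) for k in range(50))
```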