Results 11–20 of 28
Computational Divided Differencing and Divided-Difference Arithmetics
, 2000
"... Tools for computational differentiation transform a program that computes a numerical function F (x) into a related program that computes F 0 (x) (the derivative of F ). This paper describes how techniques similar to those used in computationaldifferentiation tools can be used to implement other pr ..."
Abstract

Cited by 1 (0 self)
Tools for computational differentiation transform a program that computes a numerical function F(x) into a related program that computes F′(x) (the derivative of F). This paper describes how techniques similar to those used in computational-differentiation tools can be used to implement other program transformations, in particular a variety of transformations for computational divided differencing. The specific technical contributions of the paper are as follows: It presents a program transformation that, given a numerical function F(x) defined by a program, creates a program that computes F[x0, x1], the first divided difference of F(x), where

    F[x0, x1] := (F(x0) − F(x1)) / (x0 − x1)        if x0 ≠ x1
    F[x0, x1] := (d/dz) F(z), evaluated at z = x0   if x0 = x1

It shows how computational first divided differencing generalizes computational differentiation. It presents a second program transformation that permits the creation of higher-order divided differences of a numerical function defined ...
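The defining identity above can be evaluated directly. As a minimal illustration (a Python sketch with hypothetical names; the paper's actual contribution is a program transformation that avoids the cancellation error of this naive evaluation, which is not shown here):

```python
def divided_difference(f, df, x0, x1):
    """First divided difference F[x0, x1]: the slope (F(x0) - F(x1)) / (x0 - x1)
    when x0 != x1, falling back to the derivative F'(x0) at a confluent point."""
    if x0 != x1:
        return (f(x0) - f(x1)) / (x0 - x1)
    return df(x0)

# Example: F(x) = x^2 with F'(x) = 2x
slope = divided_difference(lambda x: x * x, lambda x: 2 * x, 3.0, 1.0)      # (9-1)/(3-1) = 4.0
confluent = divided_difference(lambda x: x * x, lambda x: 2 * x, 3.0, 3.0)  # F'(3) = 6.0
```

Note how the confluent case x0 = x1 reduces the divided difference to ordinary differentiation, which is the sense in which the paper's transformation generalizes computational differentiation.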
Practical aspects of multistage programming, Rice University
, 2004
"... Abstract. Highlevel languages offer abstraction mechanisms that can reduce development time and improve software quality. But abstraction mechanisms often have an accumulative runtime overhead that can discourage their use. Multistage programming (MSP) languages offer constructs that make it possi ..."
Abstract

Cited by 1 (0 self)
Abstract. High-level languages offer abstraction mechanisms that can reduce development time and improve software quality. But abstraction mechanisms often have a cumulative runtime overhead that can discourage their use. Multistage programming (MSP) languages offer constructs that make it possible to use abstraction mechanisms without paying a runtime overhead. This paper studies applying MSP to implementing dynamic programming (DP) problems. The study reveals that staging high-level implementations of DP algorithms naturally leads to a code explosion problem. In addition, it is common that high-level languages are not designed to deliver the kind of performance that is desirable in implementations of such algorithms. The paper proposes a solution to each of these two problems. Staged memoization is used to address code explosion, and a kind of "offshoring" translation is used to address the performance problem. For basic DP problems, the performance of the resulting specialized C implementations is almost always better than that of handwritten generic C implementations.
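The core MSP idea, generating a specialized program at one stage and running it at the next, can be sketched in Python using `exec` as a stand-in for a staged language's code-generation constructs (all names here are hypothetical; the paper works in a proper MSP language, not via string generation):

```python
def stage_power(n):
    """Generate a power function specialized to a fixed exponent n.
    The loop over n runs once, at generation time; the returned function
    contains only straight-line multiplications, with no loop and no n."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"          # unroll one multiplication per step
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)                # next stage: compile the generated code
    return namespace[f"power_{n}"]

cube = stage_power(3)                   # generated body: x * (x * (x * (1)))
```

Naively staging a DP recurrence the same way duplicates the generated code of shared subproblems exponentially, which is the code explosion problem the abstract refers to; staged memoization shares generated fragments instead.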
Program Parallelization using Synchronized Pipelining
"... Abstract. While there are wellunderstood methods for detecting loops whose iterations are independent and parallelizing them, there are comparatively fewer proposals that support parallel execution of a sequence of loops or nested loops in the case where such loops have dependencies among them. Thi ..."
Abstract
Abstract. While there are well-understood methods for detecting loops whose iterations are independent and parallelizing them, there are comparatively fewer proposals that support parallel execution of a sequence of loops or nested loops in the case where such loops have dependencies among them. This paper introduces a refined notion of independence, called eventual independence, that in its simplest form considers two loops, say loop 1 and loop 2, and captures the idea that for every i there exists k such that the (i+1)-th iteration of loop 2 is independent of the j-th iteration of loop 1, for all j ≥ k. Eventual independence provides the foundation of a semantics-preserving program transformation, called synchronized pipelining, that makes execution of consecutive or nested loops parallel, relying on a minimal number of synchronization events to ensure semantics preservation. The practical benefits of synchronized pipelining are demonstrated through experimental results on common algorithms such as sorting and Fourier transforms.
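The pipelining idea can be illustrated with two dependent loops run concurrently, where iteration i of the second loop only needs the first loop's results up to index i. This is a hand-built Python sketch of the execution pattern (one event per producer iteration; the paper derives such synchronization automatically and minimizes the number of events, which this sketch does not):

```python
import threading

N = 8
a = [0] * N
done = [threading.Event() for _ in range(N)]   # one sync event per loop-1 iteration
result = [0] * N

def loop1():
    for j in range(N):
        a[j] = j * j          # produce a[j]
        done[j].set()         # signal: loop-2 iterations needing a[j] may proceed

def loop2():
    for i in range(N):
        done[i].wait()        # iteration i of loop 2 depends on loop 1 only up to k = i
        result[i] = a[i] + 1  # consume a[i]

t1 = threading.Thread(target=loop1)
t2 = threading.Thread(target=loop2)
t1.start(); t2.start()
t1.join(); t2.join()
```

Running the two loops sequentially would give the same `result`; the events let loop 2 start consuming early elements while loop 1 is still producing later ones, which is the essence of pipelined execution.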
Copyright Notice
"... IP Fast Reroute Framework This document provides a framework for the development of IP fastreroute mechanisms that provide protection against link or router failure by invoking locally determined repair paths. Unlike MPLS fastreroute, the mechanisms are applicable to a network employing conventiona ..."
Abstract
IP Fast Reroute Framework This document provides a framework for the development of IP fast-reroute mechanisms that provide protection against link or router failure by invoking locally determined repair paths. Unlike MPLS fast-reroute, the mechanisms are applicable to a network employing conventional IP routing and forwarding. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at
The Internet and the World Wide Web continue to grow as
"... This paper investigates the advantages of the functional language paradigm and its use in secure programming. The intended audience is software professionals from either the computer security domain or the functional language domain who have not yet considered crossdomain synthesis of ideas. Secure ..."
Abstract
This paper investigates the advantages of the functional language paradigm and its use in secure programming. The intended audience is software professionals from either the computer security domain or the functional language domain who have not yet considered cross-domain synthesis of ideas. Secure programming describes those practices that software developers use to provide security features in their applications. To study its relationship to software development, secure programming can be divided into the following categories: safe program initialization, access control, input validation, cryptography, safe networking, safe random number generation, and anti-tampering. Software in these categories has historically been coded in imperative languages. More recently, object-oriented languages such as Java have also been used. What about a functional language such as Haskell? Does this language offer something new to secure programming? This paper provides an answer to that question. It lists features in Haskell that provide security benefits, identifies how Haskell is already serving the needs of some of the secure programming practices, and demonstrates how the CAST-128 encryption algorithm can be implemented successfully and efficiently in Haskell when the code is compiled rather than interpreted. The paper also compares the Haskell execution results to a similar implementation in C.
Control Flow Analysis for Recursion Removal
"... Abstract. In this paper a new method for removing recursion from algorithms is demonstrated. The method for removing recursion is based on algebraic manipulations of a mathematical model of the control flow. The method is not intended to solve all possible recursion removal problems, but instead can ..."
Abstract
Abstract. In this paper a new method for removing recursion from algorithms is demonstrated. The method for removing recursion is based on algebraic manipulations of a mathematical model of the control flow. The method is not intended to solve all possible recursion removal problems, but instead can be seen as one tool in a larger tool box of program transformations. Our method can handle certain types of recursion that are not easily handled by existing methods, but it may be overkill for certain types of recursion where existing methods can be applied, such as tail recursion. The motivation for a new method is discussed, and the method is illustrated on an MPEG-4 visual texture decoding algorithm.
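The tail-recursion case that the abstract cites as already well handled can be removed mechanically; a minimal Python sketch of that baseline transformation (the paper's algebraic control-flow method targets harder, non-tail recursion, which this example does not cover):

```python
# Recursive form: Euclid's gcd as naturally written
def gcd_rec(a, b):
    if b == 0:
        return a
    return gcd_rec(b, a % b)   # tail call: nothing remains to do after it returns

# After recursion removal: the tail call becomes a loop that rebinds the arguments
def gcd_iter(a, b):
    while b != 0:
        a, b = b, a % b
    return a
```

Because the recursive call is the last action, no stack of pending work is needed, so the call can be replaced by an argument update and a jump back to the top of the function.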
by
, 2006
"... I hereby declare that this thesis is of my own composition, and that it contains no material previously submitted for the award of any other degree. The work reported in this thesis has been executed by myself, except where due acknowledgement is made in the text. ii Anna R. Parker This thesis inve ..."
Abstract
I hereby declare that this thesis is of my own composition, and that it contains no material previously submitted for the award of any other degree. The work reported in this thesis has been executed by myself, except where due acknowledgement is made in the text. (Anna R. Parker)
This thesis investigates the evolutionary plausibility of the Minimalist Program. Is such a theory of language reasonable given the assumption that the human linguistic capacity has been subject to the usual forces and processes of evolution? More generally, this thesis is a comment on the manner in which theories of language can and should be constrained. What are the constraints that must be taken into account when constructing a theory of language? These questions are addressed by applying evidence gathered in evolutionary biology to data
Staging Dynamic Programming Algorithms
, 2005
"... Applications of dynamic programming (DP) algorithms are numerous, and include genetic engineering and operations research problems. At a high level, DP algorithms are specified as a system of recursive equations implemented using memoization. The recursive nature of these equations suggests that the ..."
Abstract
Applications of dynamic programming (DP) algorithms are numerous, and include genetic engineering and operations research problems. At a high level, DP algorithms are specified as a system of recursive equations implemented using memoization. The recursive nature of these equations suggests that they can be written naturally in a functional language. However, the requirement for memoization poses a subtle challenge: memoization can be implemented using monads, but a systematic treatment introduces several layers of abstraction that can have a prohibitive runtime overhead. Inspired by other researchers' experience with automatic specialization (partial evaluation), this paper investigates the feasibility of explicitly staging DP algorithms in the functional setting. We find that the key challenge is code duplication (which is automatically handled by partial evaluators), and show that a key source of code duplication can be isolated and addressed once and for all. The result is a simple combinator library. We use this library to implement several standard DP algorithms including ones in standard algorithm textbooks (e.g.
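The pattern of layering memoization over a system of recursive equations, without threading a table through every equation, can be sketched as a small Python combinator (names are hypothetical; the paper's library is for a staged functional language, and this sketch omits the staging entirely):

```python
def memoize(open_rec):
    """Tie the recursive knot through a memo table. 'open_rec' takes the
    recursive reference as its first argument, so memoization is added in
    one place instead of being woven into each recursive equation."""
    table = {}
    def wrapped(*args):
        if args not in table:
            table[args] = open_rec(wrapped, *args)
        return table[args]
    return wrapped

def edit_distance(s, t):
    # The DP recurrence written as a plain system of equations,
    # with 'self' standing for the recursive call.
    def step(self, i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        cost = 0 if s[i - 1] == t[j - 1] else 1
        return min(self(i - 1, j) + 1,       # deletion
                   self(i, j - 1) + 1,       # insertion
                   self(i - 1, j - 1) + cost)  # substitution or match
    return memoize(step)(len(s), len(t))
```

Writing `step` in "open" form (recursing through the parameter rather than by name) is what lets memoization be defined once and reused across DP problems, which mirrors the combinator-library approach the abstract describes.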
Optimizing the Stack Size of Recursive Functions
"... For memory constrained environments, optimization for program size is often as important as, if not more important than, optimization for execution speed. Commonly, compilers try to reduce the code segment but neglect the stack segment, although the stack can significantly grow during the execution ..."
Abstract
For memory-constrained environments, optimization for program size is often as important as, if not more important than, optimization for execution speed. Commonly, compilers try to reduce the code segment but neglect the stack segment, although the stack can significantly grow during the execution of recursive functions because a separate activation record is required for each recursive call. If a formal parameter or local variable is dead at all recursive calls, then it can be declared global so that only one instance exists independent of the recursion depth. We found that in 70% of our benchmark functions, it is possible to reduce the stack size by declaring formal parameters and local variables global. Often, live ranges of formal parameters and local variables can be split at recursive calls through program transformations. These splitting transformations allowed us to further optimize the stack size of all our benchmark functions. If all formal parameters and local variables can be declared global, then such functions may be transformable into iterations. This was possible for all such benchmark functions.
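The parameter-to-global transformation can be illustrated as follows (a Python sketch with hypothetical names; the paper targets C-like activation records, and Python frames do not map onto them one-to-one, so this is purely illustrative):

```python
# Original: 'scale' is copied into every activation record, even though
# it never changes and is not needed after any recursive call returns.
def total_rec(xs, i, scale):
    if i == len(xs):
        return 0
    return scale * xs[i] + total_rec(xs, i + 1, scale)

# Transformed: 'scale' is dead at each recursive call, so a single
# module-level instance replaces the per-frame parameter.
_scale = None

def total_global(xs, i):
    if i == len(xs):
        return 0
    return _scale * xs[i] + total_global(xs, i + 1)

def total(xs, scale):
    global _scale
    _scale = scale              # one instance, independent of recursion depth
    return total_global(xs, 0)
```

Each frame of the transformed recursion carries one fewer value; in a C setting that directly shrinks every activation record on the stack, and once all parameters can be hoisted this way the function becomes a candidate for conversion to iteration.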