Results 1–10 of 13
Programming with bananas, lenses, envelopes and barbed wire
In FPCA, 1991
Abstract

Cited by 302 (11 self)
We develop a calculus for lazy functional programming based on recursion operators associated with data type definitions. For these operators we derive various algebraic laws that are useful in deriving and manipulating programs. We show that all example functions in Bird and Wadler's "Introduction to Functional Programming" can be expressed using these operators.
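As a rough illustration of what such a recursion operator buys, here is a minimal sketch (in Python, purely for illustration; the paper works in a lazy functional calculus) of a list catamorphism, with two familiar functions expressed as instances of it:

```python
# A catamorphism ("banana") for cons-lists: replace the empty list by a
# seed value e and each cons by a binary operator f, folding from the right.
def cata(f, e, xs):
    acc = e
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

# Familiar functions are instances of the same operator, so algebraic
# laws proved once for cata apply to all of them.
length = lambda xs: cata(lambda _, n: 1 + n, 0, xs)
concat = lambda xss: cata(lambda ys, zs: ys + zs, [], xss)
```

Proving a fusion law once for `cata` then covers every function written in this form.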
Tupling Calculation Eliminates Multiple Data Traversals
In ACM SIGPLAN International Conference on Functional Programming, 1997
Abstract

Cited by 33 (18 self)
Tupling is a well-known transformation tactic that obtains new, efficient recursive functions by grouping several recursive functions into a tuple. It can be applied to eliminate multiple traversals over a common data structure. The major difficulty in tupling transformation is finding which functions should be tupled and how to transform the tupled function into an efficient one. Previous approaches to tupling transformation are essentially based on fold/unfold transformation. Though general, they suffer from the high cost of keeping track of function calls to avoid infinite unfolding, which prevents them from being used in a compiler. To remedy this situation, we propose a new method to expose recursive structures in recursive definitions and show how this structural information can be exploited to calculate efficient programs by means of tupling. Our new tupling calculation algorithm eliminates most multiple data traversals and is easy to implement.
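A minimal sketch of the tactic (Python for illustration; the paper's setting is recursive functional programs): the naive average traverses its input twice, while the tupled version groups the two recursive functions, sum and length, into one tuple-valued function that traverses the list only once:

```python
# Naive specification: two traversals over the same list.
def average_naive(xs):
    return sum(xs) / len(xs)

# Tupling: one tuple-valued recursive function computes (sum, length)
# in a single traversal of the list.
def sum_len(xs):
    if not xs:
        return (0, 0)
    s, n = sum_len(xs[1:])
    return (xs[0] + s, 1 + n)

def average(xs):
    s, n = sum_len(xs)
    return s / n
```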
The essence of the Iterator pattern
In McBride, Conor, & Uustalu, Tarmo (eds), Mathematically Structured Functional Programming, 2006
Abstract

Cited by 17 (8 self)
The ITERATOR pattern gives a clean interface for element-by-element access to a collection. Imperative iterations using the pattern have two simultaneous aspects: mapping and accumulating. Various existing functional iterations model one or the other of these, but not both simultaneously. We argue that McBride and Paterson's idioms, and in particular the corresponding traverse operator, do exactly this, and therefore capture the essence of the ITERATOR pattern. We present some axioms for traversal, and illustrate with a simple example, the repmin problem.
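A rough sketch of the two aspects the abstract distinguishes (Python, illustrative only; traverse itself is an idiomatic/applicative operator, which this does not model): a single pass that both maps over the elements and accumulates a result alongside:

```python
# One traversal with both aspects at once: the mapping aspect relabels
# each element with its index, while the accumulating aspect threads a
# running count through the pass, in the spirit of traverse with a
# state idiom.
def traverse_count(xs):
    out, count = [], 0
    for x in xs:
        out.append((count, x))  # mapping: produce a new element
        count += 1              # accumulating: thread state along
    return out, count
```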
Make it Practical: A Generic Linear-Time Algorithm for Solving Maximum Weight-sum Problems
In Proceedings of the 5th ACM SIGPLAN International Conference on Functional Programming (ICFP'00), 2000
Abstract

Cited by 12 (8 self)
In this paper we propose a new method for deriving a practical linear-time algorithm from the specification of a maximum weight-sum problem: from the elements of a data structure x, find a subset which satisfies a certain property p and whose weight-sum is maximum. Previously proposed methods for automatically generating linear-time algorithms are theoretically appealing, but the algorithms generated are hardly useful in practice due to a huge constant factor for space and time. The key points of our approach are to express the property p by a recursive boolean function over the structure x rather than a usual logical predicate, and to apply program transformation techniques to reduce the constant factor. We present an optimization theorem, give a calculational strategy for applying the theorem, and demonstrate the effectiveness of our approach through several non-trivial examples which would be difficult to handle with previously available methods.
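As a hedged illustration (Python; the concrete property and the code below are not from the paper), take the property p that forbids selecting two adjacent elements of a list. Carrying one running optimum per state of the recursive boolean property yields a linear-time, constant-space pass:

```python
# Linear-time maximum weight-sum for the property "no two adjacent
# elements selected". One optimum is kept per state of the property:
# whether the previous element was taken or skipped.
def max_weight_sum(ws):
    best_take = float('-inf')  # best sum with the last element taken
    best_skip = 0              # best sum with the last element skipped
    for w in ws:
        best_take, best_skip = best_skip + w, max(best_take, best_skip)
    return max(best_take, best_skip)
```

The empty subset is always admissible here, so the result is never negative.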
Iterative-Free Program Analysis
In Proc. of Intl. Conference on Functional Programming, 2003
Abstract

Cited by 2 (1 self)
Flow analyses are reduced to the problem of finding a fixed point in a certain transition system, and such a fixed point is commonly computed through an iterative procedure that repeats tracing until convergence.
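The conventional iterative procedure the abstract refers to can be sketched as Kleene-style fixed-point iteration (Python, illustrative only; the transition system here is a toy):

```python
# Compute a fixed point of f by repeated application, starting from a
# bottom element, until the result stops changing (converges).
def fixed_point(f, bottom):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Toy example: reachable-set analysis over a tiny transition system.
edges = {0: {1}, 1: {2}, 2: set()}
step = lambda s: s | {v for u in s for v in edges[u]}
```

Starting from `{0}`, iteration traces `{0}`, `{0, 1}`, `{0, 1, 2}` and then converges.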
Towards a Modular Program Derivation via Fusion and Tupling
In The First ACM SIGPLAN Conference on Generators and Components, Lecture, 2002
Abstract

Cited by 2 (0 self)
We show how programming pearls can be systematically derived via fusion, followed by tupling transformations. By focusing on the elimination of intermediate data structures (fusion) followed by the elimination of redundant calls (tupling), we systematically realise both space- and time-efficient algorithms from naive specifications. We illustrate our approach using the well-known maximum segment sum (MSS) problem and the less-known maximum segment product (MSP) problem. While the two problems share similar specifications, their optimised codes are significantly different. This divergence in the transformed codes does not pose any difficulty: by relying on modular techniques, we are able to systematically reuse both code and transformation in our derivation.
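For reference, the kind of one-pass program such a derivation arrives at for MSS looks like the following sketch (Python; the paper derives it calculationally rather than writing it directly). Tupling the global maximum with the maximum over segments ending at the current position removes the redundant recomputation:

```python
# One-pass maximum segment sum: tuple the overall best segment sum with
# the best sum of a segment ending at the current position.
def mss(xs):
    best = ending_here = 0  # the empty segment (sum 0) is allowed
    for x in xs:
        ending_here = max(0, ending_here + x)  # extend or restart
        best = max(best, ending_here)
    return best
```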
A Calculational Framework for Parallelization of Sequential Programs
In International Symposium on Information Systems and Technologies for Network Society, 1997
"... this paper, we propose ..."
Calculating linear-time algorithms for solving maximum weight-sum problems
Computer Software, 2001
Abstract

Cited by 1 (1 self)
In this paper, we propose a new method to derive practical linear-time algorithms for maximum weight-sum problems. A maximum weight-sum problem is specified as follows: given recursive data x, find an optimal subset of the elements of x which not only satisfies a certain property p but also maximizes the sum of the weights of the elements of the subset. The key point of our approach is to describe the property p as a functional program, which enables us to use program transformation techniques. Based on this approach, we present the optimization theorem, with which we construct a systematic framework to calculate efficient linear-time algorithms for maximum weight-sum problems on recursive data structures. We demonstrate the effectiveness of our approach through several interesting and non-trivial examples which would be difficult to solve with known approaches.
Generation of Efficient Algorithms for Maximum Marking Problems
Abstract

Cited by 1 (0 self)
In existing work on graph algorithms, it is known that a linear-time algorithm can be derived mechanically from a logical formula for a class of optimization problems. However, this approach has a serious problem: the derived algorithm has a huge constant factor. In this work, we redefine this problem on recursive data structures as a maximum marking problem and propose a method for deriving a linear-time algorithm for it. In this method, the specification is given using recursive functions instead of a logical formula, which results in a practical linear-time algorithm. The method is mechanical, and in fact, based on this derivation method, we have built a system which automatically generates a practical linear-time algorithm from the specification of a maximum marking problem.
The Third Homomorphism Theorem on Trees: Downward & Upward Lead to Divide-and-Conquer
Abstract

Cited by 1 (0 self)
Parallel programs on lists have been intensively studied. It is well known that associativity provides a good characterization of divide-and-conquer parallel programs. In particular, the third homomorphism theorem is not only useful for the systematic development of parallel programs on lists, but is also suitable for automatic parallelization. The theorem states that if two sequential programs iterate over the same list leftward and rightward, respectively, and compute the same value, then there exists a divide-and-conquer parallel program that computes the same value as the sequential programs. While there have been many studies on lists, few have addressed the characterization and development of parallel programs on trees. Naive divide-and-conquer programs, which divide a tree at the root and compute independent subtrees in parallel, take time proportional to the height of the input tree and have poor scalability with respect to the number of processors when the input tree is ill-balanced. In this paper, we develop a method for systematically constructing scalable divide-and-conquer parallel programs on trees, in which two sequential programs lead to a scalable divide-and-conquer parallel program. We focus on paths instead of trees so as to utilize rich results on lists, and demonstrate that associativity provides a good characterization of scalable divide-and-conquer parallel programs on trees. Moreover, we generalize the third homomorphism theorem from lists to trees. We demonstrate the effectiveness of our method with various examples. Our results, being generalizations of known results for lists, are generic in the sense that they work well for all polynomial data structures.
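A minimal list-side illustration of the theorem's shape (Python; the paper's contribution is the generalization to trees): because sum is computable both by a leftward and by a rightward traversal, the third homomorphism theorem guarantees an associative combine, yielding a divide-and-conquer form:

```python
# Leftward and rightward sequential versions of the same function.
def sum_left(xs):
    acc = 0
    for x in xs:            # leftward traversal (foldl)
        acc = acc + x
    return acc

def sum_right(xs):
    acc = 0
    for x in reversed(xs):  # rightward traversal (foldr)
        acc = x + acc
    return acc

# Since both compute the same value, a divide-and-conquer program with
# an associative combine (+) exists; the two halves are independent and
# could be evaluated in parallel.
def sum_dc(xs):
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    return sum_dc(xs[:mid]) + sum_dc(xs[mid:])
```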