Results 1–10 of 14
Type-Indexed Data Types
 Science of Computer Programming
, 2004
Abstract

Cited by 59 (21 self)
A polytypic function is a function that can be instantiated on many data types to obtain data-type-specific functionality. Examples of polytypic functions are the functions that can be derived in Haskell, such as show, read, and ` '. More advanced examples are functions for digital searching, pattern matching, unification, rewriting, and structure editing. For each of these problems, we not only have to define polytypic functionality, but also a type-indexed data type: a data type that is constructed in a generic way from an argument data type. For example, in the case of digital searching we have to define a search tree type by induction on the structure of the type of search keys. This paper shows how to define type-indexed data types, discusses several examples of type-indexed data types, and shows how to specialize type-indexed data types. The approach has been implemented in Generic Haskell, a generic programming extension of the functional language Haskell.
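As a sketch of the digital-searching example, the trie for a structured key type can be built by induction on the structure of the key. The following plain-Haskell fragment uses associated data families rather than Generic Haskell, and the names `Key`, `Trie`, `emptyT`, `lookupT`, and `insertT` are illustrative, not taken from the paper:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A finite-map type whose shape is computed from the key type,
-- in the spirit of the paper's digital-search-tree example.
class Key k where
  data Trie k v
  emptyT  :: Trie k v
  lookupT :: k -> Trie k v -> Maybe v
  insertT :: k -> v -> Trie k v -> Trie k v

-- A Bool key needs exactly two slots, one per constructor.
instance Key Bool where
  data Trie Bool v = BoolTrie (Maybe v) (Maybe v)
  emptyT = BoolTrie Nothing Nothing
  lookupT False (BoolTrie f _) = f
  lookupT True  (BoolTrie _ t) = t
  insertT False v (BoolTrie _ t) = BoolTrie (Just v) t
  insertT True  v (BoolTrie f _) = BoolTrie f (Just v)

-- A pair key indexes a trie of tries: Trie (a, b) v ~ Trie a (Trie b v).
instance (Key a, Key b) => Key (a, b) where
  data Trie (a, b) v = PairTrie (Trie a (Trie b v))
  emptyT = PairTrie emptyT
  lookupT (a, b) (PairTrie t) = lookupT a t >>= lookupT b
  insertT (a, b) v (PairTrie t) =
      PairTrie (insertT a (insertT b v inner) t)
    where inner = maybe emptyT id (lookupT a t)

main :: IO ()
main = print (lookupT (True, False) (insertT (True, False) 'x' emptyT))
```

The pair instance is where the induction shows: the search tree for a compound key is assembled generically from the search trees of its component key types.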
Generic Accumulations
, 2002
Abstract

Cited by 10 (0 self)
Accumulations keep intermediate results in additional parameters, which are eventually used in later stages of the computation. We present a generic definition of accumulations, achieved by the introduction of a new recursive operator on inductive types. We also show that the notion of downwards accumulation developed by Gibbons is subsumed by our notion of accumulation.
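A minimal instance of a downwards accumulation, under assumed names (`Tree`, `downAccum` are mine, not the paper's operator), illustrating how intermediate results are threaded through an extra parameter:

```haskell
data Tree a = Leaf a | Node a (Tree a) (Tree a)
  deriving (Eq, Show)

-- Relabel each node with the value accumulated along the path from
-- the root; the accumulator is the extra parameter threaded top-down.
downAccum :: (b -> a -> b) -> b -> Tree a -> Tree b
downAccum op acc (Leaf a)     = Leaf (op acc a)
downAccum op acc (Node a l r) =
    Node acc' (downAccum op acc' l) (downAccum op acc' r)
  where acc' = op acc a
```

For example, `downAccum (+) 0` relabels every node with the sum of the labels on its root path, a classic downwards accumulation in Gibbons' sense.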
Towards Merging Recursion and Comonads
, 2000
Abstract

Cited by 9 (2 self)
Comonads are mathematical structures that account naturally for effects that derive from the context in which a program is executed. This paper reports ongoing work on the interaction between recursion and comonads. Two applications are shown that naturally lead to versions of a comonadic fold operator on the product comonad. Both versions capture functions that require extra arguments for their computation and are related to the notion of strong datatype.

1 Introduction. One of the main features of recursive operators derivable from datatype definitions is that they impose a structure upon programs which can be exploited for program transformation. Recursive operators structure functional programs according to the data structures they traverse or generate, and come equipped with a battery of algebraic laws, also derivable from type definitions, which are used in program calculations [24, 11, 5, 15]. Some of these laws, the so-called fusion laws, are particularly interesting in p...
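A simplified model of the idea: with the product comonad, the "context" is just an extra value carried alongside the recursion. The sketch below (names `cfold` and `scaleSum` are mine, and the paper's operator is more general) shows a fold whose algebra receives that extra argument:

```haskell
-- A small tree of integers.
data Tree = Leaf Int | Node Tree Tree

-- A fold whose algebras also receive an environment e: this is the
-- shape of functions over the product comonad (e, -), i.e. recursive
-- functions that need an extra argument throughout the computation.
cfold :: (e -> Int -> r) -> (e -> r -> r -> r) -> e -> Tree -> r
cfold leaf _    e (Leaf n)   = leaf e n
cfold leaf node e (Node l r) = node e (cfold leaf node e l)
                                      (cfold leaf node e r)

-- Example: sum the leaves, each scaled by a factor supplied as context.
scaleSum :: Int -> Tree -> Int
scaleSum = cfold (\k n -> k * n) (\_ x y -> x + y)
```

The extra argument `e` is exactly the "strength" being exploited: every recursive call sees the same context value without it being stored in the tree.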
An Analytical Method For Parallelization Of Recursive Functions
, 2001
Abstract

Cited by 7 (0 self)
Programming with parallel skeletons is an attractive framework because it encourages programmers to develop efficient and portable parallel programs. However, extracting parallelism from sequential specifications and constructing efficient parallel programs using the skeletons are still difficult tasks. In this paper, we propose an analytical approach to transforming recursive functions on general recursive data structures into compositions of parallel skeletons. Using static slicing, we have defined a classification of subexpressions based on their data-parallelism. Then, skeleton-based parallel programs are generated from the classification. To extend the scope of parallelization, we have adopted more general parallel skeletons which do not require the associativity of argument functions. In this way, our analytical method can parallelize recursive functions with complex data flows. Keywords: data parallelism, parallelization, functional languages, parallel skeletons, data flow analysis, static slicing
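The skeleton style the abstract refers to can be pictured with a toy sequential model. The names `mapSkel` and `reduceSkel` are illustrative: in a real skeleton framework each has a parallel implementation, and a classic `reduceSkel` requires an associative operator (the paper's more general skeletons relax that requirement):

```haskell
-- Sequential stand-ins for two classic data-parallel skeletons.
mapSkel :: (a -> b) -> [a] -> [b]
mapSkel = map            -- parallel version: apply f to all elements at once

reduceSkel :: (a -> a -> a) -> a -> [a] -> a
reduceSkel = foldr       -- parallel version: split the list, combine halves
                         -- (valid when the operator is associative)

-- A recursive function expressed as a composition of skeletons,
-- the target form of the paper's parallelizing transformation.
sumSquares :: [Int] -> Int
sumSquares = reduceSkel (+) 0 . mapSkel (^ 2)
```

The point of the composition form is that each skeleton carries a known parallel implementation, so the whole pipeline parallelizes without further analysis.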
Streaming Representation-Changers
 LNCS
, 2004
Abstract

Cited by 3 (0 self)
Unfolds generate data structures, and folds consume them.
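A representation-changer in this sense is a fold followed by an unfold (or, as below, an unfold consumed by a fold). This sketch shows the unfused pipeline on decimal digits; a streaming version, which is what the paper develops, would fuse the two so that output is produced before all input is read. The names `toDigits` and `fromDigits` are illustrative:

```haskell
import Data.List (unfoldr)

-- Unfold: generate the list of decimal digits, least significant first.
toDigits :: Int -> [Int]
toDigits = unfoldr step
  where step 0 = Nothing
        step n = Just (n `mod` 10, n `div` 10)

-- Fold: consume a digit list back into a number.
fromDigits :: [Int] -> Int
fromDigits = foldr (\d acc -> d + 10 * acc) 0
```

Composing the two is the identity on non-negative inputs (`fromDigits . toDigits`), which is the correctness condition a streaming fusion of the pair must preserve.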
Towards polytypic parallel programming
, 1998
Abstract

Cited by 2 (2 self)
Data parallelism is currently one of the most successful models for programming massively parallel computers. The central idea is to evaluate a uniform collection of data in parallel by simultaneously manipulating each data element in the collection. Despite many of its promising features, the current approach suffers from two problems. First, the main parallel data structures that most data-parallel languages currently support are restricted to simple collection data types like lists, arrays or similar structures; other useful data structures like trees have not been well addressed. Second, parallel programming relies on a set of parallel primitives that capture parallel skeletons of interest. However, these primitives are not well structured, and efficient parallel programming with these primitives is difficult. In this paper, we propose a polytypic framework for developing efficient parallel programs on most data structures. We show how a set of polytypic parallel primitives can be formally defined for manipulating most data structures, how these primitives can be successfully structured into a uniform recursive definition, and how an efficient combination of primitives can be derived from a naive specification program. Our framework should be significant not only in the development of new parallel algorithms, but also in the construction of parallelizing compilers.
Generating generic functions
 In WGP ’06: Proceedings of the 2006 ACM SIGPLAN workshop on Generic programming
, 2006
"... www.cs.uu.nl ..."
Generic Accumulations for Program Calculation
, 2004
Abstract
Accumulations are recursive functions widely used in the context of functional programming. They maintain intermediate results in additional parameters, called accumulators, that may be used in later stages of computing. In a former work [Par02], a generic recursion operator named afold was presented. Afold makes it possible to write accumulations defined by structural recursion for a wide spectrum of datatypes (lists, trees, etc.). Also, a number of algebraic laws were provided that served as a formal tool for reasoning about programs with accumulations. In this work, we present an extension to afold that allows greater flexibility in the kind of accumulations that may be represented. This extension, in essence, provides the expressive power to allow accumulations to have more than one recursive call in each subterm, with different accumulator values, something that was not previously possible. The extension is conservative, in the sense that we obtain similar algebraic laws for the extended operator. We also present a case study that illustrates the use of the algebraic laws in a calculational setting, and a technique for the improvement of fused programs.
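The extra flexibility described here, several recursive calls in one subterm, each with its own accumulator value, can be illustrated with a small hand-written accumulation. The names `Tree` and `paths` are mine, not the paper's afold operator:

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving (Eq, Show)

-- Relabel every leaf with the path from the root. The two recursive
-- calls in the Node case receive *different* accumulator values
-- (p ++ "L" versus p ++ "R"), the situation the extended afold covers.
paths :: Tree a -> Tree String
paths = go ""
  where go p (Leaf _)   = Leaf p
        go p (Node l r) = Node (go (p ++ "L") l) (go (p ++ "R") r)
```

With a single shared accumulator per subterm this function is not directly expressible; allowing the accumulator to vary per recursive call is exactly the extension the abstract claims.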