Results 1–10 of 17
Foundations for structured programming with GADTs
Conference Record of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2008
Cited by 25 (4 self)
GADTs are at the cutting edge of functional programming and become more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we solve this problem by showing that the standard theory of data types as carriers of initial algebras of functors can be extended from algebraic and nested data types to GADTs. We then use this observation to derive an initial algebra semantics for GADTs, thus ensuring that all of the accumulated knowledge about initial algebras can be brought to bear on them. Next, we use our initial algebra semantics for GADTs to derive expressive and principled tools, analogous to the well-known and widely-used ones for algebraic and nested data types, for reasoning about, programming with, and improving the performance of programs involving GADTs; we christen such a collection of tools for a GADT an initial algebra package. Along the way, we give a constructive demonstration that every GADT can be reduced to one which uses only the equality GADT and existential quantification. Although other such reductions exist in the literature, ours is entirely local, is independent of any particular syntactic presentation of GADTs, and can be implemented in the host language, rather than existing solely as a metatheoretical artifact. The main technical ideas underlying our approach are (i) to modify the notion of a higher-order functor so that GADTs can be seen as carriers of initial algebras of higher-order functors, and (ii) to use left Kan extensions to trade arbitrary GADTs for simpler-but-equivalent ones for which initial algebra semantics can be derived.
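The reduction to the equality GADT can be illustrated with a minimal Haskell sketch; the expression type and the primed constructor names below are our own illustration, not taken from the paper:

```haskell
{-# LANGUAGE GADTs #-}

-- A classic GADT of well-typed expressions.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int

-- The equality GADT: witnesses that two types are equal.
data Equal a b where
  Refl :: Equal a a

-- The same type expressed using only Equal (in general, existential
-- quantification is also needed); every constructor now has the
-- uniform return type Expr' a.
data Expr' a where
  IntE'  :: Equal a Int  -> Int  -> Expr' a
  BoolE' :: Equal a Bool -> Bool -> Expr' a
  Add'   :: Equal a Int  -> Expr' Int -> Expr' Int -> Expr' a

eval :: Expr a -> a
eval (IntE n)  = n
eval (BoolE b) = b
eval (Add x y) = eval x + eval y

-- Matching on Refl refines the type index, just as the GADT
-- constructors did directly.
eval' :: Expr' a -> a
eval' (IntE' Refl n)  = n
eval' (BoolE' Refl b) = b
eval' (Add' Refl x y) = eval' x + eval' y
```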
Initial algebra semantics is enough
Proceedings, Typed Lambda Calculus and Applications, 2007
Cited by 10 (5 self)
Abstract. Initial algebra semantics is a cornerstone of the theory of modern functional programming languages. For each inductive data type, it provides a fold combinator encapsulating structured recursion over data of that type, a Church encoding, a build combinator which constructs data of that type, and a fold/build rule which optimises modular programs by eliminating intermediate data of that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types. Specifically, the folds have been considered too weak to capture commonly occurring patterns of recursion, and no Church encodings, build combinators, or fold/build rules have been given for nested types. This paper overturns this conventional wisdom by solving all of these problems.
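For ordinary lists, the initial algebra package described here amounts to the familiar fold and build combinators and the fold/build rule; a minimal sketch (upTo and sumUpTo are our own examples):

```haskell
{-# LANGUAGE RankNTypes #-}

-- fold encapsulates structured recursion over lists.
fold :: (a -> b -> b) -> b -> [a] -> b
fold c n = go
  where
    go []     = n
    go (x:xs) = c x (go xs)

-- build constructs a list from its Church encoding.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- A producer written via build.
upTo :: Int -> [Int]
upTo m = build (\c e -> let go i | i > m     = e
                                 | otherwise = c i (go (i + 1))
                        in go 1)

-- The fold/build rule, fold c n (build g) = g c n, lets the
-- intermediate list here be eliminated.
sumUpTo :: Int -> Int
sumUpTo m = fold (+) 0 (upTo m)
```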
Asymptotic Improvement of Computations over Free Monads
In Proceedings, Mathematics of Program Construction, 2008
Cited by 8 (0 self)
Abstract. We present a low-effort program transformation to improve the efficiency of computations over free monads in Haskell. The development is calculational and carried out in a generic setting, thus applying to a variety of datatypes. An important aspect of our approach is the utilisation of type class mechanisms to make the transformation as transparent as possible, requiring no restructuring of code at all. There is also no extra support necessary from the compiler (apart from an up-to-date type checker). Despite this simplicity of use, our technique is able to achieve true asymptotic runtime improvements. We demonstrate this by examples for which the complexity is reduced from quadratic to linear.
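The flavour of the transformation can be sketched on leaf-labelled trees; the codensity-style representation below is our own minimal illustration of the underlying idea, not the paper's generic, type-class-based development:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leaf-labelled trees; bind for the corresponding free monad is
-- substitution at the leaves.
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving (Eq, Show)

-- Left-nested binds retraverse structure already built,
-- giving quadratic behaviour.
subst :: Tree a -> (a -> Tree b) -> Tree b
subst (Leaf a)   k = k a
subst (Node l r) k = Node (subst l k) (subst r k)

-- Improved representation: a tree is represented by its own
-- substitution action, so binds merely compose continuations.
newtype CTree a = CTree { runC :: forall b. (a -> Tree b) -> Tree b }

rep :: Tree a -> CTree a
rep t = CTree (subst t)

unrep :: CTree a -> Tree a
unrep c = runC c Leaf

substC :: CTree a -> (a -> CTree b) -> CTree b
substC c k = CTree (\h -> runC c (\a -> runC (k a) h))
```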
Proving correctness via free theorems: The case of the destroy/build rule
In Partial Evaluation and Program Manipulation, Proceedings, 2008
Cited by 6 (4 self)
Free theorems feature prominently in the field of program transformation for pure functional languages such as Haskell. However, somewhat disappointingly, the semantic properties of transformations based on them are often established only very superficially. This paper is intended as a case study showing how to use the existing theoretical foundations and formal methods to improve the situation. To that end, we investigate the correctness issue for a new transformation rule in the short cut fusion family. This destroy/build rule provides a certain reconciliation between the competing foldr/build and destroy/unfoldr approaches to eliminating intermediate lists. Our emphasis is on systematically and rigorously developing the rule’s correctness proof, even while paying attention to semantic aspects like potential non-termination and mixed strict/non-strict evaluation.
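For reference, the two combinator families that the destroy/build rule reconciles can be sketched as follows (sumL and countdown are our own examples; the rule in the paper combines destroy with build rather than with unfoldr):

```haskell
{-# LANGUAGE RankNTypes #-}

-- unfoldr produces a list from a seed, one step at a time.
unfoldr :: (s -> Maybe (a, s)) -> s -> [a]
unfoldr step s = case step s of
  Nothing      -> []
  Just (a, s') -> a : unfoldr step s'

-- destroy consumes a list by abstracting over the
-- pattern-matching step.
destroy :: (forall s. (s -> Maybe (a, s)) -> s -> b) -> [a] -> b
destroy g = g uncons
  where
    uncons []     = Nothing
    uncons (x:xs) = Just (x, xs)

-- destroy/unfoldr rule: destroy g (unfoldr step s) = g step s,
-- eliminating the intermediate list.
sumL :: [Int] -> Int
sumL = destroy (\step ->
         let go acc s = case step s of
               Nothing      -> acc
               Just (a, s') -> go (acc + a) s'
         in go 0)

countdown :: Int -> [Int]
countdown = unfoldr (\i -> if i == 0 then Nothing else Just (i, i - 1))
```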
A family of syntactic logical relations for the semantics of Haskell-like languages
Information and Computation, 2009
Cited by 5 (3 self)
Logical relations are a fundamental and powerful tool for reasoning about programs in languages with parametric polymorphism. Logical relations suitable for reasoning about observational behavior in polymorphic calculi supporting various programming language features have been introduced in recent years. Unfortunately, the calculi studied are typically idealized, and the results obtained for them offer only partial insight into the impact of such features on observational behavior in implemented languages. In this paper we show how to bring reasoning via logical relations to bear more directly on real languages by deriving results that are more pertinent to an intermediate language for the (mostly) lazy functional language Haskell, such as GHC Core. To provide a more fine-grained analysis of program behavior than is possible by reasoning about program equivalence alone, we work with an abstract notion of relating observational behavior of computations which has among its specializations both observational equivalence and observational approximation. We take selective strictness into account, and we consider the impact of different kinds of ...
Algebraic Fusion of Functions with an Accumulating Parameter and Its Improvement
Under consideration for publication in Journal of Functional Programming
Cited by 3 (0 self)
This paper develops a new framework for fusion that is designed for eliminating the intermediate data structures involved in the composition of functions that have one accumulating parameter. The new fusion framework comprises two steps: algebraic fusion and its subsequent improvement process. The key idea in our development is to regard functions with an accumulating parameter as functions that operate over the monoid of data contexts. Algebraic fusion composes each such function with a monoid homomorphism that is derived from the definition of the consumer function to obtain a higher-order function that computes over the monoid of endofunctions. The transformation result may be further refined by an improvement process, which replaces the operation over the monoid of endofunctions (i.e., function closures) with another monoid operation over a monoid structure other than function closures. Using our framework, one can formulate a particular solution to the fusion problem by devising appropriate monoids and monoid homomorphisms. This provides a unified exposition of a variety of fusion methods that have been developed so far in different formalisms. Furthermore, the cleaner formulation makes it possible to argue about some delicate issues on a firm mathematical basis. We demonstrate that algebraic fusion and improvement in the world of CPOs and continuous functions can correctly fuse functions that operate on partial and infinite data structures. We also show that subtle differences in termination behaviors of transformed programs caused by certain different fusion methods can be cleanly explained by corresponding improvement processes that have different underlying monoid structures.
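The key idea can be sketched on a standard example (our own illustration, not the paper's): flattening a tree, where the monoid homomorphism from lists under (++) to endofunctions under composition turns a quadratic producer into a linear accumulating-parameter one:

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Naive producer: (++) makes this quadratic on left-nested trees.
flatten0 :: Tree a -> [a]
flatten0 (Leaf a)   = [a]
flatten0 (Node l r) = flatten0 l ++ flatten0 r

-- The homomorphism sends the list xs to the endofunction (xs ++),
-- mapping (++) to (.) and [] to id; the result computes over the
-- monoid of endofunctions, i.e. an accumulating parameter.
flatten1 :: Tree a -> ([a] -> [a])
flatten1 (Leaf a)   = (a :)
flatten1 (Node l r) = flatten1 l . flatten1 r

-- Recover a plain list by applying to the empty accumulator.
flatten' :: Tree a -> [a]
flatten' t = flatten1 t []
```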
Short Cut Fusion of Recursive Programs with Computational Effects
Symposium on Trends in Functional Programming (TFP'08), 2008
Cited by 3 (1 self)
Fusion is the process of improving the efficiency of modularly constructed programs by transforming them into monolithic equivalents. This paper defines a generalization of the standard build combinator which expresses uniform production of functorial contexts containing data of inductive types. It also proves correct a fusion rule which generalizes the fold/build and fold/buildp rules from the literature, and eliminates intermediate data structures of inductive types without disturbing the contexts in which they are situated. An important special case arises when this context is monadic. When it is, a second rule for fusing combinations of producers and consumers via monad operations, rather than via composition, is also available. We give examples illustrating both rules, and consider their coalgebraic duals as well.
A foundation for GADTs and inductive families: dependent polynomial functor approach
In WGP’11, 2011
Cited by 1 (0 self)
Every Algebraic Datatype (ADT) is characterised as the initial algebra of a polynomial functor on sets. This paper extends the characterisation to the case of more advanced datatypes: Generalised Algebraic Datatypes (GADTs) and Inductive Families. Specifically, we show that GADTs and Inductive Families are characterised as initial algebras of dependent polynomial functors. The theoretical tool we use throughout is an abstract notion of polynomial between sets together with its associated general form of polynomial functor between categories of indexed sets introduced by Gambino and Hyland. In the context of ADTs, this fundamental result is the basis for various generic functional programming techniques. To establish the usefulness of our approach for such developments in the broader context of inductively defined dependent types, we apply the theory to construct zippers for Inductive Families.
Shortcut fusion of monadic programs
Journal of Universal Computer Science, 2008
Cited by 1 (0 self)
Abstract: Functional programs often combine separate parts of the program using intermediate data structures for communicating results. Programs so defined are easier to understand and maintain, but suffer from inefficiency problems due to the generation of those data structures. In response to this problem, some program transformation techniques have been studied with the aim of eliminating the intermediate data structures that arise in function compositions. One of these techniques is known as shortcut fusion. This technique has usually been studied in the context of purely functional programs. In this work we propose an extension of shortcut fusion that is able to eliminate intermediate data structures generated in the presence of monadic effects. The extension to be presented can be uniformly defined for a wide class of data types and monads.
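A minimal sketch of what such an extension can look like for lists and the Maybe monad; buildM, upToM, and the stated rule are our own illustrative rendering, not the paper's actual definitions:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A build combinator for lists produced inside a monad.
buildM :: Monad m => (forall b. (a -> b -> b) -> b -> m b) -> m [a]
buildM g = g (:) []

-- A producer with a monadic effect: fails on a negative bound.
upToM :: Int -> Maybe [Int]
upToM m = buildM (\c e ->
  if m < 0
    then Nothing
    else Just (let go i = if i > m then e else c i (go (i + 1)) in go 1))

-- A monadic fold/build-style rule, fmap (foldr c n) (buildM g) = g c n,
-- would let this consumer run without materialising the list.
sumUpToM :: Int -> Maybe Int
sumUpToM m = fmap (foldr (+) 0) (upToM m)
```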
Minimizing Monad Comprehensions
2011
Cited by 1 (1 self)
Monad comprehensions are by now a mainstay of functional programming languages. In this paper we develop a theory of semantic optimization for monad comprehensions that goes beyond rewriting using the monad laws. A monad-with-zero comprehension do x ← X; y ← Y; if P (x, y) then return F (x, y) else zero can be rewritten, so as to minimize the number of ← bindings, using constraints that are known to hold of X and Y. The soundness of this technique varies from monad to monad, and we characterize its soundness for monads expressible in functional programming languages by generalizing classical results from relational database theory. This technique allows the optimization of a wide class of languages, ranging from large-scale data-parallel languages such as DryadLINQ and Data Parallel Haskell to probabilistic languages such as IBAL and functional-logical languages like Curry.
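The flavour of such a minimization, and its monad-dependence, can be sketched in Haskell's list monad (q1 and q2 are our own illustrative queries, not examples from the paper):

```haskell
import Control.Monad (guard)

-- A self-join: two generators over the same source, joined on equality.
q1 :: [Int] -> [Int]
q1 xs = do { x <- xs; y <- xs; guard (x == y); return x }

-- Minimized to a single generator, eliminating one binding. For a
-- set-like monad this rewrite is always sound; for the list monad it
-- is sound only under a constraint (here: xs has no duplicates),
-- illustrating how soundness varies from monad to monad.
q2 :: [Int] -> [Int]
q2 xs = xs
```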