Results 1–8 of 8
From Dynamic Programming to Greedy Algorithms
Formal Program Development, volume 755 of Lecture Notes in Computer Science, 1992
Cited by 14 (3 self)

Abstract
A calculus of relations is used to reason about specifications and algorithms for optimisation problems. It is shown how certain greedy algorithms can be seen as refinements of dynamic programming. Throughout, the maximum lateness problem is used as a motivating example.

1 Introduction

An optimisation problem can be solved by dynamic programming if an optimal solution is composed of optimal solutions to subproblems. This property, which is known as the principle of optimality, can be formalised as a monotonicity condition. If the principle of optimality is satisfied, one can compute a solution by decomposing the input in all possible ways, recursively solving the subproblems, and then combining optimal solutions to subproblems into an optimal solution for the whole problem. By contrast, a greedy algorithm considers only one decomposition of the argument. This decomposition is usually unbalanced, and greedy in the sense that at each step the algorithm reduces the input as much as poss...
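As a simplified illustration of the contrast described above (not the paper's relational derivation), the following Python sketch solves the maximum lateness problem both ways: exhaustively, by considering every ordering of the jobs, and greedily, by the earliest-due-date rule. The job data is invented for the example.

```python
from itertools import permutations

# Each job is (processing_time, due_date); data invented for illustration.
jobs = [(3, 6), (2, 13), (1, 9), (4, 9), (3, 8)]

def max_lateness(order):
    """Maximum lateness of a schedule: completion time minus due date."""
    t, worst = 0, float("-inf")
    for p, d in order:
        t += p
        worst = max(worst, t - d)
    return worst

# Dynamic-programming flavour: consider every decomposition (ordering).
best_exhaustive = min(max_lateness(order) for order in permutations(jobs))

# Greedy flavour: one decomposition only, by earliest due date.
greedy = max_lateness(sorted(jobs, key=lambda job: job[1]))

assert greedy == best_exhaustive
```

For single-machine maximum lateness, the greedy earliest-due-date order is known to be optimal, which is why the two answers agree.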
Universal regular path queries
Higher-Order and Symbolic Computation, 2003
Cited by 12 (1 self)

Abstract
Given are a directed edge-labelled graph G with a distinguished node n0, and a regular expression P which may contain variables. We wish to compute all substitutions φ (of symbols for variables), together with all nodes n such that all paths n0 → n are in φ(P). We derive an algorithm for this problem using relational algebra, and show how it may be implemented in Prolog. The motivation for the problem derives from a declarative framework for specifying compiler optimisations.

1 Bob Paige and IFIP WG 2.1

Bob Paige was a long-standing member of IFIP Working Group 2.1 on Algorithmic Languages and Calculi. In recent years, the main aim of this group has been to investigate the derivation of algorithms from specifications by program transformation. Already in the mid-eighties, Bob was way ahead of the pack: instead of applying transformational techniques to well-worn examples, he was applying his theories of program transformation to new problems, and discovering new algorithms [16, 48, 52]. The secret of his success lay partly in his insistence on the study of general algorithm design strategies (in particular...
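A much-simplified sketch of the problem statement, ignoring variables in P and assuming an acyclic graph so that the set of paths is finite: enumerate the label string of every path from n0 and keep the nodes whose paths all match a fixed regular expression. The graph and names here are invented.

```python
import re

# A small acyclic edge-labelled graph: node -> [(label, successor)].
graph = {
    "n0": [("a", "n1"), ("b", "n2")],
    "n1": [("b", "n3")],
    "n2": [("b", "n3")],
    "n3": [],
}

def path_labels(node, prefix=""):
    """Yield (node, label string) for every path starting at `node`.

    Assumes the graph is acyclic; with cycles the path set is infinite
    and a product construction with a DFA would be needed instead.
    """
    yield node, prefix
    for label, succ in graph[node]:
        yield from path_labels(succ, prefix + label)

def nodes_where_all_paths_match(start, pattern):
    """Nodes n such that every path start -> n has labels in L(pattern)."""
    regex = re.compile(pattern)
    paths = {}
    for node, labels in path_labels(start):
        paths.setdefault(node, []).append(labels)
    return {n for n, ls in paths.items() if all(regex.fullmatch(s) for s in ls)}
```

For instance, with the graph above, only n3 has the property that every path from n0 matches `[ab]b`.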
Solving Optimisation Problems with Catamorphisms
1992
Cited by 11 (3 self)

Abstract
This paper contributes to an ongoing effort to construct a calculus for deriving programs for optimisation problems. The calculus is built around the notion of initial data types and catamorphisms, which are homomorphisms on initial data types. It is shown how certain optimisation problems, which are specified in terms of a relational catamorphism, can be solved by means of a functional catamorphism. The result is illustrated with a derivation of Kruskal's algorithm for finding a minimum spanning tree in a connected graph.

1 Introduction

Efficient algorithms for solving optimisation problems can sometimes be expressed as homomorphisms on initial data types. Such homomorphisms, which correspond to the familiar fold operators in functional programming, are called catamorphisms. In this paper, we give conditions under which an optimisation problem can be solved by a catamorphism. Our results are a natural generalisation of earlier work by Jeuring [5, 6], who considered the same problem...
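The view of Kruskal's algorithm as a fold over the weight-ordered edge list can be sketched directly in Python. This is a functional rendering for illustration, not the paper's relational derivation; the union-find bookkeeping is a standard implementation choice, not part of the catamorphism structure.

```python
from functools import reduce

def kruskal(n, edges):
    """Minimum spanning tree of an n-node graph, edges given as (w, u, v)."""
    parent = list(range(n))

    def find(x):
        """Union-find root lookup with path halving."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def step(tree, edge):
        """One fold step: keep the edge unless it closes a cycle."""
        w, u, v = edge
        ru, rv = find(u), find(v)
        if ru == rv:
            return tree          # cycle: discard the edge
        parent[ru] = rv          # merge the two components
        return tree + [edge]

    # The algorithm is literally a fold (catamorphism) over the sorted edges.
    return reduce(step, sorted(edges), [])
```

On a small four-node example the fold keeps exactly the three cheapest cycle-free edges.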
Generic Programming With Relations and Functors
Journal of Functional Programming, 1999
Cited by 9 (6 self)

Abstract
This paper explores the idea of generic programming in which programs are parameterised by data types. Part of the constructive theory of lists, specifically the part dealing with properties of segments, is generalised in two ways: from lists to arbitrary inductive data types, and from functions to relations. The new theory is used to solve a generic problem about segments.

1 Introduction

To what extent is it possible to construct programs without knowing exactly what data types are involved? At first sight this may seem a strange question, but consider the case of pattern matching. Over lists, this problem can be formulated in terms of two strings, a pattern and a text; the object is to determine if and where the pattern occurs as a segment of the text. Now, pattern matching can be generalised to other data types, including arrays and trees of various kinds; the essential step is to be able to define the notion of `segment' in these types. So the intriguing question arises: can one...
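For lists, the notion of `segment' appealed to above is easy to state concretely. A small Python sketch of pattern matching as segment occurrence, illustrative only: the paper's contribution is generalising this beyond lists, which is not shown here.

```python
def occurs_as_segment(pattern, text):
    """Does `pattern` occur as a contiguous segment of `text`?

    Works for any sequences whose elements support equality, so the same
    definition covers strings, lists of tokens, and so on.
    """
    m, n = len(pattern), len(text)
    return any(list(text[i:i + m]) == list(pattern) for i in range(n - m + 1))
```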
A Relational Approach To Optimization Problems
1996
Cited by 6 (0 self)

Abstract
The main contribution of this thesis is a study of the dynamic programming and greedy strategies for solving combinatorial optimization problems. The study is carried out in the context of a calculus of relations, and generalises previous work by using a loop operator in the imperative programming style for generating feasible solutions, rather than the fold and unfold operators of the functional programming style. The relationship between fold operators and loop operators is explored, and it is shown how to convert from the former to the latter. This fresh approach provides additional insights into the relationship between dynamic programming and greedy algorithms, and helps to unify previously distinct approaches to solving combinatorial optimization problems. Some of the solutions discovered are new and solve problems which had previously proved difficult. The material is illustrated with a selection of problems and solutions that is a mixture of old and new. Another contribution is the invention of a new calculus, called the graph calculus, which is a useful tool for reasoning in the relational calculus and other non-relational calculi. The graph...
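The fold-to-loop conversion mentioned above has a familiar small-scale analogue in everyday programming. A Python illustration (not the thesis's relational loop operator) showing the same computation written once as a fold and once as an imperative loop that threads the accumulator explicitly:

```python
from functools import reduce

def fold_sum_of_squares(xs):
    """Fold (functional style): a catamorphism over the list."""
    return reduce(lambda acc, x: acc + x * x, xs, 0)

def loop_sum_of_squares(xs):
    """The same computation as an imperative loop."""
    acc = 0
    for x in xs:        # the loop threads the accumulator explicitly
        acc = acc + x * x
    return acc
```

The two definitions compute the same function; the conversion is mechanical because the loop's accumulator plays the role of the fold's partial result.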
Dynamic Programming: a different perspective
Algorithmic Languages and Calculi, 1997
Cited by 5 (0 self)

Abstract
Dynamic programming has long been used as an algorithm design technique, with various mathematical theories proposed to model it. Here we take a different perspective, using a relational calculus to model the problems and solutions using dynamic programming. This approach serves to shed new light on the different styles of dynamic programming, representing them by different search strategies of the tree-like space of partial solutions.

1 INTRODUCTION AND HISTORY

Dynamic programming is an algorithm design technique for solving many different types of optimization problem, applicable to such diverse fields as operations research (Ecker and Kupferschmid, 1988) and neutron transport theory (Bellman, Kagiwada and Kalaba, 1967). The mathematical theory of the subject dates back to 1957, when Richard Bellman (Bellman, 1957) first popularized the idea, producing a mathematical theory to model multistage decision processes and to solve related optimization problems. He was also the first to i...
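The idea that different styles of dynamic programming are different search strategies over the same space of partial solutions can be illustrated with a small change-making recurrence in Python (an invented example, not taken from the paper): top-down memoised recursion and bottom-up tabulation traverse the same recurrence in different orders.

```python
from functools import lru_cache

COINS = (1, 4, 5)  # invented coin denominations for the example

@lru_cache(maxsize=None)
def min_coins_topdown(amount):
    """Depth-first search of the tree of partial solutions, with memoisation."""
    if amount == 0:
        return 0
    return 1 + min(min_coins_topdown(amount - c) for c in COINS if c <= amount)

def min_coins_bottomup(amount):
    """The same recurrence, traversed bottom-up (smallest subproblems first)."""
    best = [0] * (amount + 1)
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in COINS if c <= a)
    return best[amount]
```

Both strategies visit the same subproblems and agree on every input; only the order of exploration differs.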
Between Dynamic Programming and Greedy: Data Compression
Programming Research Group, 11 Keble Road, Oxford OX1 3QD, 1995
Cited by 5 (0 self)

Abstract
The derivation of certain algorithms can be seen as a hybrid form of dynamic programming and the greedy paradigm. We present a generic theorem about such algorithms, and show how it can be applied to the derivation of an algorithm for data compression.

1 Introduction

Dynamic programming is a technique for solving optimisation problems. A typical dynamic programming algorithm proceeds by decomposing the input in all possible ways, recursively solving the subproblems, and combining optimal solutions to subproblems into an optimal solution for the whole problem. The greedy paradigm is also a technique for solving optimisation problems and differs from dynamic programming in that only one decomposition of the input is considered. Such a decomposition is usually chosen to maximise some objective function, and this explains the term `greedy'. In our earlier work, we have characterised the use of dynamic programming and the greedy paradigm, using the categorical calculus of relations to der...
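The abstract does not name the specific compression algorithm that is derived. As a standard example of a greedy algorithm in data compression, offered here purely for illustration and not as the paper's derivation, a Huffman-style construction repeatedly merges the two lightest subtrees:

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Greedy Huffman construction: repeatedly merge the two lightest trees.

    Returns a mapping from symbol to code length. Each heap entry carries a
    tie-breaking index so that dicts are never compared.
    """
    heap = [(freq, i, {sym: 0}) for i, (sym, freq) in
            enumerate(sorted(Counter(text).items()))]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # the greedy choice: two smallest
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]
```

Frequent symbols end up with short codes: in "aaaabbc" the symbol `a` gets a one-bit code while `b` and `c` get two bits each.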
A Relational Derivation of a Functional Program
In Proc. STOP Summer School on Constructive Algorithmics, Ameland, The Netherlands, 1992
Cited by 3 (1 self)

Abstract
This article is an introduction to the use of relational calculi in deriving programs. We present a derivation in a relational language of a functional program that adds one bit to a binary number. The resulting program is unsurprising, being the standard `column of half-adders', but the derivation illustrates a number of points about working with relations rather than functions.

1 Ruby

Our derivation is made within the relational calculi developed by Jones and Sheeran [14, 15]. Their language, called Ruby, is designed specifically for the derivation of `hardware-like' programs that denote finite networks of simple primitives. Ruby has been used to derive a number of different kinds of hardware-like programs [13, 22, 23, 16]. Programs in Ruby are built piecewise from smaller programs using a simple set of combining forms. Ruby is not meant as a programming language in its own right, but as a tool for developing and explaining algorithms. Fundamental to Ruby is the use of terse not...
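The `column of half-adders' that the derivation arrives at can be rendered as a short behavioural model in Python (Ruby itself is a relational language for describing circuits; this sketch only models the circuit's behaviour, not its relational structure):

```python
def half_adder(a, b):
    """Sum and carry bits of two input bits."""
    return a ^ b, a & b

def increment(bits):
    """Add one to a binary number (least significant bit first)
    by rippling a carry through a column of half-adders."""
    carry, out = 1, []
    for bit in bits:
        s, carry = half_adder(bit, carry)
        out.append(s)
    return out + [carry] if carry else out
```

For example, incrementing 1011 (eleven, written least significant bit first as `[1, 1, 0, 1]`) ripples the carry through two half-adders and yields twelve.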