Results 1–10 of 14
A discipline of dynamic programming over sequence data
Science of Computer Programming, 2004
Cited by 27 (12 self)
Dynamic programming is a classical programming technique, applicable in a wide variety of domains such as stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing of ambiguous languages, and biosequence analysis. Little methodology has hitherto been available to guide the design of such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and, in nontrivial applications, almost impossible to debug completely. This article introduces a discipline designed to alleviate this problem. We describe an algebraic style of dynamic programming over sequence data. We define its formal framework, based on a combination of grammars and algebras, and including a formalization of Bellman’s Principle. We suggest a language for algorithm design on a convenient level of abstraction. We outline three ways of implementing this language, including an embedding in a lazy functional language. The workings of the new method are illustrated by a series of examples from diverse areas of computer science.
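As a rough illustration of the grammar-and-algebra separation this abstract describes, the Python sketch below (ours, not the paper's ADP language; all names are hypothetical) evaluates every parse of the classic expression 1+2*3*4+5 under interchangeable evaluation algebras, applying each algebra's choice function at every subproblem as Bellman's Principle permits:

```python
from functools import lru_cache

# Tokenized input: the classic ADP running example "1+2*3*4+5".
tokens = ["1", "+", "2", "*", "3", "*", "4", "+", "5"]

# Each evaluation algebra interprets the same grammar rules differently;
# `choice` implements Bellman's Principle at every subproblem.
maximizer = {"num": int, "add": lambda x, y: x + y,
             "mul": lambda x, y: x * y, "choice": max}
minimizer = {"num": int, "add": lambda x, y: x + y,
             "mul": lambda x, y: x * y, "choice": min}
counter = {"num": lambda s: 1, "add": lambda x, y: x * y,
           "mul": lambda x, y: x * y, "choice": sum}  # counts parse trees

def evaluate(alg):
    @lru_cache(maxsize=None)
    def expr(i, j):
        # All parses of tokens[i:j]: a single number, or a split at an operator.
        cands = []
        if j == i + 1:
            cands.append(alg["num"](tokens[i]))
        for k in range(i + 1, j - 1):
            if tokens[k] == "+":
                cands.append(alg["add"](expr(i, k), expr(k + 1, j)))
            elif tokens[k] == "*":
                cands.append(alg["mul"](expr(i, k), expr(k + 1, j)))
        return alg["choice"](cands)
    return expr(0, len(tokens))
```

Swapping the algebra changes the question answered by the unchanged recurrence: the maximizer yields the value of the best parenthesization, the counter the number of parse trees.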
Dynamic programming via static incrementalization
In Proceedings of the 8th European Symposium on Programming, 1999
Cited by 26 (12 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share sub-subproblems. While a straightforward recursive program solves common sub-subproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every sub-subproblem just once, saves the result, reuses it when the sub-subproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits the semantics of both control structures and data structures, and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
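The paper's incrementalization method is more systematic than plain memoization, but the baseline it improves on can be seen in a small Python sketch (ours; function names hypothetical): the same recursion over shared sub-subproblems, first naive and exponential, then with each result cached and reused:

```python
from functools import lru_cache

def lcs_naive(a, b):
    """Straightforward recursion for longest common subsequence length:
    shared sub-subproblems are recomputed, worst-case exponential time."""
    if not a or not b:
        return 0
    if a[-1] == b[-1]:
        return 1 + lcs_naive(a[:-1], b[:-1])
    return max(lcs_naive(a[:-1], b), lcs_naive(a, b[:-1]))

def lcs_dp(a, b):
    """Each sub-subproblem (i, j) is solved once and its result cached,
    giving O(len(a) * len(b)) time."""
    @lru_cache(maxsize=None)
    def go(i, j):
        if i == 0 or j == 0:
            return 0
        if a[i - 1] == b[j - 1]:
            return 1 + go(i - 1, j - 1)
        return max(go(i - 1, j), go(i, j - 1))
    return go(len(a), len(b))
```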
Universal regular path queries
Higher-Order and Symbolic Computation, 2003
Cited by 13 (1 self)
Given are a directed edge-labelled graph G with a distinguished node n0, and a regular expression P which may contain variables. We wish to compute all substitutions φ (of symbols for variables), together with all nodes n, such that all paths n0 → n are in φ(P). We derive an algorithm for this problem using relational algebra, and show how it may be implemented in Prolog. The motivation for the problem derives from a declarative framework for specifying compiler optimisations.

1 Bob Paige and IFIP WG 2.1

Bob Paige was a long-standing member of IFIP Working Group 2.1 on Algorithmic Languages and Calculi. In recent years, the main aim of this group has been to investigate the derivation of algorithms from specifications by program transformation. Already in the mid-eighties, Bob was way ahead of the pack: instead of applying transformational techniques to well-worn examples, he was applying his theories of program transformation to new problems, and discovering new algorithms [16, 48, 52]. The secret of his success lay partly in his insistence on the study of general algorithm design strategies (in particular …
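To make the problem statement concrete, here is a naive Python sketch (ours) of a heavily simplified special case: no variables in the pattern, and an acyclic graph, so the finitely many paths can be enumerated outright. The paper's relational-algebra derivation handles the general problem; this only illustrates the query being asked.

```python
import re

def universal_path_query(edges, n0, pattern):
    """Nodes n such that EVERY labelled path n0 -> n matches `pattern`.
    Assumes the graph is acyclic and `pattern` contains no variables."""
    adj = {}
    for u, lab, v in edges:  # edges are (source, label, target) triples
        adj.setdefault(u, []).append((lab, v))

    words = {}  # node -> label strings of all paths from n0 reaching it

    def dfs(node, word):
        words.setdefault(node, []).append(word)
        for lab, succ in adj.get(node, []):
            dfs(succ, word + lab)

    dfs(n0, "")  # the empty path reaches n0 itself
    regex = re.compile(pattern)
    return {n for n, ws in words.items()
            if all(regex.fullmatch(w) for w in ws)}
```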
Algebraic dynamic programming
In Algebraic Methodology and Software Technology, 9th International Conference (AMAST 2002), 2002
Cited by 9 (5 self)
Dynamic programming is a classic programming technique, applicable in a wide variety of domains like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing with ambiguous grammars, or biosequence analysis. Yet, no methodology is available for designing such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and almost impossible to debug. This article introduces an algebraic style of dynamic programming over sequence data. We define the formal framework, including a formalization of Bellman’s principle, specify an executable specification language, and show how algorithm design decisions and tuning for efficiency can be described on a convenient level of abstraction.
A Relational Approach To Optimization Problems
1996
Cited by 6 (0 self)
The main contribution of this thesis is a study of the dynamic programming and greedy strategies for solving combinatorial optimization problems. The study is carried out in the context of a calculus of relations, and generalises previous work by using a loop operator in the imperative programming style for generating feasible solutions, rather than the fold and unfold operators of the functional programming style. The relationship between fold operators and loop operators is explored, and it is shown how to convert from the former to the latter. This fresh approach provides additional insights into the relationship between dynamic programming and greedy algorithms, and helps to unify previously distinct approaches to solving combinatorial optimization problems. Some of the solutions discovered are new and solve problems which had previously proved difficult. The material is illustrated with a selection of problems and solutions that is a mixture of old and new. Another contribution is the invention of a new calculus, called the graph calculus, which is a useful tool for reasoning in the relational calculus and other non-relational calculi. The graph …
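The thesis works in a calculus of relations, but the fold-to-loop conversion it mentions has a familiar functional analogue; this small Python sketch (ours, with hypothetical names) shows a fold and the loop it converts to, with the fold's accumulator becoming a mutable variable updated per iteration:

```python
from functools import reduce

def sum_of_squares_fold(xs):
    # Functional style: a left fold combining elements with a binary operator.
    return reduce(lambda acc, x: acc + x * x, xs, 0)

def sum_of_squares_loop(xs):
    # Imperative style: the same fold converted to a loop; the accumulator
    # becomes a variable, and each iteration applies the fold's operator.
    acc = 0
    for x in xs:
        acc = acc + x * x
    return acc
```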
Automatic diagnosis of performance problems in database management systems
In ICAC ’05: Proceedings of the Second International Conference on Autonomic Computing, IEEE Computer Society, 2003
Cited by 6 (0 self)
Database performance is directly linked to the allocation of the resources used by the Database Management System (DBMS). The complex relationships between numerous DBMS resources make problem diagnosis and performance tuning complex and time-consuming tasks. Costly Database Administrators (DBAs) are currently needed to initially tune a DBMS for performance and then to retune the DBMS as the database grows and workloads change. Automatic diagnosis and resource management removes the need for DBAs, greatly reducing the cost of ownership for the DBMS. An automated system also allows the DBMS to respond more quickly to changes in the workload, as performance can be monitored 24 hours a day. An automated diagnosis and resource management system allows the DBMS to improve performance for both static and dynamic workloads. One of the key issues in automatic resource management is the capability of the system to diagnose resource problems. Diagnosis of the resource allocation problem is the first step in the process of tuning the resources. In this dissertation, we propose an automatic …
Between Dynamic Programming and Greedy: Data Compression
Programming Research Group, 11 Keble Road, Oxford OX1 3QD, 1995
Cited by 5 (0 self)
The derivation of certain algorithms can be seen as a hybrid form of dynamic programming and the greedy paradigm. We present a generic theorem about such algorithms, and show how it can be applied to the derivation of an algorithm for data compression.

1 Introduction

Dynamic programming is a technique for solving optimisation problems. A typical dynamic programming algorithm proceeds by decomposing the input in all possible ways, recursively solving the subproblems, and combining optimal solutions to subproblems into an optimal solution for the whole problem. The greedy paradigm is also a technique for solving optimisation problems and differs from dynamic programming in that only one decomposition of the input is considered. Such a decomposition is usually chosen to maximise some objective function, and this explains the term 'greedy'. In our earlier work, we have characterised the use of dynamic programming and the greedy paradigm, using the categorical calculus of relations to der...
Canonical greedy algorithms and dynamic programming
Cited by 2 (1 self)
There has been little work on how to construct greedy algorithms to solve new optimization problems efficiently. Instead, greedy algorithms have generally been designed on an ad hoc basis. On the other hand, dynamic programming has a long history of being a useful tool for solving optimization problems, but is often inefficient. We show how dynamic programming can be used to derive efficient greedy algorithms that are optimal for a wide variety of problems. This approach also provides a way to obtain less efficient but optimal solutions to problems where derived greedy algorithms are non-optimal.
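A standard toy instance of this DP-to-greedy derivation (our illustration, not from the paper) is coin change: the dynamic program considers every coin at each step, while the derived greedy commits to the largest coin that fits. On a canonical coin system the two agree; where the greedy is non-optimal, the DP remains a correct (if slower) fallback, matching the abstract's last point.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_coins_dp(amount, coins=(25, 10, 5, 1)):
    """Dynamic programming: try every coin as the next choice, keep the best."""
    if amount == 0:
        return 0
    return 1 + min(min_coins_dp(amount - c, coins) for c in coins if c <= amount)

def min_coins_greedy(amount, coins=(25, 10, 5, 1)):
    """Derived greedy: always take the largest coin that fits.
    Coins are assumed sorted in decreasing order."""
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count
```

On the canonical system (25, 10, 5, 1) both give the same answer; on the non-canonical system (4, 3, 1) the greedy over-pays for amount 6 while the DP still finds 3 + 3.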
Towards a discipline of dynamic programming
2002
Cited by 2 (0 self)
Dynamic programming is a classic programming technique, applicable in a wide variety of domains like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing ambiguous languages, or biosequence analysis. Yet heretofore no methodology was available guiding the design of such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and almost impossible to debug. This article introduces an algebraic style of dynamic programming over sequence data. We define its formal framework, including a formalization of Bellman’s principle. We suggest a language for algorithm design on a convenient level of abstraction. We outline three ways of implementation, including an embedding in a lazy functional language. The workings of the new method are illustrated by a series of examples from diverse areas of computer science.

The power and scope of dynamic programming

Dynamic programming: a world without rules

Dynamic programming (DP) is one of the classic programming paradigms, introduced even before the term Computer Science was firmly established. When applicable, DP often allows one to solve combinatorial optimization problems over a search space of exponential size in polynomial space and time. Bellman’s Principle of Optimality [Bel] belongs to the core knowledge we expect from every …
The Greedy Algorithms Class: Formalization, Synthesis and Generalization
1995
Cited by 1 (0 self)
On the one hand, this report studies the class of Greedy Algorithms in order to find a strategy, as systematic as possible, that can be applied to the specification of a problem to lead to a correct program solving that problem. On the other hand, the standard formalisms underlying Greedy Algorithms (matroid, greedoid and matroid embedding), which are dependent on a particular set type, are generalized to a formalism independent of any data type, based on an algebraic specification setting.
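A minimal concrete instance of the matroid framework mentioned here (our illustration, not from the report) is the greedy algorithm on the graphic matroid, i.e. Kruskal's spanning-tree algorithm: edges are considered in weight order, and each is kept only if it passes the matroid independence test, which for the graphic matroid means it creates no cycle (checked below with union-find).

```python
def kruskal_mst_weight(n, edges):
    """Greedy on the graphic matroid over nodes 0..n-1: sort edges by
    weight and keep each edge whose addition leaves the selection
    independent (acyclic). Matroid theory guarantees this greedy choice
    yields a minimum-weight spanning forest."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):  # edges are (weight, u, v) triples
        ru, rv = find(u), find(v)
        if ru != rv:               # independence test: no cycle created
            parent[ru] = rv
            total += w
    return total
```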