Results 1–10 of 11
P.: A discipline of dynamic programming over sequence data
 Science of Computer Programming, 2004
Cited by 26 (12 self)
Abstract. Dynamic programming is a classical programming technique, applicable in a wide variety of domains such as stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing of ambiguous languages, and biosequence analysis. Little methodology has hitherto been available to guide the design of such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and, in nontrivial applications, almost impossible to debug completely. This article introduces a discipline designed to alleviate this problem. We describe an algebraic style of dynamic programming over sequence data. We define its formal framework, based on a combination of grammars and algebras, and including a formalization of Bellman’s Principle. We suggest a language used for algorithm design on a convenient level of abstraction. We outline three ways of implementing this language, including an embedding in a lazy functional language. The workings of the …
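The grammar/algebra separation this abstract describes can be illustrated with a small Python sketch (the paper itself works with Haskell-style notation; the edit-distance grammar and the algebra names below are our illustration, not the paper's code). One recursion defines the search space of alignments; interchangeable algebras evaluate it:

```python
from functools import lru_cache

def edit(x, y, algebra):
    """The 'grammar': all ways to align x against y. The algebra supplies
    one function per case plus a choice function h over alternatives."""
    rep, dele, ins, nil, h = algebra

    @lru_cache(maxsize=None)
    def go(i, j):
        alts = []
        if i == len(x) and j == len(y):
            alts.append(nil())                          # empty alignment
        if i < len(x) and j < len(y):
            alts.append(rep(x[i], go(i + 1, j + 1), y[j]))  # (mis)match
        if i < len(x):
            alts.append(dele(x[i], go(i + 1, j)))       # delete from x
        if j < len(y):
            alts.append(ins(go(i, j + 1), y[j]))        # insert from y
        return h(alts)

    return go(0, 0)

# Scoring algebra: minimal number of edit operations (unit costs).
score = (
    lambda a, s, b: s + (0 if a == b else 1),
    lambda a, s: s + 1,
    lambda s, b: s + 1,
    lambda: 0,
    min,          # choice: minimize
)

# Counting algebra: size of the search space, same grammar unchanged.
count = (
    lambda a, s, b: s,
    lambda a, s: s,
    lambda s, b: s,
    lambda: 1,
    sum,          # choice: add up candidates
)
```

Swapping `score` for `count` changes the question answered without touching the recursion, which is the point of the algebraic style.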
Algebraic dynamic programming
 Algebraic Methodology And Software Technology, 9th International Conference (AMAST 2002), 2002
Cited by 9 (5 self)
Abstract. Dynamic programming is a classic programming technique, applicable in a wide variety of domains, like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing with ambiguous grammars, or biosequence analysis. Yet, no methodology is available for designing such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and almost impossible to debug. This article introduces an algebraic style of dynamic programming over sequence data. We define the formal framework including a formalization of Bellman’s principle, specify an executable specification language, and show how algorithm design decisions and tuning for efficiency can be described on a convenient level of abstraction.
Preference-based search in state space graphs
 In Proceedings of AAAI-02, 2002
Cited by 8 (2 self)
The aim of this paper is to introduce a general framework for preference-based search in state space graphs, with a focus on the search for preferred solutions. After introducing a formal definition of preference-based search problems, we introduce the PBA∗ algorithm, a generalization of the A∗ algorithm, designed to process quasi-transitive preference relations defined over the set of solutions. Then, considering a particular subclass of preference structures characterized by two axioms called Weak Preadditivity and Monotonicity, we establish termination, completeness and admissibility results for PBA∗. We also show that previous generalizations of A∗ are particular instances of PBA∗. The interest of our algorithm is illustrated on a preference-based web access problem.
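As a loose Python illustration of preference-based search (not the authors' PBA∗ itself), the sketch below runs a best-first search where the preference relation is Pareto dominance over two-component path costs, a typical quasi-transitive relation; all function names and the graph encoding are our own:

```python
import heapq

def dominates(u, v):
    """u dominates v: no worse in every component, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def preferred_paths(graph, start, goal):
    """Return the non-dominated cost vectors of start-goal paths.
    graph: {node: [(neighbor, (cost1, cost2)), ...]}"""
    frontier = [((0, 0), start)]
    labels = {start: [(0, 0)]}     # non-dominated labels per node
    solutions = []                 # non-dominated goal costs found so far
    while frontier:
        cost, node = heapq.heappop(frontier)
        if any(dominates(s, cost) for s in solutions):
            continue               # a known solution is strictly preferred
        if node == goal:
            solutions = [s for s in solutions if not dominates(cost, s)]
            if cost not in solutions:
                solutions.append(cost)
            continue
        for nxt, w in graph.get(node, []):
            ncost = (cost[0] + w[0], cost[1] + w[1])
            known = labels.setdefault(nxt, [])
            if any(dominates(k, ncost) or k == ncost for k in known):
                continue           # dominated label: prune this extension
            known[:] = [k for k in known if not dominates(ncost, k)] + [ncost]
            heapq.heappush(frontier, (ncost, nxt))
    return sorted(solutions)
```

With a total order in place of dominance (e.g. a scalar cost), the label sets collapse to single values and the search degenerates to ordinary best-first search, mirroring how the paper positions A∗ as a special case.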
A Relational Approach To Optimization Problems
, 1996
Cited by 6 (0 self)
The main contribution of this thesis is a study of the dynamic programming and greedy strategies for solving combinatorial optimization problems. The study is carried out in the context of a calculus of relations, and generalises previous work by using a loop operator in the imperative programming style for generating feasible solutions, rather than the fold and unfold operators of the functional programming style. The relationship between fold operators and loop operators is explored, and it is shown how to convert from the former to the latter. This fresh approach provides additional insights into the relationship between dynamic programming and greedy algorithms, and helps to unify previously distinct approaches to solving combinatorial optimization problems. Some of the solutions discovered are new and solve problems which had previously proved difficult. The material is illustrated with a selection of problems and solutions that is a mixture of old and new. Another contribution is the invention of a new calculus, called the graph calculus, which is a useful tool for reasoning in the relational calculus and other non-relational calculi. The graph …
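The fold-to-loop conversion the thesis studies can be hinted at in Python (an informal analogue, not the thesis's relational calculus): the same optimisation, maximum segment sum, computed once as a fold and once as an imperative loop over the generated feasible extensions.

```python
from functools import reduce

def mss_fold(xs):
    """Maximum segment sum via a fold: the accumulator carries
    (best sum so far, best sum of a segment ending here)."""
    def step(acc, x):
        best, here = acc
        here = max(here + x, x)
        return (max(best, here), here)
    return reduce(step, xs, (0, 0))[0]

def mss_loop(xs):
    """The same computation with an explicit loop: each iteration
    extends the current feasible segment or starts a fresh one."""
    best = here = 0
    for x in xs:
        here = max(here + x, x)
        best = max(best, here)
    return best
```

Both return 0 for all-negative input, i.e. the empty segment is allowed; the loop version is the mechanical unrolling of the fold's accumulator.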
Dynamic Programming: a different perspective
 Algorithmic Languages and Calculi, 1997
Cited by 5 (0 self)
Dynamic programming has long been used as an algorithm design technique, with various mathematical theories proposed to model it. Here we take a different perspective, using a relational calculus to model the problems and solutions using dynamic programming. This approach serves to shed new light on the different styles of dynamic programming, representing them by different search strategies of the tree-like space of partial solutions.

1 INTRODUCTION AND HISTORY
Dynamic programming is an algorithm design technique for solving many different types of optimization problem, applicable to such diverse fields as operations research (Ecker and Kupferschmid, 1988) and neutron transport theory (Bellman, Kagiwada and Kalaba, 1967). The mathematical theory of the subject dates back to 1957, when Richard Bellman (Bellman, 1957) first popularized the idea, producing a mathematical theory to model multistage decision processes and to solve related optimization problems. He was also the first to …
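The "different search strategies over the tree-like space of partial solutions" view can be made concrete with a toy Python example (our illustration, not the paper's relational formulation; the coin denominations are arbitrary): the same recurrence explored depth-first with memoisation, and level-by-level with a table.

```python
from functools import lru_cache

COINS = (1, 4, 5)   # illustrative denominations

@lru_cache(maxsize=None)
def min_coins_topdown(n):
    """Depth-first strategy: expand partial solutions on demand,
    caching each subproblem the first time it is reached."""
    if n == 0:
        return 0
    return 1 + min(min_coins_topdown(n - c) for c in COINS if c <= n)

def min_coins_bottomup(n):
    """Level-by-level strategy: visit partial solutions in increasing
    order of amount, so every dependency is already tabulated."""
    table = [0] + [float("inf")] * n
    for amount in range(1, n + 1):
        for c in COINS:
            if c <= amount:
                table[amount] = min(table[amount], table[amount - c] + 1)
    return table[n]
```

Both traversals visit the same space of partial solutions; only the order (and hence the bookkeeping) differs, which is the distinction the abstract draws between styles of dynamic programming.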
Implementing algebraic dynamic programming in the functional and the imperative paradigm
 In E.A. Boiten and B. Möller, editors, Mathematics of Program Construction, 2002
Cited by 4 (4 self)
Abstract. Algebraic dynamic programming is a new method for developing and reasoning about dynamic programming algorithms. In this approach, so-called yield grammars and evaluation algebras constitute abstract specifications of dynamic programming algorithms. We describe how this theory is put to practice by providing a specification language that can both be embedded in a lazy functional language and translated into an imperative language. Parts of the analysis required for the latter translation also give rise to source-to-source transformations that improve the asymptotic efficiency of the functional implementation. The multi-paradigm system resulting from this approach provides increased programming productivity and effective validation.
Towards a Discipline of Dynamic Programming
Cited by 2 (0 self)
Abstract. Dynamic programming is a classic programming technique, applicable in a wide variety of domains, like stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing ambiguous languages, or biosequence analysis. Yet, heretofore no methodology was available to guide the design of such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and almost impossible to debug. This article introduces an algebraic style of dynamic programming over sequence data. We define its formal framework including a formalization of Bellman’s principle. We suggest a language for algorithm design on a convenient level of abstraction. We outline three ways of implementation, including an embedding in a lazy functional language. The workings of the new method are illustrated by a series of examples from diverse areas of computer science.
Rank-Dependent Probability Weighting in Sequential Decision Problems under Uncertainty
 In Proceedings of the Eighteenth International Conference on Automated Planning and Scheduling (ICAPS 2008)
This paper is devoted to the computation of optimal strategies in automated sequential decision problems. We consider here problems where one seeks a strategy which is optimal for rank-dependent utility (RDU). RDU generalizes von Neumann and Morgenstern’s expected utility (by probability weighting) to encompass rational decision behaviors that EU cannot accommodate. The induced algorithmic problem is, however, more difficult to solve, since the optimality principle no longer holds. More crucially, we prove here that the search for an optimal strategy (w.r.t. RDU) in a decision tree is an NP-hard problem. We propose an implicit enumeration algorithm to compute optimal rank-dependent utility in decision trees. The performance of our algorithm on randomly generated instances and real-world instances of different sizes is presented and discussed.
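For readers unfamiliar with RDU itself, a minimal Python sketch of evaluating a single lottery may help (this is background on the criterion, not the paper's decision-tree algorithm; the function names are our own): outcomes are ranked by utility, and the weighting function is applied to cumulative probabilities of the ranked prefixes.

```python
def rdu(lottery, u, w):
    """Rank-dependent utility of a lottery.
    lottery: list of (outcome, probability) pairs summing to 1;
    u: utility function; w: probability weighting function with
    w(0) = 0 and w(1) = 1."""
    # Rank outcomes from best to worst by utility.
    ranked = sorted(lottery, key=lambda op: u(op[0]), reverse=True)
    total = 0.0
    cum = 0.0   # probability mass of strictly better-ranked outcomes
    for outcome, p in ranked:
        # Each outcome is weighted by the *increment* of w over the
        # cumulative probability, not by its raw probability p.
        total += u(outcome) * (w(cum + p) - w(cum))
        cum += p
    return total
```

With `w` the identity, RDU reduces to expected utility; a convex `w` such as `p**2` underweights good outcomes, modelling the pessimistic behaviors EU cannot capture, and it is this dependence on ranks that breaks the optimality principle the paper discusses.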
Pair Evaluation Algebras in Dynamic Programming
Abstract. Dynamic programming solves combinatorial optimization problems by recursive decomposition and tabulation of intermediate results. The recently developed discipline of algebraic dynamic programming (ADP) helps to make program development and implementation in non-trivial applications much more effective. It raises dynamic programming to a declarative level of abstraction, separates the search space definition from its evaluation, and thus yields more reliable and versatile algorithms than the traditional dynamic programming recurrences. Here we extend this discipline by a pairing operation on evaluation algebras, whose key idea is an asymmetric combination of two different optimization objectives. This leads to a surprising variety of applications without additional programming effort.
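The asymmetric pairing can be demonstrated with a small Python sketch (our illustration, not the paper's notation): values are (cost, count) pairs for edit distance, where the first objective is optimised and the second algebra, counting, is applied only to the co-optimal survivors.

```python
from functools import lru_cache

def edit_pair(x, y):
    """Edit distance paired with the number of co-optimal alignments.
    The choice function filters by the first objective (minimal cost),
    then combines the survivors under the second (summing counts)."""

    def choice(alts):
        best = min(c for c, _ in alts)
        return (best, sum(n for c, n in alts if c == best))

    @lru_cache(maxsize=None)
    def go(i, j):
        if i == len(x) and j == len(y):
            return (0, 1)                       # one empty alignment
        alts = []
        if i < len(x) and j < len(y):           # (mis)match
            c, n = go(i + 1, j + 1)
            alts.append((c + (0 if x[i] == y[j] else 1), n))
        if i < len(x):                          # delete from x
            c, n = go(i + 1, j)
            alts.append((c + 1, n))
        if j < len(y):                          # insert from y
            c, n = go(i, j + 1)
            alts.append((c + 1, n))
        return choice(alts)

    return go(0, 0)
```

Note the asymmetry: swapping the roles of the two objectives (count first, then minimise) would answer a different, useless question, which is why the pairing is an ordered operation on algebras.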