Results 1-10 of 43
Domain Theory
Handbook of Logic in Computer Science, 1994
Cited by 461 (20 self)
Abstract: Least fixpoints as meanings of recursive definitions.
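The "least fixpoint" reading of a recursive definition can be made concrete by Kleene iteration: start from the bottom element and apply the defining operator until nothing changes. The sketch below is illustrative (not taken from the handbook chapter); the recursive definition "evens = {0} ∪ {n + 2 | n ∈ evens}" is rendered as a monotone operator `F` on finite sets, truncated at an assumed bound so the iteration terminates.

```python
# Kleene iteration: on a finite domain, the least fixpoint of a monotone
# operator is reached by iterating from the empty set until stable.

def lfp(f, bottom=frozenset()):
    """Iterate f from bottom until a fixed point is reached."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

def F(s, bound=10):
    # One unfolding of the recursive definition evens = {0} U {n+2 | n in evens},
    # truncated at `bound` (an assumption made so the domain stays finite).
    return frozenset({0} | {n + 2 for n in s if n + 2 < bound})

print(sorted(lfp(F)))  # [0, 2, 4, 6, 8] -- the even numbers below 10
```

Each iterate is a better approximation of the recursively defined set; the limit is the least solution of the defining equation.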
Enhanced Iterative-Deepening Search, 1993
Cited by 70 (4 self)
Abstract: Iterative-deepening searches mimic a breadth-first node expansion with a series of depth-first searches that operate with successively extended search horizons. They have been proposed as a simple way to reduce the space complexity of best-first searches like A* from exponential to linear in the search depth. But there ...
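The general scheme the abstract describes can be sketched as follows (this is plain iterative-deepening DFS, not the paper's enhanced algorithm): run a depth-limited depth-first search with horizons 0, 1, 2, ..., so memory stays linear in the depth while the overall node-expansion order mimics breadth-first search. The graph and node names are made up for illustration.

```python
# Iterative-deepening DFS over an explicit successor graph.

def depth_limited(graph, node, goal, limit, path):
    # Depth-first search cut off at `limit` edges from the start.
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
        if found is not None:
            return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    # Re-run the bounded search with successively extended horizons.
    for horizon in range(max_depth + 1):
        result = depth_limited(graph, start, goal, horizon, [start])
        if result is not None:
            return result
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(iddfs(graph, "a", "e"))  # ['a', 'b', 'd', 'e']
```

Because the goal test fires at the shallowest horizon that reaches it, the returned path is shortest in number of edges, at the cost of re-expanding shallow nodes on every iteration.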
Ten Years of Hoare's Logic: A Survey - Part I, 1981
Cited by 66 (2 self)
Abstract: A survey of various results concerning Hoare's approach to proving partial and total correctness of programs is presented. Emphasis is placed on the soundness and completeness issues. Various proof systems for while programs, recursive procedures, local variable declarations, and procedures with parameters, together with the corresponding soundness, completeness, and incompleteness results, are discussed.
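The basic object of Hoare's approach is the triple {P} S {Q}: if precondition P holds before statement S, then postcondition Q holds afterwards. A minimal runtime rendering (an illustrative sketch, not one of the survey's proof systems) turns the pre- and postcondition into assertions around the statement; the program and conditions here are invented for illustration.

```python
# Runtime check of the Hoare triple {x >= 0} y := x * x {y >= x}.

def square(x: int) -> int:
    assert x >= 0          # precondition {x >= 0}, assumed on entry
    y = x * x
    assert y >= x          # postcondition {y >= x}, checked on exit
    return y

print(square(3))  # 9
```

Runtime assertions only witness partial correctness on the executions actually run; Hoare logic proves the triple for all inputs, and total correctness additionally demands termination (trivial here).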
Proving Concurrent Constraint Programs Correct, 1994
Cited by 59 (14 self)
Abstract: We develop a compositional proof system for the partial correctness of concurrent constraint programs. Soundness and (relative) completeness of the system are proved with respect to a denotational semantics based on the notion of strongest postcondition. The strongest postcondition semantics provides a justification of the declarative nature of concurrent constraint programs, since it allows one to view programs as theories in the specification logic.

1 Introduction
Concurrent constraint programming ([24, 25, 26]) (ccp, for short) is a concurrent programming paradigm which derives from replacing the store-as-valuation conception of von Neumann computing by the store-as-constraint model. Its computational model is based on a global store, represented by a constraint, which expresses some partial information on the values of the variables involved in the computation. The concurrent execution of different processes, which interact through the common store, refines the partial information of...
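The store-as-constraint idea can be illustrated with a toy model (an illustrative sketch under simplifying assumptions, not the paper's semantics): the global store maps each variable to the set of values still consistent with the constraints told so far. A `tell` refines the store monotonically; an `ask` succeeds once the queried constraint is entailed by the accumulated partial information.

```python
# Toy store-as-constraint model with monotone tell and entailment ask.

class Store:
    def __init__(self, domains):
        # Partial information: each variable maps to its still-possible values.
        self.domains = {v: set(d) for v, d in domains.items()}

    def tell(self, var, allowed):
        # Adding a constraint can only shrink (refine) the information.
        self.domains[var] &= set(allowed)

    def ask(self, var, allowed):
        # Entailed iff every still-possible value already satisfies it.
        return self.domains[var] <= set(allowed)

store = Store({"x": range(10)})
print(store.ask("x", range(5)))   # False: x < 5 is not yet entailed
store.tell("x", {2, 3})           # another process refines the store
print(store.ask("x", range(5)))   # True: now x < 5 is entailed
```

Because `tell` only intersects, the store evolves monotonically: information, once entailed, stays entailed, which is what lets asks of concurrent processes synchronize on the common store.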
A Comparative Study of Symbolic Algorithms for the Computation of Fair Cycles
Cited by 34 (7 self)
Abstract: Detection of fair cycles is an important task of many model checking algorithms. When the transition system is represented symbolically, the standard approach to fair cycle detection is the one of Emerson and Lei. In the last decade, variants of this algorithm and an alternative method based on strongly connected component decomposition have been proposed. We present a taxonomy of these techniques and compare representatives of each major class on a collection of real-life examples. Our results indicate that the Emerson-Lei procedure is the fastest, but other algorithms tend to generate shorter counterexamples.
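The Emerson-Lei scheme is a nested fixpoint: keep only states that can stay within the current candidate set while revisiting every fairness constraint. The sketch below runs the same fixpoint on explicit sets rather than BDDs (symbolic representation is the whole point of the paper, so this is only an explicit-state illustration with an invented example system).

```python
# Explicit-state rendering of the Emerson-Lei fair-cycle fixpoint.

def pre(edges, targets):
    # States with a transition into `targets`.
    return {u for (u, v) in edges if v in targets}

def eu(edges, phi, psi):
    # E[phi U psi]: least fixpoint of X = psi | (phi & pre(X)).
    x = set(psi)
    while True:
        nxt = x | (phi & pre(edges, x))
        if nxt == x:
            return x
        x = nxt

def emerson_lei(states, edges, fair_sets):
    # Greatest fixpoint: prune states that cannot reach, inside Z,
    # a Z-state of every fairness constraint and continue from there.
    z = set(states)
    while True:
        old = set(z)
        for fair in fair_sets:
            z &= pre(edges, eu(edges, z, z & fair))
        if z == old:
            return z

# 1 <-> 2 is a cycle hitting the fair set {2}; state 3 only loops unfairly.
edges = {(0, 1), (1, 2), (2, 1), (3, 3)}
print(emerson_lei({0, 1, 2, 3}, edges, [{2}]))  # {0, 1, 2}
```

The result is the set of states from which a fair path exists; in a symbolic implementation, `pre` becomes a relational-product image computation on BDDs.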
APHID: Asynchronous Parallel Game-Tree Search, 1999
Cited by 25 (2 self)
Abstract: Most parallel game-tree search approaches use synchronous methods, where the work is concentrated within a specific part of the tree, or at a given search depth. This article shows that asynchronous game-tree search algorithms can be as efficient as or better than synchronous methods in determining the minimax value. APHID, a new asynchronous parallel game-tree search algorithm, is presented. APHID is implemented as a freely available, portable library, making the algorithm easy to integrate into a sequential game-tree searching program. APHID has been added to four programs written by different authors. APHID yields better speedups than synchronous search methods for an Othello and a checkers program, and comparable speedups on two chess programs.
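The quantity all these searchers compute, sequentially or in parallel, is the minimax value of the game tree. A minimal sequential sketch (illustrative only, not APHID itself) with trees given as nested lists and leaves as static evaluations:

```python
# Minimax value of a game tree; max and min levels alternate.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 5], [2, 9]]        # depth-2 tree, root is a max node
print(minimax(tree))           # min(3,5)=3, min(2,9)=2 -> max(3,2) = 3
```

Parallelizing this recursion is what the synchronous/asynchronous distinction is about: synchronous schemes split the work at a fixed frontier and wait, while APHID lets workers search speculative horizons without global synchronization.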
Equational axioms for probabilistic bisimilarity
In Proceedings of 9th AMAST, Lecture Notes in Computer Science, 2002
Cited by 18 (0 self)
Abstract: This paper gives an equational axiomatization of probabilistic bisimulation equivalence for a class of finite-state agents previously studied by Stark and Smolka ((2000) Proof, Language, and Interaction: Essays in Honour of Robin Milner, pp. 571-595). The axiomatization is obtained by extending the general axioms of iteration theories (or iteration algebras), which characterize the equational properties of the fixed point operator on ω-continuous or monotonic functions, with three axiom schemas that express laws that are specific to probabilistic bisimilarity.
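The equivalence being axiomatized can be computed on small systems by partition refinement: two states are probabilistically bisimilar iff they assign equal probability to every equivalence class of successors. The sketch below is illustrative (it is not the paper's axiomatization, and the single-action transition system and state names are invented).

```python
# Toy probabilistic-bisimilarity check by partition refinement.
from fractions import Fraction as F

def refine(trans, states):
    # Start with one block and split by successor-probability signatures
    # until the partition is stable.
    partition = [set(states)]
    while True:
        def signature(s):
            # Probability mass sent into each current block.
            return tuple(sum(trans[s].get(t, F(0)) for t in block)
                         for block in partition)
        new = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new.extend(groups.values())
        if len(new) == len(partition):   # no block split: stable
            return partition
        partition = new

# u and v both reach terminal behaviour with probability 1,
# but split the mass over the (bisimilar) terminal states differently.
trans = {
    "u": {"w": F(1)},
    "v": {"w": F(1, 2), "w2": F(1, 2)},
    "w": {}, "w2": {},
}
print(sorted(sorted(block) for block in refine(trans, trans)))
# [['u', 'v'], ['w', 'w2']]
```

Since w and w2 are themselves bisimilar, u and v send probability 1 into the same class and end up in one block, which is exactly the lumping that the equational laws must respect.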
The Early Search for Tractable Ways of Reasoning About Programs
IEEE Annals of the History of Computing, 2003
Cited by 15 (2 self)
Abstract: This paper traces the important steps in the history, up to around 1990, of research on reasoning about programs. The main focus is on sequential imperative programs, but some comments are made on concurrency. Initially, researchers focussed on ways of verifying that a program satisfies its specification (or that two programs were equivalent). Over time it became clear that post facto verification is only practical for small programs, and attention turned to verification methods which support the development of programs; for larger programs it is necessary to exploit a notion of compositionality. Coping with concurrent algorithms is much more challenging; this and other extensions are considered briefly. The main thesis of this paper is that the idea of reasoning about programs has been around since they were first written; the search has been to find tractable methods.
On the Search for Tractable Ways of Reasoning about Programs, 2001
Cited by 9 (1 self)
Abstract: This paper traces the important steps in the history, up to around 1990, of research on reasoning about programs. The main focus is on sequential imperative programs, but some comments are made on concurrency. Initially, researchers focussed on ways of verifying that a program satisfies its specification (or that two programs were equivalent). Over time it has become clear that post facto verification is only practical for small programs, and attention turned to verification methods which support the development of programs; for larger programs it is necessary to exploit a notion of composability.