Results 11–20 of 61
Probabilistic Domains
 in Proc. CAAP ’94, LNCS, 1997
Abstract

Cited by 22 (4 self)
We show the equivalence of several different axiomatizations of the notion of (abstract) probabilistic domain in the category of dcpo's and continuous functions. The axiomatization with the richest set of operations provides probabilistic selection among a finite number of possibilities with arbitrary probabilities, whereas the poorest one has binary choice with equal probabilities as the only operation. The remaining theories lie in between; one of them is the theory of binary choice by Graham [1].

1 Introduction

A probabilistic programming language could contain different kinds of language constructs to express probabilistic choice. In a rather poor language, there might be a construct x ⊕ y, whose semantics is a choice between the two possibilities x and y with equal probabilities 1/2. The 'possibilities' x and y can be statements in an imperative language or expressions in a functional language. A quite rich language could contain a construct [p_1 : x_1; ...; p_n : x_n], ...
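The two extremes contrasted here can be illustrated concretely. Below is a minimal Python sketch (not from the paper; all names are illustrative): a fair binary choice as the only primitive, a Bernoulli(p) choice derived from it by lazily comparing coin flips against the binary expansion of p, and n-ary selection [p_1 : x_1; ...; p_n : x_n] reduced to a chain of binary biased choices.

```python
import random

def fair():
    """Fair binary choice: the 'poorest' primitive, x (+) y with probability 1/2."""
    return random.random() < 0.5

def biased(p):
    """Bernoulli(p) built from fair coin flips alone: lazily compare the bits
    of a uniform number against the binary expansion of p."""
    while True:
        p *= 2.0
        p_bit = p >= 1.0
        if p_bit:
            p -= 1.0
        if fair() != p_bit:
            # coin bit 0 vs. p bit 1 means the uniform draw fell below p
            return p_bit

def select(weighted):
    """N-ary selection [p1 : x1; ...; pn : xn] (weights sum to 1),
    reduced to a sequence of binary biased choices."""
    remaining = 1.0
    for p, x in weighted[:-1]:
        if biased(p / remaining):
            return x
        remaining -= p
    return weighted[-1][1]
```

This mirrors the equivalence result in spirit: arbitrary finite probabilistic selection is expressible from fair binary choice alone.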
A term model for CCS
 9th Symposium on Mathematical Foundations of Computer Science, 1980
Abstract

Cited by 21 (3 self)
In a series of papers [Hen2, Mil1, Mil47] Milner and his colleagues have studied a model of parallelism in which concurrent systems communicate by sending and receiving values along lines. Communication is synchronised in that the exchange of values takes place only when the sender and receiver are both ready, and the exchange
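The synchronised exchange described here is essentially a rendezvous. A small Python sketch (illustrative only; `Channel` and its internals are not from the paper) of an unbuffered channel where a send completes only once a receiver has taken the value:

```python
import threading

class Channel:
    """Unbuffered, CCS-style channel: send blocks until a receiver arrives."""

    def __init__(self):
        self._item = None
        self._have_item = threading.Semaphore(0)   # a value is waiting
        self._taken = threading.Semaphore(0)       # the value was consumed
        self._send_slot = threading.Semaphore(1)   # one sender at a time

    def send(self, value):
        self._send_slot.acquire()
        self._item = value
        self._have_item.release()   # announce readiness to exchange
        self._taken.acquire()       # block until the receiver is done
        self._send_slot.release()

    def recv(self):
        self._have_item.acquire()   # block until a sender is ready
        value = self._item
        self._taken.release()       # let the sender proceed
        return value
```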
Erratic Fudgets: A Semantic Theory for an Embedded Coordination Language
 Science of Computer Programming, 2003
Abstract

Cited by 20 (3 self)
The powerful abstraction mechanisms of functional programming languages provide the means to develop domain-specific programming languages within the language itself. Typically, this is realised by designing a set of combinators (higher-order reusable programs) for an application area, and by constructing individual applications by combining and coordinating these combinators. This paper is concerned with a successful example of such an embedded programming language, namely Fudgets, a library of combinators for building graphical user interfaces in the lazy functional language Haskell. The Fudget library has been used to build a number of substantial applications, including a web browser and a proof editor interface to a proof checker for constructive type theory. This paper develops a semantic theory for the nondeterministic stream processors that are at the heart of the Fudget concept. The interaction of two features of stream processors makes the development of such a semantic theory problematic: (i) the sharing of computation provided by the lazy evaluation mechanism of the underlying host language, and (ii) the addition of nondeterministic choice needed to handle the natural concurrency that reactive applications entail. We demonstrate that this combination of features in a higher-order functional language can be tamed to provide a tractable semantic theory and induction principles suitable for reasoning about contextual equivalence of Fudgets.
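As a rough illustration of the stream-processor idea (a deterministic toy, not the Fudgets API; `map_sp`, `filter_sp` and `serial` are hypothetical names), one can model a stream processor as a function from an input stream to an output stream and compose processors serially:

```python
def map_sp(f):
    """A stream processor that applies f to every input message."""
    def sp(inputs):
        for msg in inputs:
            yield f(msg)
    return sp

def filter_sp(keep):
    """A stream processor that passes on only messages satisfying keep."""
    def sp(inputs):
        for msg in inputs:
            if keep(msg):
                yield msg
    return sp

def serial(sp1, sp2):
    """Serial composition: sp2's output stream feeds sp1's input stream
    (right to left, loosely mirroring a Fudget-style pipeline combinator)."""
    return lambda inputs: sp1(sp2(inputs))
```

The semantic difficulties the paper addresses arise precisely when such processors are made nondeterministic and shared under lazy evaluation, which this deterministic sketch deliberately omits.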
Abstract Diagnosis of Functional Programs
 Logic Based Program Synthesis and Transformation – 12th International Workshop, LOPSTR 2002, Revised Selected Papers, volume 2664 of Lecture Notes in Computer Science, 2002
Abstract

Cited by 17 (6 self)
We present a generic scheme for the declarative debugging of functional programs modeled as term rewriting systems. We associate with our programs a semantics based on a (continuous) immediate consequence operator, T_R, which models the (values/normal forms) semantics of R. Then, we develop an effective debugging methodology based on abstract interpretation: by approximating the intended specification of the semantics of R, we derive a finitely terminating bottom-up diagnosis method, which can be used statically. Our debugging framework does not require the user to either provide error symptoms in advance or answer questions concerning program correctness. We have made available a prototypical implementation in Haskell and have tested it on some non-trivial examples.
Semantics of disjunctive programs with monotone aggregates – an operator-based approach
 In: NMR, 2004
Abstract

Cited by 14 (1 self)
All major semantics of normal logic programs and normal logic programs with aggregates can be described as fixpoints of the one-step provability operator or of operators that can be derived from it. No such systematic operator-based approach to semantics of disjunctive logic programs has been developed so far. This paper is the first step in this direction. We formalize the concept of one-step provability for disjunctive logic programs by means of nondeterministic operators on the lattice of interpretations. We establish characterizations of models, minimal models, supported models and stable models of disjunctive logic programs in terms of pre-fixpoints and fixpoints of nondeterministic immediate-consequence operators and their extensions to the four-valued setting. We develop our results for programs in a propositional language extended with monotone aggregate atoms. For the most part, our concepts, results and proof techniques are algebraic, which opens a possibility for further generalizations to the abstract algebraic setting of nondeterministic operators on complete lattices.
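For orientation, the deterministic baseline that this work generalizes, the one-step provability operator T_P for definite (non-disjunctive) programs, can be sketched in a few lines of Python (rules are (head, body) pairs; all names are illustrative):

```python
def tp(program, interp):
    """One-step provability: the heads of all rules whose bodies hold in
    the current interpretation.  A rule is (head, frozenset_of_body_atoms)."""
    return {head for head, body in program if body <= interp}

def least_model(program):
    """Least fixpoint of T_P by Kleene iteration from the empty interpretation."""
    interp = set()
    while True:
        nxt = tp(program, interp)
        if nxt == interp:
            return interp
        interp = nxt
```

The paper's move is to replace this single-valued operator with a nondeterministic one, so that a disjunctive head can be satisfied in several ways.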
Power Domains and Second Order Predicates
 Theoretical Computer Science, 1993
Abstract

Cited by 13 (7 self)
Lower, upper, sandwich, mixed, and convex power domains are isomorphic to domains of second-order predicates mapping predicates on the ground domain to logical values in a semiring. The various power domains differ in the nature of the underlying semiring logic and in logical constraints on the second-order predicates.
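A finite, Boolean-semiring caricature of this correspondence (illustrative only, not the paper's construction): a nondeterministic result set becomes a second-order predicate, with the lower (Hoare) reading as "may" and the upper (Smyth) reading as "must":

```python
def lower(outcomes):
    """Lower (Hoare) view of a finite nondeterministic result as a
    second-order predicate: 'some outcome may satisfy q'."""
    return lambda q: any(q(x) for x in outcomes)

def upper(outcomes):
    """Upper (Smyth) view: 'every outcome must satisfy q'."""
    return lambda q: all(q(x) for x in outcomes)
```

Swapping the Boolean semiring for other semirings (and constraining the predicates) is, roughly, what distinguishes the other power domains.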
The Calculus of Refinements, a Formal Specification Model Based on Inclusions
1994
Abstract

Cited by 12 (1 self)
Programming in the large requires the use of formal specification languages for describing program requirements, and a method to test (automatically) such requirements. These methods can also be applied in other areas, such as complex system modeling. In this thesis we study the theoretical kernel of a formal specification language, named the Calculus of Refinements (COR), based on the use of monotonic inclusion relations. These relations are more general than equality relations; inclusion specifications can therefore be considered a generalization of equational specifications. Moreover, we propose the substitution of the typing relation ":" by an inclusion relation, so the Calculus of Refinements can also be considered a new typing discipline. The theoretical study of the Calculus of Refinements consists of the definition of a denotational semantics and of an operational semantics for it; they are described in the first two parts of the thesis. In the third part we approach the specification of nondeterministic programs by means of inclusions. In the first part of the thesis we describe the Calculus of Refinements as a logic, giving its syntax and a set of inference rules, and defining a class of models based on the class of environment models of the lambda-calculus. We also study a concrete model where expressions are interpreted as order ideals. Such ideal domains have been used to give semantics to polymorphic types; on this model we base the view of the Calculus of Refinements as a typing discipline. In the second part we give an operational semantics based on rewriting techniques. We define a pair of rewriting systems, namely a bi-rewriting system, which implements deduction in inclusion theories. The main idea is to use one of the relations to rewrite terms into smaller terms, and the other to rewrite terms into bigger terms.
Using a bi-rewriting system, it is possible to implement an algorithm that tests whether an inclusion a ⊆ b is deducible in a theory: we rewrite a into bigger terms, and b into smaller terms, until we obtain a common term. We have studied this technique for first-order theories and linear second-order theories (where bindings bind one and only one variable occurrence). In the third part, we propose the use of bi-rewriting systems for the verification of nondeterministic program specifications. We model nondeterministic computation by means of a relation satisfying, among others, the inclusion axioms. The rewriting technique is therefore sound (although not necessarily complete). We prove that, by adding more axioms to the specification, the technique becomes complete as well.
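For ground terms and a finite rule set, the bi-rewriting search can be sketched as follows (a toy Python illustration, not the thesis's procedure; real bi-rewriting works with substitutions and termination orderings):

```python
def closure(term, rules):
    """All terms reachable from term by repeatedly applying the given
    oriented ground rules, each rule a (lhs, rhs) pair."""
    seen = {term}
    frontier = [term]
    while frontier:
        t = frontier.pop()
        for lhs, rhs in rules:
            if t == lhs and rhs not in seen:
                seen.add(rhs)
                frontier.append(rhs)
    return seen

def deducible(a, b, up_rules, down_rules):
    """a <= b is deducible when rewriting a upward and b downward
    reach a common term."""
    return bool(closure(a, up_rules) & closure(b, down_rules))
```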
Version Stamps – Decentralized Version Vectors
 Proc. of the 22nd International Conference on Distributed Computing Systems, 2002
Abstract

Cited by 12 (5 self)
Version vectors and their variants play a central role in update tracking in optimistic distributed systems. Existing mechanisms for a variable number of participants use a mapping from identities to integers, and rely on some form of global configuration or distributed naming protocol to assign unique identifiers to each participant. These approaches are incompatible with replica creation under arbitrary partitions, a typical mode of operation in mobile or poorly connected environments. We present an update tracking mechanism that overcomes this limitation; it departs from the traditional mapping and avoids the use of integer counters, while providing all the functionality of version vectors with respect to version tracking.
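For contrast, the traditional mapping-based version vectors that version stamps replace can be sketched as follows (the standard operations, in illustrative Python):

```python
def tick(vv, replica):
    """Record one local update at the given replica (vectors are dicts)."""
    out = dict(vv)
    out[replica] = out.get(replica, 0) + 1
    return out

def merge(v1, v2):
    """Join two vectors pointwise after synchronisation."""
    return {k: max(v1.get(k, 0), v2.get(k, 0)) for k in set(v1) | set(v2)}

def dominates(v1, v2):
    """v1 has seen every update that v2 has."""
    return all(v1.get(k, 0) >= n for k, n in v2.items())

def concurrent(v1, v2):
    """Neither replica has seen the other's updates: a potential conflict."""
    return not dominates(v1, v2) and not dominates(v2, v1)
```

Note how `tick` presupposes a unique replica identifier; removing that presupposition under arbitrary partitions is exactly the problem the paper addresses.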
Towards a theory of parallel algorithms on concrete data structures
 In Semantics for Concurrency, Leicester, 1990
Abstract

Cited by 11 (6 self)
Building on Kahn and Plotkin’s theory of concrete data structures and sequential functions, Berry and Curien defined an intensional model of sequential algorithms between concrete data structures. In this paper we report on an attempt to develop a similar intensional model of concurrent computation. We present a notion of parallel algorithm between concrete data structures, together with suitable application and currying operations. We define an intensional strictness ordering on parallel algorithms, with respect to which application is well behaved (at first-order types). We define the input-output function computed by a parallel algorithm, and we show that every parallel algorithm computes a continuous function. Thus, a parallel algorithm may be viewed as a continuous function together with a parallel computation strategy. In contrast, a Berry
A domain equation for refinement of partial systems
 Under consideration for publication in Math. Struct. in Comp. Science