Results 1–10 of 25
Semantic foundations of concurrent constraint programming
, 1990
Abstract

Cited by 259 (26 self)
Concurrent constraint programming [Sar89,SR90] is a simple and powerful model of concurrent computation based on the notions of store-as-constraint and process as information transducer. The store-as-valuation conception of von Neumann computing is replaced by the notion that the store is a constraint (a finite representation of a possibly infinite set of valuations) which provides partial information about the possible values that variables can take. Instead of “reading” and “writing” the values of variables, processes may now ask (check if a constraint is entailed by the store) and tell (augment the store with a new constraint). This is a very general paradigm which subsumes (among others) nondeterminate dataflow and the (concurrent) (constraint) logic programming languages. This paper develops the basic ideas involved in giving a coherent semantic account of these languages. Our first contribution is to give a simple and general formulation of the notion that a constraint system is a system of partial information (à la the information systems of Scott). Parameter passing and hiding are handled by borrowing ideas from the cylindric algebras of Henkin, Monk and Tarski to introduce diagonal elements and “cylindrification” operations (which mimic the projection of information induced by existential quantifiers). The second contribution is to introduce the notion of determinate concurrent constraint programming languages. The combinators treated are ask, tell, parallel composition, hiding and recursion. We present a simple model for this language based on the specification-oriented methodology of [OH86]. The crucial insight is to focus on observing the resting points of a process: those stores in which the process quiesces without producing more information.
It turns out that for the determinate language, the set of resting points of a process completely characterizes its behavior on all inputs, since each process can be identified with a closure operator over the underlying constraint system. Very natural definitions of parallel composition, communication and hiding are given. For example, the parallel composition of two agents can be characterized by just the intersection of the sets of constraints associated with them. We also give a complete axiomatization of equality in this model, present …
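The ask/tell discipline described in the abstract can be illustrated with a minimal sketch. This is our own toy model, not the paper's formal semantics: the store is a growing set of ground facts, entailment is reduced to membership, and the class and method names are illustrative only.

```python
class Store:
    """A monotonically growing constraint store: facts are only ever added."""

    def __init__(self):
        self.constraints = set()

    def tell(self, c):
        """Augment the store with a new constraint."""
        self.constraints.add(c)

    def ask(self, c):
        """Check whether the store entails c (here: simple membership).
        In a real CCP language, a failed ask would suspend the asking
        agent until some other agent tells enough information."""
        return c in self.constraints


store = Store()
store.tell("x > 0")
assert store.ask("x > 0")        # entailed: the asking agent proceeds
assert not store.ask("x > 1")    # not yet entailed: the agent would suspend
store.tell("x > 1")
assert store.ask("x > 1")        # new information unblocks the agent
```

Note how the store only grows, which is what makes each determinate process a closure operator in the paper's model.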
Categorical Logic
 A chapter in the forthcoming Volume VI of the Handbook of Logic in Computer Science
, 1995
Presheaf Models for Concurrency
, 1999
Abstract

Cited by 45 (19 self)
In this dissertation we investigate presheaf models for concurrent computation. Our aim is to provide a systematic treatment of bisimulation for a wide range of concurrent process calculi. Bisimilarity is defined abstractly in terms of open maps, as in the work of Joyal, Nielsen and Winskel. Their work inspired this thesis by suggesting that presheaf categories could provide abstract models for concurrency with a built-in notion of bisimulation. We show how …
A computer-checked verification of Milner's scheduler
 Proceedings of the 2nd International Symposium on Theoretical Aspects of Computer Software, TACS '94
, 1994
Abstract

Cited by 18 (5 self)
We present an equational verification of Milner's scheduler, which we checked by computer. To our knowledge this is the first time that the scheduler has been proof-checked for a general number n of scheduled processes.
1991 Mathematics Subject Classification: 68Q60, 68T15. 1991 CR Categories: F.3.1.
Keywords & Phrases: Coq, µCRL, Milner's scheduler, proof checking, type theory.
Other versions: This report is a more detailed version of [16], brought out at the University of Utrecht. An extended abstract will appear in the LNCS proceedings of TACS '94 (International Symposium on Theoretical Aspects of Computer Software, Japan, April 1994).
Support: The work of the first author took place in the context of EC Basic Research Action 7166, CONCUR 2. The work of the second author was supported by the Netherlands Computer Science Research Foundation (SION) with financial support from the Netherlands Organisation for Scientific Research (NWO).
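The system verified here, Milner's scheduler, is a ring of n "cyclers" that guarantees tasks are started in strict cyclic order while allowing them to terminate independently. The following toy simulation (our own sketch, not the paper's µCRL specification) checks that safety property for a fixed n; the paper's contribution is a machine-checked proof for arbitrary n.

```python
import random

def schedule(n, rounds, rng=None):
    """Simulate a ring of n cyclers; return the order in which tasks start."""
    rng = rng or random.Random(0)
    starts = []
    running = [False] * n       # is task i currently running?
    token = 0                   # which cycler holds the start token
    for _ in range(n * rounds * 4):
        # Running tasks may terminate at arbitrary moments, independently.
        for i in range(n):
            if running[i] and rng.random() < 0.5:
                running[i] = False
        # The token holder starts its task once its previous run has
        # finished, then passes the token to its successor in the ring.
        if not running[token]:
            running[token] = True
            starts.append(token)
            token = (token + 1) % n
    return starts

# Safety property (proved in the paper for all n): starts occur in
# strict cyclic order 0, 1, ..., n-1, 0, 1, ...
order = schedule(3, 4)
assert all(b == (a + 1) % 3 for a, b in zip(order, order[1:]))
```

A simulation like this can only test particular runs; the point of the equational, proof-checked verification is that the property holds for every n and every interleaving.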
Proof nets and the complexity of processing center-embedded constructions
 Journal of Logic, Language and Information
, 1998
Abstract

Cited by 18 (0 self)
Abstract. This paper shows how proof nets can be used to formalize the notion of “incomplete dependency” used in psycholinguistic theories of the unacceptability of center-embedded constructions. Such theories of human language processing can usually be restated in terms of geometrical constraints on proof nets. The paper ends with a discussion of the relationship between these constraints and incremental semantic interpretation.
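The intuition behind "incomplete dependencies" can be illustrated with a crude approximation that is not the paper's proof-net formalization: count how many opened dependencies are simultaneously awaiting completion, here reduced to nesting depth of bracketed clauses. The bracketing and function name are our own illustrative devices.

```python
def max_incomplete(sentence):
    """Maximum number of simultaneously open (incomplete) dependencies,
    approximated as the nesting depth of '[' ... ']' clause brackets."""
    depth = best = 0
    for tok in sentence.split():
        if tok == "[":
            depth += 1          # a dependency is opened and awaits completion
            best = max(best, depth)
        elif tok == "]":
            depth -= 1          # the innermost open dependency is completed
    return best

# Right-branching: each dependency completes before the next one opens.
right = "[ the cat ] [ that chased ] [ the rat ]"
# Center-embedding: dependencies stack up before any of them completes.
center = "[ the cat [ the rat [ the man saw ] chased ] fled ]"
assert max_incomplete(right) == 1
assert max_incomplete(center) == 3
```

The paper's point is that bounds of this kind fall out as geometrical constraints on proof nets rather than as stipulated counting rules.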
Relating Resource-Based Semantics to Categorial Semantics
, 1997
Abstract

Cited by 16 (1 self)
This paper shows that a significant fragment of the glue approach can be reformulated to separate out the meaning composition in a way that is very similar to that of the categorial approaches. Specifically, we show the following:
Type Analysis and Data Structure Selection
, 1991
Abstract

Cited by 15 (0 self)
Schwartz et al. described an optimization to implement built-in abstract types such as sets and maps with efficient data structures. Their transformation rests on the discovery of finite universal sets, called bases, to be used for avoiding data replication and for creating aggregate data structures that implement associative access by simpler cursor or pointer access. The SETL implementation used global analysis, similar to classical dataflow analysis, for typings and for set inclusion and membership relationships to determine bases. However, the optimized data structures selected by this optimization did not include a primitive linked list or array, and all optimized data structures retained some degree of hashing. Hence, this heuristic approach did not guarantee a uniform improvement in performance over the use of default representations. The analysis was complicated by SETL's imperative style, weak typing, and low-level control structures. The implemented optimizer was large (about 20,000 lines …
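The base idea the abstract describes can be sketched as follows. This is our reconstruction for illustration, not SETL's actual representation: once analysis shows that every element of a set is drawn from a finite base, membership becomes a bit-array lookup at the element's base position. In the real optimization elements carry cursors into the base, so even the dictionary lookup below would disappear; the dictionary here merely stands in for that cursor.

```python
class BasedSet:
    """A set over a fixed finite base, represented as a bit array."""

    def __init__(self, base):
        self.index = {v: i for i, v in enumerate(base)}  # element -> base position
        self.bits = [False] * len(base)                  # one membership bit each

    def add(self, v):
        # An array write at the element's base position; no per-element
        # hashing or rehashing is needed once the base is fixed.
        self.bits[self.index[v]] = True

    def __contains__(self, v):
        i = self.index.get(v)
        return i is not None and self.bits[i]


base = ["a", "b", "c", "d"]
s = BasedSet(base)
s.add("b")
assert "b" in s and "c" not in s
```

Sharing one base among several sets also avoids the data replication the abstract mentions: each set stores only bits, while the elements themselves live once, in the base.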
A modular type-checking algorithm for type theory with singleton types and proof irrelevance
 In TLCA '09, volume 5608 of LNCS
, 2009
Strong Normalisation Proofs for Cut Elimination in Gentzen's Sequent Calculi
, 1996
Abstract

Cited by 6 (0 self)
We define a variant LKsp of the Gentzen sequent calculus LK, in which weakenings or contractions can be done in parallel. This modification allows us to interpret a symmetrical system of mix elimination rules, ELKsp, by a finite rewriting system; the termination of this rewriting system can be checked by machine. We also give a self-contained strong normalisation proof by structural induction, and another strong normalisation proof by a strictly monotone subrecursive interpretation; this interpretation gives subrecursive bounds for the length of derivations. Finally, we give a strong normalisation proof by applying orthogonal term rewriting results to a confluent restriction of the mix elimination system ELKsp.