Results 1 - 10 of 30
A Categorical Manifesto
 Mathematical Structures in Computer Science
, 1991
Abstract

Cited by 109 (5 self)
This paper tries to explain why and how category theory is useful in computing science, by giving guidelines for applying seven basic categorical concepts: category, functor, natural transformation, limit, adjoint, colimit and comma category. Some examples, intuition, and references are given for each concept, but completeness is not attempted. Some additional categorical concepts and some suggestions for further research are also mentioned. The paper concludes with some philosophical discussion.

0 Introduction. This paper tries to explain why category theory is useful in computing science. The basic answer is that computing science is a young field that is growing rapidly, is poorly organised, and needs all the help it can get, and that category theory can provide help with at least the following:
• Formulating definitions and theories. In computing science, it is often more difficult to formulate concepts and results than to give a proof. The seven guidelines of this paper can h...
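The "functor" concept listed in this abstract can be illustrated by a small sketch (illustrative only, not from the paper; the names `fmap_list` and `compose` are ours): a functor maps objects (here, types) and arrows (functions) while preserving identity and composition.

```python
def fmap_list(f, xs):
    """The list functor: lifts f : A -> B to a function [A] -> [B]."""
    return [f(x) for x in xs]

def compose(g, f):
    """Arrow composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

# The two functor laws, checked on an example:
xs = [1, 2, 3]
identity = lambda x: x
f = lambda x: x + 1
g = lambda x: x * 2

# 1. Identity is preserved: fmap id = id
assert fmap_list(identity, xs) == xs
# 2. Composition is preserved: fmap (g . f) = fmap g . fmap f
assert fmap_list(compose(g, f), xs) == fmap_list(g, fmap_list(f, xs))
```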
Semantics of Types for Mutable State
, 2004
Abstract

Cited by 61 (5 self)
Proof-carrying code (PCC) is a framework for mechanically verifying the safety of machine language programs. A program that is successfully verified by a PCC system is guaranteed to be safe to execute, but this safety guarantee is contingent upon the correctness of various trusted components. For instance, in traditional PCC systems the trusted computing base includes a large set of low-level typing rules. Foundational PCC systems seek to minimize the size of the trusted computing base. In particular, they eliminate the need to trust complex, low-level type systems by providing machine-checkable proofs of type soundness for real machine languages. In this thesis, I demonstrate the use of logical relations for proving the soundness of type systems for mutable state. Specifically, I focus on type systems that ensure the safe allocation, update, and reuse of memory. For each type in the language, I define logical relations that explain the meaning of the type in terms of the operational semantics of the language. Using this model of types, I prove each typing rule as a lemma. The major contribution is a model of System F with general references — that is, mutable cells that can hold values of any closed type including other references, functions, recursive types, and impredicative quantified types. The model is based on ideas from both possible worlds and the indexed model of Appel and McAllester. I show how the model of mutable references is encoded in higher-order logic. I also show how to construct an indexed possible-worlds model for a von Neumann machine. The latter is used in the Princeton Foundational PCC system to prove type safety for a full-fledged low-level typed assembly language. Finally, I present a semantic model for a region calculus that supports type-invariant references as well as memory reuse.
A Nested-Graph Model for the Representation and Manipulation of Complex Objects
 ACM Transactions on Information Systems
, 1994
Abstract

Cited by 37 (4 self)
In this paper we report upon a graph-based approach to such an integration. Our use of graphs has two key advantages: firstly, graphs are formally defined, well-understood structures; secondly, it is widely accepted that graph-based formalisms considerably enhance the usability of complex systems [19]. Graphs have been used in conjunction with a number of conventional data models, for example the hierarchical and network models [35], the entity-relationship model [9] and a recent extension thereof for complex objects [27], and various semantic data models [16, 20, 31]. Graphs or hypergraphs [6] have also been used more recently in [12, 17, 23, 25, 33, 36] as a data modelling tool in their own right. We give a comparison between this recent work and our own approach in Section 4 of the paper. Directed graphs have also been the foundation of Hypertext databases [11, 33]. Such databases are graphs consisting of nodes which refer to units of stored information (typically text) and of named links. Each link connects two nodes, the "source" and the "destination". Links are traversed either forwards (from source to destination) or backwards (from destination to source). The process of traversing named links and examining the text associated with nodes is called ...
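The hypertext-database structure this abstract describes — nodes holding units of stored text, and named links traversed forwards (source to destination) or backwards — can be sketched as follows. This is an illustrative sketch only; the class and method names are ours, not the paper's.

```python
class HypertextDB:
    """Nodes refer to units of stored text; named links connect a
    "source" node to a "destination" node."""

    def __init__(self):
        self.text = {}    # node id -> stored text
        self.links = []   # (link name, source, destination)

    def add_node(self, node, text):
        self.text[node] = text

    def add_link(self, name, source, destination):
        self.links.append((name, source, destination))

    def forward(self, node):
        """Traverse named links forwards: from source to destination."""
        return [(name, dst) for (name, src, dst) in self.links if src == node]

    def backward(self, node):
        """Traverse named links backwards: from destination to source."""
        return [(name, src) for (name, src, dst) in self.links if dst == node]

db = HypertextDB()
db.add_node("a", "Introduction")
db.add_node("b", "Details")
db.add_link("see-also", "a", "b")
assert db.forward("a") == [("see-also", "b")]
assert db.backward("b") == [("see-also", "a")]
```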
Feature Logics
 Handbook of Logic and Language, edited by van Benthem & ter Meulen
, 1994
Abstract

Cited by 34 (0 self)
Feature logics form a class of specialized logics which have proven especially useful in classifying and constraining the linguistic objects known as feature structures. Linguistically, these structures have their origin in the work of the Prague school of linguistics, followed by the work of Chomsky and Halle in The Sound Pattern of English [16]. Feature structures have been reinvented several times by computer scientists: in the theory of data structures, where they are known as record structures, in artificial intelligence, where they are known as frame or slot-value structures, in the theory of data bases, where they are called "complex objects", and in computati...
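The record/slot-value view of feature structures mentioned in this abstract can be sketched as nested finite maps from feature names to atomic values or further feature structures. A hypothetical sketch (the example features and the helper `get_path` are ours):

```python
# A feature structure as a nested map: each feature (slot) holds either
# an atomic value or another feature structure.
agreement = {"number": "singular", "person": "third"}
np = {"category": "NP", "agreement": agreement}

def get_path(fs, path):
    """Follow a sequence of feature names through a nested structure."""
    for feature in path:
        fs = fs[feature]
    return fs

assert get_path(np, ["category"]) == "NP"
assert get_path(np, ["agreement", "number"]) == "singular"
```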
A Stratified Semantics of General References Embeddable in Higher-Order Logic (Extended Abstract)
, 2002
Abstract

Cited by 31 (8 self)
Amal J. Ahmed, Andrew W. Appel, Roberto Virga (Princeton University, {amal,appel,rvirga}@cs.princeton.edu)

We demonstrate a semantic model of general references, that is, mutable memory cells that may contain values of any (statically-checked) closed type, including other references. Our model is in terms of execution sequences on a von Neumann machine ...
Processes as formal power series: a coinductive approach to denotational semantics
 TCS
, 2006
Abstract

Cited by 11 (1 self)
We characterize must testing equivalence on CSP in terms of the unique homomorphism from the Moore automaton of CSP processes to the final Moore automaton of partial formal power series over a certain semiring. The final automaton is then turned into a CSP-algebra: operators and fixpoints are defined, respectively, via behavioural differential equations and simulation relations. This structure is then shown to be preserved by the final homomorphism. As a result, we obtain a fully abstract compositional model of CSP phrased in purely set-theoretical terms.
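The "behaviour as formal power series" idea behind this abstract can be sketched concretely: a Moore automaton gives each state an output and input-indexed transitions, and the unique map into the final Moore automaton sends a state to the power series it realises, i.e. the function from input words to outputs. A minimal sketch under our own names and a toy automaton (not the paper's CSP construction):

```python
def behaviour(output, step, state):
    """The unique homomorphism into the final Moore automaton: a state's
    behaviour is the function (input word -> output) obtained by running
    the word and reading the output of the state reached."""
    def series(word):
        s = state
        for a in word:
            s = step(s, a)
        return output(s)
    return series

# A two-state Moore automaton over the alphabet {"a"}: states 0 and 1,
# output is the state itself, and every input toggles the state.
output = lambda s: s
step = lambda s, a: 1 - s

beh = behaviour(output, step, 0)
assert beh("") == 0     # empty word: output of the initial state
assert beh("a") == 1    # one step toggles to state 1
assert beh("aa") == 0   # two steps toggle back
```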
Defining a Formal Coalgebraic Semantics for the Rosetta Specification Language
 JOURNAL OF UNIVERSAL COMPUTER SCIENCE
, 2003
Abstract

Cited by 5 (3 self)
Rosetta is a systems-level design language that allows algebraic specification of systems through facets. The usual approach to formally describe a specification is to define an algebra that satisfies the specification. Although it is possible to formally describe Rosetta facets with the use of algebras, we choose to use the dual of algebra, i.e. coalgebra, to do so. Coalgebras are particularly suited for describing state-based systems. This makes formally defining state-based Rosetta quite straightforward. For non-state-based Rosetta, the formalization is not as direct, but can still be done with coalgebras by focusing on the behaviors of the systems specified. We use denotational semantics to map Rosetta syntactic constructs into a language understood by the coalgebras.
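Why coalgebras suit state-based systems, as this abstract notes, can be shown in miniature: a coalgebra is a map from states to observations plus next states, and behaviour arises by repeatedly unfolding it. A hypothetical sketch (the names `unfold` and `counter` are ours, unrelated to Rosetta):

```python
def unfold(coalg, state, n):
    """Observe the first n outputs of the system from `state`, where
    coalg : S -> (observation, next state) is the coalgebra."""
    outputs = []
    for _ in range(n):
        observation, state = coalg(state)
        outputs.append(observation)
    return outputs

# A counter as a coalgebra: observe the current value, step to successor.
counter = lambda s: (s, s + 1)
assert unfold(counter, 0, 4) == [0, 1, 2, 3]
```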
Models of non-wellfounded sets via an indexed final coalgebra theorem
 J. Symbolic Logic
A Strongly Normalising Curry-Howard Correspondence for IZF Set Theory
Abstract

Cited by 3 (0 self)
We propose a method for realising the proofs of Intuitionistic Zermelo-Fraenkel set theory (IZF) by strongly normalising λ-terms. This method relies on the introduction of a Curry-style type theory extended with specific subtyping principles, which is then used as a low-level language to interpret IZF via a representation of sets as pointed graphs inspired by Aczel's hyperset theory. As a consequence, we refine a classical result of Myhill and Friedman by showing how a strongly normalising λ-term that computes a function of type N → N can be extracted from the proof of its existence in IZF.
Cut elimination for Zermelo’s set theory
, 2006
Abstract

Cited by 3 (0 self)
We show how to express intuitionistic Zermelo set theory in deduction modulo (i.e. by replacing its axioms by rewrite rules) in such a way that the corresponding notion of proof enjoys the normalization property. To do so, we first rephrase set theory as a theory of pointed graphs (following a paradigm due to P. Aczel) by interpreting set-theoretic equality as bisimilarity, and show that in this setting, Zermelo's axioms can be decomposed into graph-theoretic primitives that can be turned into rewrite rules. We then show that the theory we obtain in deduction modulo is a conservative extension of (a minor extension of) Zermelo set theory. Finally, we prove the normalization of the intuitionistic fragment of the theory.
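The interpretation of set-theoretic equality as bisimilarity of pointed graphs, which this abstract builds on, can be sketched with a naive check on finite graphs given as child maps. This is an illustrative sketch only (the function name and the example graphs are ours, not the paper's construction):

```python
def bisimilar(g1, root1, g2, root2, seen=None):
    """Two pointed graphs are bisimilar if every child of one root is
    bisimilar to some child of the other, and vice versa. Pairs already
    under test are assumed bisimilar (the coinductive step)."""
    if seen is None:
        seen = set()
    if (root1, root2) in seen:
        return True
    seen = seen | {(root1, root2)}
    kids1, kids2 = g1[root1], g2[root2]
    return (all(any(bisimilar(g1, a, g2, b, seen) for b in kids2) for a in kids1)
            and all(any(bisimilar(g1, a, g2, b, seen) for a in kids1) for b in kids2))

# The set {∅, {∅}} drawn two ways: with a duplicated empty-set node,
# and with a single shared one. As pointed graphs they are bisimilar,
# so they denote the same set.
g1 = {"x": ["e1", "s"], "s": ["e2"], "e1": [], "e2": []}
g2 = {"x": ["e", "s"], "s": ["e"], "e": []}
assert bisimilar(g1, "x", g2, "x")
```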