Results 1–10 of 128
Domain Theory
Handbook of Logic in Computer Science, 1994
"... Least fixpoints as meanings of recursive definitions. ..."
Abstract

Cited by 461 (20 self)
 Add to MetaCart
Least fixpoints as meanings of recursive definitions.
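As a minimal operational sketch of this reading (mine, not the chapter's formalism): a recursive definition denotes the least fixpoint of a functional, i.e. the limit of the finite approximations F(⊥), F(F(⊥)), ... starting from the everywhere-undefined function:

```python
# The recursive definition fact(n) = if n = 0 then 1 else n * fact(n - 1)
# is read as the least fixpoint of the functional FACT below.
FACT = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

def fix(F):
    """Operational least fixed point: the recursive function obtained
    by feeding F its own result."""
    def f(x):
        return F(f)(x)
    return f

fact = fix(FACT)

# The same function is the limit of finite approximations F^k(bottom),
# where bottom is the everywhere-undefined function:
def bottom(_):
    raise RecursionError("undefined")

approx = bottom
for _ in range(6):
    approx = FACT(approx)   # F^6(bottom) is already defined on 0..5

print(fact(5), approx(5))  # 120 120
```

Each iteration extends the domain of definedness by one, which is exactly the chain whose least upper bound the semantics takes.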
Domain Theory in Logical Form
Annals of Pure and Applied Logic, 1991
"... The mathematical framework of Stone duality is used to synthesize a number of hitherto separate developments in Theoretical Computer Science: • Domain Theory, the mathematical theory of computation introduced by Scott as a foundation for denotational semantics. • The theory of concurrency and system ..."
Abstract

Cited by 231 (10 self)
 Add to MetaCart
The mathematical framework of Stone duality is used to synthesize a number of hitherto separate developments in Theoretical Computer Science:
• Domain Theory, the mathematical theory of computation introduced by Scott as a foundation for denotational semantics.
• The theory of concurrency and systems behaviour developed by Milner, Hennessy et al., based on operational semantics.
• Logics of programs.
Stone duality provides a junction between semantics (spaces of points = denotations of computational processes) and logics (lattices of properties of processes). Moreover, the underlying logic is geometric, which can be computationally interpreted as the logic of observable properties, i.e. properties which can be determined to hold of a process on the basis of a finite amount of information about its execution. These ideas lead to the following programme:
Proving Properties of Security Protocols by Induction
In 10th IEEE Computer Security Foundations Workshop, 1997
"... Informal justifications of security protocols involve arguing backwards that various events are impossible. Inductive definitions can make such arguments rigorous. The resulting proofs are complicated, but can be generated reasonably quickly using the proof tool Isabelle/HOL. There is no restriction ..."
Abstract

Cited by 150 (7 self)
 Add to MetaCart
Informal justifications of security protocols involve arguing backwards that various events are impossible. Inductive definitions can make such arguments rigorous. The resulting proofs are complicated, but can be generated reasonably quickly using the proof tool Isabelle/HOL. There is no restriction to finite-state systems and the approach is not based on belief logics. Protocols are inductively defined as sets of traces, which may involve many interleaved protocol runs. Protocol descriptions model accidental key losses as well as attacks. The model spy can send spoof messages made up of components decrypted from previous traffic. Several key distribution protocols have been studied, including Needham-Schroeder, Yahalom and Otway-Rees. The method applies to both symmetric-key and public-key protocols. A new attack has been discovered in a variant of Otway-Rees (already broken by Mao and Boyd). Assertions concerning secrecy and authenticity have been proved.
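A toy sketch of the trace-based idea (illustrative only; the event names and rules below are invented, not Paulson's actual Isabelle definitions): the set of traces is defined inductively, closed under protocol steps and under a spy who may replay any message already seen on the trace.

```python
# Events are triples (sender, receiver, message). The set of traces is
# defined inductively: the empty trace is a trace, and a trace may be
# extended by a protocol step or by the spy replaying any seen message.
def extend(trace):
    """All one-step extensions of a trace under the (invented) rules."""
    steps = [("Alice", "Bob", "NA")]                  # Alice starts a run
    if any(ev[2] == "NA" for ev in trace):
        steps.append(("Bob", "Alice", ("NA", "NB")))  # Bob responds
    for seen in {ev[2] for ev in trace}:
        steps.append(("Spy", "Bob", seen))            # spy replays traffic
    return [trace + [s] for s in steps]

def traces(depth):
    """All traces of length at most depth, built layer by layer."""
    layer, result = [[]], [[]]
    for _ in range(depth):
        layer = [t for tr in layer for t in extend(tr)]
        result += layer
    return result

ts = traces(2)
print(len(ts))  # 5 traces, including one where the spy replays NA
```

Proving a secrecy assertion then amounts to induction over this set: showing the property holds of the empty trace and is preserved by every rule, spy steps included.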
Operational Semantics and Polymorphic Type Inference
1988
"... Three languages with polymorphic type disciplines are discussed, namely the *calculus with Milner's polymorphic type discipline; a language with imperative features (polymorphic references); and a skeletal module language with structures, signatures and functors. In each of the two first cases we ..."
Abstract

Cited by 92 (2 self)
 Add to MetaCart
Three languages with polymorphic type disciplines are discussed, namely the λ-calculus with Milner's polymorphic type discipline; a language with imperative features (polymorphic references); and a skeletal module language with structures, signatures and functors. In each of the first two cases we show that the type inference system is consistent with an operational dynamic semantics. On the module level, polymorphic types correspond to signatures. There is a notion of principal signature. So-called signature checking is the module-level equivalent of type checking. In particular, there exists an algorithm which either fails or produces a principal signature.
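The engine of Milner-style type inference is unification of type expressions. The following hedged sketch (the representation and names are mine, not the paper's) shows the core: a most general unifier or a failure on clash or occurs check.

```python
# Simple types: a type variable is a string starting with "'"; a function
# type is the tuple ("->", dom, cod); base types are plain strings.
def unify(t1, t2):
    """Return a most general unifier (a dict) or None on clash/occurs."""
    subst = {}

    def resolve(t):
        while isinstance(t, str) and t.startswith("'") and t in subst:
            t = subst[t]
        return t

    def occurs(v, t):
        t = resolve(t)
        return t == v or (isinstance(t, tuple)
                          and any(occurs(v, a) for a in t[1:]))

    def go(a, b):
        a, b = resolve(a), resolve(b)
        if a == b:
            return True
        if isinstance(a, str) and a.startswith("'"):
            if occurs(a, b):
                return False          # occurs check: no infinite types
            subst[a] = b
            return True
        if isinstance(b, str) and b.startswith("'"):
            return go(b, a)
        return (isinstance(a, tuple) and isinstance(b, tuple)
                and a[0] == b[0] and len(a) == len(b)
                and all(go(x, y) for x, y in zip(a[1:], b[1:])))

    return subst if go(t1, t2) else None

# Unifying 'a -> int with bool -> 'b yields 'a := bool, 'b := int:
print(unify(("->", "'a", "int"), ("->", "bool", "'b")))
```

Let-polymorphism then generalizes the variables left free after unification, which is what makes a principal (most general) type exist.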
The expressive powers of logic programming semantics
Abstract in Proc. PODS 90, 1995
"... We study the expressive powers of two semantics for deductive databases and logic programming: the wellfounded semantics and the stable semantics. We compare them especially to two older semantics, the twovalued and threevalued program completion semantics. We identify the expressive power of the ..."
Abstract

Cited by 86 (5 self)
 Add to MetaCart
We study the expressive powers of two semantics for deductive databases and logic programming: the well-founded semantics and the stable semantics. We compare them especially to two older semantics, the two-valued and three-valued program completion semantics. We identify the expressive power of the stable semantics, and in fairly general circumstances that of the well-founded semantics. In particular, over infinite Herbrand universes, the four semantics all have the same expressive power. We discuss a feature of certain logic programming semantics, which we call the Principle of Stratification, a feature allowing a program to be built easily in modules. The three-valued program completion and well-founded semantics satisfy this principle. Over infinite Herbrand models, we consider a notion of translatability between the three-valued program completion and well-founded semantics which is in a sense uniform in the strata. In this sense of uniform translatability we show the well-founded semantics to be more expressive than the three-valued program completion. The proof is a corollary of our result that over non-Herbrand infinite models, the well-founded semantics is more expressive than the three-valued program completion semantics.
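For intuition about the stable semantics: a candidate interpretation S is stable iff it equals the least model of the Gelfond-Lifschitz reduct of the program with respect to S. This brute-force sketch (my encoding, not the paper's machinery) checks all candidates for the classic two-rule program p :- not q. q :- not p.

```python
from itertools import chain, combinations

# A normal program: rules are (head, positive_body, negative_body).
# p :- not q.   q :- not p.
program = [("p", (), ("q",)), ("q", (), ("p",))]
atoms = {"p", "q"}

def least_model(definite_rules):
    """Least Herbrand model of a negation-free program (Kleene iteration)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    """S is stable iff S is the least model of the reduct of the
    program with respect to S (Gelfond-Lifschitz)."""
    result = []
    for bits in chain.from_iterable(combinations(sorted(atoms), r)
                                    for r in range(len(atoms) + 1)):
        s = set(bits)
        # The reduct: drop rules whose negative body intersects S,
        # then delete the remaining negative literals.
        reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & s)]
        if least_model(reduct) == s:
            result.append(s)
    return result

print(stable_models(program, atoms))  # [{'p'}, {'q'}]
```

The two stable models {p} and {q} illustrate why the stable semantics is multi-valued where the well-founded semantics would instead leave both atoms undefined.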
Infinite Objects in Type Theory
"... . We show that infinite objects can be constructively understood without the consideration of partial elements, or greatest fixedpoints, through the explicit consideration of proof objects. We present then a proof system based on these explanations. According to this analysis, the proof expressions ..."
Abstract

Cited by 83 (2 self)
 Add to MetaCart
We show that infinite objects can be constructively understood without the consideration of partial elements, or greatest fixed points, through the explicit consideration of proof objects. We then present a proof system based on these explanations. According to this analysis, the proof expressions should have the same structure as the program expressions of a pure functional lazy language: variable, constructor, application, abstraction, case expressions, and local let expressions.

1 Introduction
The usual explanation of infinite objects relies on the use of greatest fixed points of monotone operators, whose existence is justified by the impredicative proof of Tarski's fixed point theorem. The proof theory of such infinite objects, based on the so-called coinduction principle, originally due to David Park [21] and explained with this name for instance in the paper [18], reflects this explanation. Constructively, relying on such impredicative methods is somewhat unsatisfactory (see fo...
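The flavour of the proposal, that infinite objects behave like programs of a lazy functional language, can be hinted at with a corecursive stream, where generator laziness plays the role of the guardedness condition (an informal analogy on my part, not the paper's proof system):

```python
from itertools import islice

def nats(n=0):
    """The stream of naturals, corecursively: nats(n) = n :: nats(n + 1).
    Laziness stands in for guardedness: each step yields exactly one
    constructor (element) before the recursive call resumes."""
    while True:
        yield n
        n += 1

print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
```

Only finite prefixes are ever demanded, so the infinite object is meaningful without appeal to a greatest fixed point.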
Inductive Sets and Families in Martin-Löf's Type Theory and Their Set-Theoretic Semantics
Logical Frameworks, 1991
"... MartinLof's type theory is presented in several steps. The kernel is a dependently typed calculus. Then there are schemata for inductive sets and families of sets and for primitive recursive functions and families of functions. Finally, there are set formers (generic polymorphism) and universes. ..."
Abstract

Cited by 76 (13 self)
 Add to MetaCart
Martin-Löf's type theory is presented in several steps. The kernel is a dependently typed calculus. Then there are schemata for inductive sets and families of sets and for primitive recursive functions and families of functions. Finally, there are set formers (generic polymorphism) and universes. At each step syntax, inference rules, and set-theoretic semantics are given.

1 Introduction
Usually Martin-Löf's type theory is presented as a closed system with rules for a finite collection of set formers. But it is also often pointed out that the system is in principle open to extension: we may introduce new sets when there is a need for them. The principle is that a set is by definition inductively generated; it is defined by its introduction rules, which are rules for generating its elements. The elimination rule is determined by the introduction rules and expresses definition by primitive recursion on the way the elements of the set are generated. (In this paper I shall use the term ...
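The slogan that elimination is primitive recursion over the introduction rules can be sketched concretely (a deliberately crude untyped encoding of mine, not Martin-Löf's formal rules):

```python
# Peano naturals as an inductive set: the introduction rules are the two
# constructors, encoded here as tagged tuples.
ZERO = ("zero",)
def succ(n):
    return ("succ", n)

def natrec(n, base, step):
    """The elimination rule: primitive recursion on how n was generated.
    natrec(zero) = base;  natrec(succ m) = step(m, natrec(m))."""
    if n[0] == "zero":
        return base
    return step(n[1], natrec(n[1], base, step))

def add(a, b):                 # addition by recursion on the first argument
    return natrec(a, b, lambda _m, rec: succ(rec))

def to_int(n):                 # interpret back into Python integers
    return natrec(n, 0, lambda _m, rec: rec + 1)

three = succ(succ(succ(ZERO)))
print(to_int(add(three, three)))  # 6
```

Adding a new inductive set to the theory amounts to giving new constructors and reading off the corresponding recursor, exactly the schema the paper formalizes.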
A General Formulation of Simultaneous Inductive-Recursive Definitions in Type Theory
Journal of Symbolic Logic, 1998
"... The first example of a simultaneous inductiverecursive definition in intuitionistic type theory is MartinLöf's universe à la Tarski. A set U0 of codes for small sets is generated inductively at the same time as a function T0 , which maps a code to the corresponding small set, is defined by recursi ..."
Abstract

Cited by 65 (10 self)
 Add to MetaCart
The first example of a simultaneous inductive-recursive definition in intuitionistic type theory is Martin-Löf's universe à la Tarski. A set U0 of codes for small sets is generated inductively at the same time as a function T0, which maps a code to the corresponding small set, is defined by recursion on the way the elements of U0 are generated. In this paper we argue that there is an underlying general notion of simultaneous inductive-recursive definition which is implicit in Martin-Löf's intuitionistic type theory. We extend previously given schematic formulations of inductive definitions in type theory to encompass a general notion of simultaneous induction-recursion. This enables us to give a unified treatment of several interesting constructions including various universe constructions by Palmgren, Griffor, Rathjen, and Setzer and a constructive version of Aczel's Frege structures. Consistency of a restricted version of the extension is shown by constructing a realisability model ...
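The mutual dependency between U0 and T0 can be caricatured in Python, with codes generated inductively and the decoding defined by recursion on codes. This is a deliberately crude model of mine: sets are approximated by predicates, and the Pi case is only spot-checked on a finite sample of the domain.

```python
# Induction-recursion in caricature: codes (the inductive part) and the
# decoding function (the recursive part) are defined together. Note that
# forming a pi-code already mentions the decoding T0 of its first argument.
NAT = ("nat",)
def pi(a, b):                      # b maps elements of T0(a) to codes
    return ("pi", a, b)

def T0(code):
    """Decode a code to (a predicate approximating) a small set, by
    recursion on how the code was generated."""
    if code[0] == "nat":
        return lambda x: isinstance(x, int) and x >= 0
    if code[0] == "pi":
        _, a, b = code
        # f belongs to Pi(a, b) if f maps each x in T0(a) into T0(b(x)).
        # Checked on a finite sample only, since the domain may be infinite.
        return lambda f: all(T0(b(x))(f(x)) for x in range(5) if T0(a)(x))
    raise ValueError(code)

is_nat_fun = T0(pi(NAT, lambda n: NAT))
print(is_nat_fun(lambda n: n + 1), is_nat_fun(lambda n: -1))  # True False
```

The point of the caricature is structural: the family of codes cannot be generated without simultaneously saying what each code decodes to, which is exactly the pattern the paper's general schema captures.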
Program Derivation by Fixed Point Computation
1988
"... This paper develops a transformational paradigm by which nonnumerical algorithms are treated as fixed point computations derived from very high level problem specifications. We begin by presenting an abstract functional + problem specification language SQ , which is shown to express any partial re ..."
Abstract

Cited by 59 (10 self)
 Add to MetaCart
This paper develops a transformational paradigm by which non-numerical algorithms are treated as fixed point computations derived from very high level problem specifications. We begin by presenting an abstract functional problem specification language SQ+, which is shown to express any partial recursive function in a fixed point normal form. Next, we give a nondeterministic iterative schema that in the case of finite iteration generalizes the 'chaotic iteration' of Cousot and Cousot for computing fixed points of monotone functions efficiently. New techniques are discussed for recomputing fixed points of distributive functions efficiently. Numerous examples illustrate how these techniques for computing and recomputing fixed points can be incorporated within a transformational programming methodology to facilitate the design and verification of non-numerical algorithms.

1. Introduction
In a recent survey article [25] Martin Feather has said that the current state of the art of program...
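Chaotic/worklist iteration can be illustrated on a simple least-fixpoint problem, graph reachability: instead of recomputing the whole operator each round, only elements whose contribution may have changed are reprocessed (an illustrative sketch unrelated to the paper's specification language):

```python
# Least fixpoint of the monotone operator
#   reach = {start} ∪ { w | v in reach, (v, w) an edge }
# computed by worklist (chaotic) iteration.
def reachable(edges, start):
    succ = {}
    for v, w in edges:
        succ.setdefault(v, []).append(w)
    reach = {start}
    worklist = [start]              # nodes whose successors may be missing
    while worklist:
        v = worklist.pop()
        for w in succ.get(v, []):
            if w not in reach:      # a strict increase: record and revisit
                reach.add(w)
                worklist.append(w)
    return reach

print(reachable([(1, 2), (2, 3), (3, 1), (4, 5)], 1))  # {1, 2, 3}
```

Each node enters the worklist at most once per addition to the set, so the fixpoint is reached after work proportional to the edges touched rather than to full re-evaluations of the operator.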
Extending Classical Logic with Inductive Definitions
2000
"... The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic, its relationships to other logics in the area of nonmonotonic reasoning, logic programming and deductiv ..."
Abstract

Cited by 58 (38 self)
 Add to MetaCart
The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic and its relationships to other logics in the area of non-monotonic reasoning, logic programming and deductive databases, and to show its application to knowledge representation by giving a typology of definitional knowledge.