Results 1–10 of 17
A Core Calculus of Dependency
 In Proc. 26th ACM Symp. on Principles of Programming Languages (POPL)
, 1999
Abstract

Cited by 228 (25 self)
Notions of program dependency arise in many settings: security, partial evaluation, program slicing, and call-tracking. We argue that there is a central notion of dependency common to these settings that can be captured within a single calculus, the Dependency Core Calculus (DCC), a small extension of Moggi's computational lambda calculus. To establish this thesis, we translate typed calculi for secure information flow, binding-time analysis, slicing, and call-tracking into DCC. The translations help clarify aspects of the source calculi. We also define a semantic model for DCC and use it to give simple proofs of noninterference results for each case.
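DCC's central idea, that data protected at one level may only flow into computations at least as protected, can be caricatured in a few lines. The following Python sketch is ours, not DCC's notation: the names `Labeled` and `bind` and the two-point lattice are illustrative assumptions, and the runtime check stands in for what DCC enforces statically in its type system.

```python
# Hypothetical sketch of DCC's core discipline: a value protected at
# level l may only flow into computations observable at levels >= l.
# Names (Labeled, bind) and the two-level lattice are illustrative,
# not DCC's own notation; DCC checks this statically, we check at runtime.

LEVELS = {"public": 0, "secret": 1}

class Labeled:
    """A value wrapped at a dependency level (DCC's monad T_l)."""
    def __init__(self, level, value):
        self.level = level
        self.value = value

def bind(labeled, f, result_level):
    """Sequence a protected value into a computation, as in DCC's bind:
    legal only if the result is at least as protected as the input."""
    if LEVELS[result_level] < LEVELS[labeled.level]:
        raise TypeError("illegal dependency: %s data flowing to %s"
                        % (labeled.level, result_level))
    return Labeled(result_level, f(labeled.value))

secret_pin = Labeled("secret", 1234)
# OK: secret data used in a secret-level computation.
masked = bind(secret_pin, lambda pin: pin % 100, "secret")
# Rejected: secret data leaking into a public result.
try:
    bind(secret_pin, lambda pin: pin % 100, "public")
except TypeError as err:
    print(err)
```

Noninterference, in this caricature, is the observation that no sequence of legal `bind`s can make a `public` result depend on a `secret` input.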
Continuations: A Mathematical Semantics for Handling Full Jumps
, 1974
Abstract

Cited by 110 (0 self)
Abstract. This paper describes a method of giving the mathematical semantics of programming languages which include the most general form of jumps.
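The continuation idea behind this treatment of full jumps can be sketched in a few lines of Python. This is our own illustrative encoding, not the paper's notation: a command's meaning takes "the rest of the program" as an argument, and a jump simply discards that argument in favour of its target's continuation.

```python
# Illustrative continuation-passing sketch (names are ours, not the
# paper's): each command's meaning takes a continuation k -- "the rest
# of the program". A jump ignores k and invokes the target's
# continuation instead, which is how full jumps escape normal control flow.

def seq(c1, c2):
    """Meaning of 'c1; c2': run c1 with 'run c2, then k' as continuation."""
    return lambda k: c1(lambda: c2(k))

def emit(log, msg):
    """A trivial command that records msg, then continues normally."""
    return lambda k: (log.append(msg), k())[1]

def goto(target_k):
    """A jump: discard the current continuation, resume at target_k."""
    return lambda k: target_k()

log = []
halt = lambda: log.append("halt")
# 'first; goto halt; unreachable' -- the jump skips the last command.
prog = seq(emit(log, "first"), seq(goto(halt), emit(log, "unreachable")))
prog(lambda: log.append("end"))
print(log)  # ['first', 'halt'] -- "unreachable" never runs
```

Because the discarded continuation is an ordinary value, the same machinery covers gotos out of nested blocks, which is what makes the semantics compositional where direct-style semantics breaks down.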
Continuation Semantics for Prolog with Cut
, 1989
Abstract

Cited by 34 (5 self)
We present a denotational continuation semantics for Prolog with cut. First a uniform language B is studied, which captures the control flow aspects of Prolog. The denotational semantics for B is proven equivalent to a transition-system-based operational semantics. The congruence proof relies on the representation of the operational semantics as a chain of approximations and on a convenient induction principle. Finally, we interpret the abstract language B such that we obtain equivalent denotational and operational models for Prolog itself. Section 1 Introduction In the nice textbook of Lloyd [Ll] the cut, available in all Prolog systems, is described as a controversial control facility. The cut, added to the Horn clause logic for efficiency reasons, affects the completeness of the refutation procedure. Therefore the standard declarative semantics using Herbrand models does not adequately capture the computational aspects of the Prolog language. In the present paper we study the Prolog...
The complexity of type inference for higher-order typed lambda calculi
 In Proc. 18th ACM Symposium on Principles of Programming Languages
, 1991
Abstract

Cited by 28 (11 self)
We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, ..., Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is nonelementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk. The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges. These results are the first nontrivial lower bounds on type inference for the Girard/Reynolds...
Abstract Models of Storage
, 2000
Abstract

Cited by 8 (0 self)
This note is a historical survey of Christopher Strachey's influence on the development of semantic models of assignment and storage management in procedural languages.
Three paradigms of computer science
 Minds and Machines
, 2007
Abstract

Cited by 5 (0 self)
Abstract. We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer science a branch of mathematics, an engineering discipline, or a natural science? Should knowledge about the behaviour of programs proceed deductively or empirically? Are computer programs on a par with mathematical objects, with mere data, or with mental processes? We conclude that distinct positions taken in regard to these questions emanate from distinct sets of received beliefs or paradigms within the discipline: — The rationalist paradigm, which was common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their ‘correctness’ by means of deductive reasoning. — The technocratic paradigm, promulgated mainly by software engineers, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically, using testing suites. — The scientific paradigm, prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science, takes programs to be entities on a par with mental processes, and seeks a priori and a posteriori knowledge about them by combining formal deduction and scientific experimentation. We demonstrate evidence corroborating the tenets of the scientific paradigm, in particular the claim that program-processes are on a par with mental processes. We conclude with a discussion of the influence that the technocratic paradigm has been having over computer science.
Linearly-used state in models of call-by-value
Abstract

Cited by 5 (5 self)
Abstract. We investigate the phenomenon that every monad is a linear state monad. We do this by studying a fully-complete state-passing translation from an impure call-by-value language to a new linear type theory: the enriched call-by-value calculus. The results are not specific to store, but can be applied to any computational effect expressible using algebraic operations, even to effects that are not usually thought of as stateful. There is a bijective correspondence between generic effects in the source language and state access operations in the enriched call-by-value calculus. From the perspective of categorical models, the enriched call-by-value calculus suggests a refinement of the traditional Kleisli models of effectful call-by-value languages. The new models can be understood as enriched adjunctions.
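The state-passing translation itself is easy to illustrate. The Python sketch below is our own rendering under simple assumptions (a counter state, a `tick` effect): an effectful computation becomes a pure function that receives the state, uses it exactly once, and returns its successor alongside the result, which is the linear-use discipline the paper makes precise.

```python
# Hypothetical state-passing sketch (names ours, not the paper's): an
# effectful computation A -> T B becomes a pure function taking the
# state s, consuming it exactly once, and returning (result, new state).

def tick():
    """The generic effect 'tick' as a state access operation:
    return the current counter and increment it."""
    return lambda s: (s, s + 1)

def ret(x):
    """return/unit: pass the state through untouched."""
    return lambda s: (x, s)

def bind(m, f):
    """Kleisli composition in state-passing style: thread s linearly."""
    def run(s):
        x, s1 = m(s)        # the incoming state is consumed here...
        return f(x)(s1)     # ...and only its successor is used afterwards.
    return run

# tick twice; return the pair of counter values observed.
prog = bind(tick(), lambda a: bind(tick(), lambda b: ret((a, b))))
print(prog(0))  # ((0, 1), 2)
```

The paper's point is stronger than the sketch suggests: the translation is not just available for store but for any effect given by algebraic operations, with each generic effect (here `tick`) corresponding to a state access operation.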
The theory of classification, part 13: Template classes and genericity
 in Journal of Object Technology
Abstract

Cited by 4 (4 self)
This is the thirteenth article in a regular series on object-oriented type theory for non-specialists. Previous articles have gradually built up models of objects [1], types [2] and classes [3] in the λ-calculus. Inheritance has been shown to extend both type schemes [4] and implementations [5]. The most recent article [6] presented a model of a simple class...
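The article itself works in the λ-calculus, but the template-class idea it models has a direct analogue in any language with generics. The sketch below renders it in Python's generics notation (our choice of example, a generic stack, not the article's):

```python
# A template class rendered in Python generics (an analogy to the
# article's λ-calculus model, not its notation): one parameterised
# definition, instantiated separately for each element type.
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A template class: the type parameter T is abstracted once and
    supplied at each instantiation, e.g. Stack[int], Stack[str]."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()   # the instantiation Stack[int]
ints.push(1)
ints.push(2)
print(ints.pop())  # 2
```

In the series' λ-calculus model the instantiation step corresponds to type application: the class is a function from types to class definitions, and `Stack[int]` applies it to `int`.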
Semantics of Pointers, Referencing and Dereferencing with Intensional Logic
 in ‘Proc. 6th annual IEEE symposium on Logic in Computer Science’, IEEE Computer Society Press, Los Alamitos
, 1991
Abstract

Cited by 3 (0 self)
We apply intensional logic to the semantics of an Algol-like programming language. This associates with expressions their senses, or meanings relative to "possible worlds", here interpreted as machine states. These meanings lie in the semantic domains of a higher-order typed intensional logic. The great advantage of this approach is that it preserves compositionality of the meaning function, even in "opaque contexts". Earlier work in this direction, by Janssen and Van Emde Boas, dealt with the semantics of assignments to simple variables, indexed variables and pointers, without, however, considering "dereferenced" pointers on the left or right hand side of assignments. More recent work by Hung applied this approach to the semantics of blocks and procedures with parameters (passed by value, by reference and by name). The present work extends this approach to pointers, including dereferenced pointers on both sides of assignments. It is shown how this approach gives an elegant solution to...
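The sense/state idea can be sketched concretely. In the Python encoding below, which is our own illustration and not the paper's intensional logic, the sense of an expression is a function from states (possible worlds) to values, and dereferencing composes senses, so the meaning of an assignment with a dereferenced pointer on both sides stays compositional:

```python
# Illustrative encoding (ours, not the paper's logic): a "sense" is a
# function from machine states to values. A pointer's value is a
# location name; dereferencing composes senses, keeping the meaning
# function compositional even inside assignments.

def var(name):
    """Sense of a variable: look its location up in the state."""
    return lambda state: state[name]

def deref(ptr_sense):
    """Sense of *p: evaluate p to a location, then look that up."""
    return lambda state: state[ptr_sense(state)]

def assign(lvalue_sense, rvalue_sense):
    """Assignment as a state transformer: the left side denotes a
    location in the current world, the right side a value there."""
    def run(state):
        new_state = dict(state)
        new_state[lvalue_sense(state)] = rvalue_sense(state)
        return new_state
    return run

# State: x holds 1, and p points at x.
s0 = {"x": 1, "p": "x"}
# *p := *p + 1  -- a dereferenced pointer on both sides.
s1 = assign(var("p"), lambda st: deref(var("p"))(st) + 1)(s0)
print(s1["x"])  # 2
```

Because every sense is already a function of the state, the assignment never needs to distinguish "transparent" from "opaque" positions: the left side is evaluated to a location and the right side to a value in the same world.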
The Analysis of Programming Structure
 ACM SIGACT News
, 1997
Abstract

Cited by 1 (0 self)
This paper has explored three examples of good semantical analyses of programming structures. The three examples share two characteristics: the semantic models are abstract enough to be applicable in many situations, and the models lead to proofs of non-computability. Other examples of programming structures have been omitted from this short essay: foundations for object-oriented languages, descriptions of languages with local variables, and the theory of database query languages. Each of these examples has corresponding semantical theories that enjoy the two characteristics above. The richness of programming structure suggests a corollary: it is folly to look for one universal model to explain all programming structures. Of course, as a theoretical subject, semantics benefits from the reduction of many concepts to a primitive, common level. Nevertheless, reduction must often be resisted. We have seen how computability theory loses all kinds of relevant distinctions. Another example is the naive semantics of PCF based on dcpos: the model is not abstract enough, ...