Results 1–10 of 14
Practical Foundations for Programming Languages
In Dynamic Languages Symposium (DLS), 2007
Abstract

Cited by 15 (4 self)
Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design—the absence of ill-defined programs—follows naturally. The purpose of this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics and the dynamics of a language. In this way we establish a foundation for the study of programming languages. But why these particular methods? Though it would require a book in itself to substantiate this assertion, the type-theoretic approach
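As a minimal illustration of this statics/dynamics decomposition (our sketch, not an excerpt from the book), a product type is characterized by a typing rule together with a transition rule:

```latex
% Statics: introduction rule for the product type
\frac{\Gamma \vdash e_1 : \tau_1 \qquad \Gamma \vdash e_2 : \tau_2}
     {\Gamma \vdash \langle e_1, e_2 \rangle : \tau_1 \times \tau_2}
\qquad
% Dynamics: projection steps to the corresponding component
\mathsf{fst}\,\langle e_1, e_2 \rangle \;\longmapsto\; e_1
```

Safety, in the sense used above, is then the coherence of the two: preservation (if $e : \tau$ and $e \longmapsto e'$ then $e' : \tau$) together with progress (a well-typed $e$ is either a value or can step).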
Dependently Typed Programming with Domain-Specific Logics
 SUBMITTED TO POPL ’09
, 2008
Abstract

Cited by 6 (3 self)
We define a dependent programming language in which programmers can define and compute with domain-specific logics, such as an access-control logic that statically prevents unauthorized access to controlled resources. Our language permits programmers to define logics using the LF logical framework, whose notion of binding and scope facilitates the representation of the consequence relation of a logic, and to compute with logics by writing functional programs over LF terms. These functional programs can be used to compute values at run time, and also to compute types at compile time. In previous work, we studied a simply-typed framework for representing and computing with variable binding [LICS 2008]. In this paper, we generalize our previous type theory to account for dependently typed inference rules, which are necessary to adequately represent domain-specific logics, and we present examples of using our type theory for certified software and mechanized metatheory.
Focalisation and classical realisability
 In Computer Science Logic ’09, LNCS
, 2009
Abstract

Cited by 5 (1 self)
We develop a polarised variant of Curien and Herbelin’s λ̄µµ̃ calculus suitable for sequent calculi that admit a focalising cut elimination (i.e. whose proofs are focalised when cut-free), such as Girard’s classical logic LC or linear logic. This gives a setting in which Krivine’s classical realisability extends naturally (in particular to call-by-value), with a presentation in terms of orthogonality. We give examples of applications to the theory of programming languages. In this version extended with appendices, we in particular give the two-sided formulation of classical logic with the involutive classical negation. We also show that there is, in classical realisability, a notion of internal complete
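For background on the orthogonality presentation mentioned here (the standard Krivine-style definition, given as context rather than quoted from the paper): a term t is orthogonal to a stack π when the process t ⋆ π lands in a fixed pole, and the realisers of a formula are the terms orthogonal to every stack in its falsity value:

```latex
t \perp \pi \;\iff\; t \star \pi \in \pole
\qquad\qquad
|A| \;=\; \{\, t \mid \forall \pi \in \|A\|.\; t \star \pi \in \pole \,\}
```

Here $\pole$ denotes the pole, a set of processes closed under anti-evaluation; varying the pole varies the model.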
Positively Dependent Types
 SUBMITTED TO PLPV ’09
, 2008
Abstract

Cited by 4 (0 self)
This paper is part of a line of work on using the logical techniques of polarity and focusing to design a dependent programming language, with particular emphasis on programming with deductive systems such as programming languages and proof theories. Polarity emphasizes the distinction between positive types, which classify data, and negative types, which classify computation. In previous work, we showed how to use Zeilberger’s higher-order formulation of focusing to integrate a positive function space for representing variable binding, an essential tool for specifying logical systems, with a standard negative computational function space. However, our previous work considers only a simply-typed language. The central technical contribution of the present paper is to extend higher-order focusing with a form of dependency that we call positively dependent types: We allow dependency on positive data, but not negative computation, and we present the syntax of dependent pair and function types using an iterated inductive definition, mapping positive data to types, which gives an account of type-level computation. We construct our language inside the dependently typed programming language Agda 2, making essential use of coinductive types and induction-recursion.
Refinement types and computational duality
 In: ACM SIGPLAN-SIGACT Workshop on Programming Languages meets Program Verification
, 2009
Abstract

Cited by 3 (1 self)
One lesson learned painfully over the past twenty years is the perilous interaction of Curry-style typing with evaluation order and side effects. This led eventually to the value restriction on polymorphism in ML, as well as, more recently, to similar artifacts in type systems for ML with intersection and union refinement types. For example, some of the traditional subtyping laws for unions and intersections are unsound in the presence of effects, while union-elimination requires an evaluation context restriction in addition to the value restriction on intersection-introduction. Our aim is to show that rather than being ad hoc artifacts, phenomena such as the value and evaluation context restrictions arise naturally in type systems for effectful languages, out of principles of duality. Beginning with a review of recent work on the Curry-Howard interpretation of focusing proofs as pattern-matching programs,
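For context on the value restriction the abstract refers to (our sketch of the standard rule, not taken from the paper), ML limits polymorphic generalization to syntactic values:

```latex
\frac{\Gamma \vdash v : \tau
      \qquad v \text{ a syntactic value}
      \qquad \alpha \notin \mathrm{ftv}(\Gamma)}
     {\Gamma \vdash v : \forall \alpha.\, \tau}
```

Without the value premise, an effectful expression such as `ref []` could be assigned the unsound type $\forall \alpha.\, \alpha\ \mathtt{list}\ \mathtt{ref}$, permitting a write at one instance of $\alpha$ and a read at another.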
Structural focalization
, 2011
Abstract

Cited by 1 (1 self)
Focusing, introduced by Jean-Marc Andreoli in the context of classical linear logic, defines a normal form for sequent calculus derivations that cuts down on the number of possible derivations by eagerly applying invertible rules and grouping sequences of non-invertible rules. A focused sequent calculus is defined relative to some non-focused sequent calculus; focalization is the property that every non-focused derivation can be transformed into a focused derivation. In this paper, we present a focused sequent calculus for polarized propositional intuitionistic logic and prove the focalization property relative to a standard presentation of propositional intuitionistic logic. Compared to existing approaches, the proof is quite concise, depending only on the internal soundness and completeness of the focused logic. In turn, both of these properties can be established (and mechanically verified) by structural induction in the style of Pfenning’s structural cut elimination without the need for any tedious and repetitious invertibility lemmas. The proof of cut admissibility for the focused system, which establishes internal soundness, is not particularly novel. The proof of identity expansion, which establishes internal completeness, is the principal contribution of this work.
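To illustrate the invertible/non-invertible distinction this abstract relies on (a textbook example, not drawn from the paper): the right rule for conjunction is invertible, so it can be applied eagerly without losing provability, whereas the right rule for disjunction commits to a choice:

```latex
\frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}
\qquad\qquad
\frac{\Gamma \vdash A_i}{\Gamma \vdash A_1 \vee A_2}\;(i \in \{1, 2\})
```

Focusing groups the invertible rules into inversion phases and chains the non-invertible ones into focus phases, which is what yields the normal form for derivations.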
Defunctionalizing Focusing Proofs (Or, How Twelf Learned To Stop Worrying And Love The Ω-Rule)
Abstract
In previous work, the author gave a higher-order analysis of focusing proofs (in the sense of Andreoli’s search strategy), with a role for infinitary rules very similar in structure to Buchholz’s Ω-rule. Among other benefits, this “pattern-based” description of focusing simplifies the cut-elimination procedure, allowing cuts to be eliminated in a connective-generic way. However, interpreted literally, it is problematic as a representation technique for proofs, because of the difficulty of inspecting and/or exhaustively searching over these infinite objects. In the spirit of infinitary proof theory, this paper explores a view of pattern-based focusing proofs as façons de parler, describing how to compile them down to first-order derivations through defunctionalization, Reynolds’ program transformation. Our main result is a representation of pattern-based focusing in the Twelf logical framework, whose core type theory is too weak to directly encode infinitary rules—although this weakness directly enables so-called “higher-order abstract syntax” encodings. By applying the systematic defunctionalization transform, not only do we retain the benefits of the higher-order focusing analysis, but we can also take advantage of HOAS within Twelf, ultimately arriving at a proof representation with surprisingly little bureaucracy.