Results 11–20 of 32
Hybrid Type Checking
Abstract

Cited by 17 (0 self)
Traditional static type systems are effective for verifying basic interface specifications. Dynamically-checked contracts support more precise specifications, but these are not checked until run time, resulting in incomplete detection of defects. Hybrid type checking is a synthesis of these two approaches that enforces precise interface specifications, via static analysis where possible, but also via dynamic checks where necessary. This paper explores the key ideas and implications of hybrid type checking, in the context of the λ-calculus extended with contract types, i.e., with dependent function types and with arbitrary refinements of base types.
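The flavor of the idea can be illustrated with a small sketch. This is not the paper's formal system (which works in a λ-calculus with refinement types); the `cast` helper, the `Pos` refinement, and `reciprocal` are all invented for illustration. The point is the division of labor: where the static checker can prove a refinement, no check remains; where it cannot, a dynamic cast enforces it at run time.

```python
def cast(value, predicate, type_name):
    """Dynamic check standing in for a cast the hybrid checker inserts
    wherever static analysis was inconclusive."""
    if not predicate(value):
        raise TypeError(f"value {value!r} violates refinement {type_name}")
    return value

# Hypothetical refinement type {x: int | x > 0}
Pos = (lambda x: isinstance(x, int) and x > 0, "{x: int | x > 0}")

def reciprocal(x):
    # Suppose the checker could not prove x > 0 statically at this call
    # site, so the residual runtime cast remains in the compiled program.
    x = cast(x, *Pos)
    return 1.0 / x

print(reciprocal(4))   # 0.25
# reciprocal(0) would raise TypeError at run time instead of dividing by zero
```

Statically provable sites (e.g. `reciprocal(4)` with a constant-propagating checker) would have the cast erased, so well-analyzed programs pay no runtime cost.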
The Impact of seq on Free Theorems-Based Program Transformations
 Fundamenta Informaticae
, 2006
Abstract

Cited by 13 (5 self)
Parametric polymorphism constrains the behavior of pure functional programs in a way that allows the derivation of interesting theorems about them solely from their types, i.e., virtually for free. Unfortunately, standard parametricity results — including so-called free theorems — fail for non-strict languages supporting a polymorphic strict evaluation primitive such as Haskell’s seq. A folk theorem maintains that such results hold for a subset of Haskell corresponding to a Girard-Reynolds calculus with fixpoints and algebraic datatypes even when seq is present, provided the relations which appear in their derivations are required to be bottom-reflecting and admissible. In this paper we show that this folklore is incorrect, but that parametricity results can be recovered in the presence of seq by restricting attention to left-closed, total, and admissible relations instead. The key novelty of our approach is the asymmetry introduced by left-closedness, which leads to “inequational” versions of standard parametricity results together with preconditions guaranteeing their validity even when seq is present. We use these results to derive criteria ensuring that both equational and inequational versions of short cut fusion and related program transformations based on free theorems hold in the presence of seq.
Normal Forms and Cut-Free Proofs as Natural Transformations
 In: Logic from Computer Science, Mathematical Sciences Research Institute Publications 21
, 1992
Abstract

Cited by 12 (4 self)
What equations can we guarantee that simple functional programs must satisfy, irrespective of their obvious defining equations? Equivalently, what nontrivial identifications must hold between lambda terms, thought of as encoding appropriate natural deduction proofs? We show that the usual syntax guarantees that certain naturality equations from category theory are necessarily provable. At the same time, our categorical approach addresses an equational meaning of cut-elimination and asymmetrical interpretations of cut-free proofs. This viewpoint is connected to Reynolds' relational interpretation of parametricity ([27], [2]), and to the Kelly-Lambek-Mac Lane-Mints approach to coherence problems in category theory. In the past several years, there has been renewed interest and research into the interconnections of proof theory, typed lambda calculus (as a functional programming paradigm), and category theory. Some of these connections can be surprisingly subtle. Here we a...
A Characterization Of Lambda Definability In Categorical Models Of Implicit Polymorphism
 Theoretical Computer Science
, 1995
Abstract

Cited by 11 (0 self)
Lambda definability is characterized in categorical models of simply typed lambda calculus with type variables. A category-theoretic framework known as glueing or sconing is used to extend the Jung-Tiuryn characterization of lambda definability [JuT93], first to ccc models, and then to categorical models of the calculus with type variables. Logical relations are now a well-established tool for studying the semantics of various typed lambda calculi. The main lines of research are focused in two areas, the first of which strives for an understanding of Strachey's notion of parametric polymorphism. The main idea is that a parametrically polymorphic function acts independently of the types to which its type variables are instantiated, and that this uniformity may be captured by imposing a relational structure on the types [OHT93, MSd93, MaR91, Wad89, Rey83, Str67]. The other line of research concerns lambda definability and the full abstraction problem for various models of languag...
Quantified interference: information theory and information flow
 Presented at Workshop on Issues in the Theory of Security (WITS’04)
, 2004
Abstract

Cited by 11 (0 self)
The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray, and we propose and motivate a specific quantitative definition of information flow. We prove results relating equivalence relations, interference of program variables, independence of random variables and the flow of confidential information. Finally, we show how, in our setting, Shannon’s Perfect Secrecy theorem provides a sufficient condition to determine whether a program leaks confidential information.
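One common way such a quantitative definition can be made concrete is to measure the mutual information between a secret input and a program's observable output; the sketch below illustrates that idea for a uniformly distributed secret. It is an illustration of the general approach, not the paper's specific definition; the `leakage` helper and the example programs are invented.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy, in bits, of a {outcome: probability} distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def leakage(program, secrets):
    """Mutual information I(secret; output), in bits, for a secret drawn
    uniformly from `secrets` — one candidate measure of information flow."""
    n = len(secrets)
    joint = {k: v / n for k, v in Counter((s, program(s)) for s in secrets).items()}
    out = {k: v / n for k, v in Counter(program(s) for s in secrets).items()}
    h_secret = math.log2(n)                 # entropy of the uniform prior
    return h_secret + entropy(out) - entropy(joint)  # I(S;O) = H(S)+H(O)-H(S,O)

secrets = range(4)                          # 2-bit secret
print(leakage(lambda s: s % 2, secrets))    # 1.0 — parity leaks one bit
print(leakage(lambda s: 0, secrets))        # 0.0 — constant output leaks nothing
print(leakage(lambda s: s, secrets))        # 2.0 — identity leaks everything
```

A constant-output program achieving zero leakage is the quantitative counterpart of noninterference, while any dependence of the output on the secret shows up as positive mutual information.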
Practical Foundations for Programming Languages
 In Dynamic Languages Symposium (DLS)
, 2007
Abstract

Cited by 11 (4 self)
Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design — the absence of ill-defined programs — follows naturally. The purpose of this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics and the dynamics of a language. In this way we establish a foundation for the study of programming languages. But why these particular methods? Though it would require a book in itself to substantiate this assertion, the type-theoretic approach ...
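The statics/dynamics division described above can be made concrete with a toy example. The following sketch is invented for illustration (it is not from the book): a tiny expression language where `check` implements the statics, `eval_` the dynamics, and safety is the guarantee that any program accepted by `check` evaluates without going wrong.

```python
def check(e):
    """Statics: compute the type of expression e, or reject it."""
    if isinstance(e, bool):   # test bool before int: True is an int in Python
        return "bool"
    if isinstance(e, int):
        return "num"
    op, *args = e
    if op == "+" and check(args[0]) == check(args[1]) == "num":
        return "num"
    if op == "if":
        cond, then_, else_ = args
        if check(cond) == "bool" and check(then_) == check(else_):
            return check(then_)
    raise TypeError(f"ill-typed: {e!r}")

def eval_(e):
    """Dynamics: execute expression e (assumed well-typed)."""
    if isinstance(e, (bool, int)):
        return e
    op, *args = e
    if op == "+":
        return eval_(args[0]) + eval_(args[1])
    if op == "if":
        return eval_(args[1]) if eval_(args[0]) else eval_(args[2])

prog = ("if", True, ("+", 1, 2), 0)
check(prog)          # statics accepts: the type is "num"
print(eval_(prog))   # dynamics: 3
```

An ill-typed term such as `("+", True, 2)` is rejected by `check` before `eval_` ever runs it, which is the coherence of statics and dynamics that the abstract calls safety.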
Managing Structural Information by Higher-Order Colored Unification
 Journal of Automated Reasoning
, 1999
Abstract

Cited by 7 (5 self)
Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic differences of terms to guide the proof search. Annotations (colors) to symbol occurrences in terms are used to maintain this information. This technique has several advantages, e.g. it is highly goal-oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed calculus with color annotations and present appropriate algorithms for the general, pre-, and pattern unification problems. Our work is a formal basis for the implementation of rippling in a higher-order setting, which is required e.g. in the case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification.
A Cartesian Closed Category of Parallel Algorithms between Scott Domains
, 1991
Abstract

Cited by 6 (2 self)
We present a category-theoretic framework for providing intensional semantics of programming languages and establishing connections between semantics given at different levels of intensional detail. We use a comonad to model an abstract notion of computation, and we obtain an intensional category from an extensional category by the co-Kleisli construction; thus, while an extensional morphism can be viewed as a function from values to values, an intensional morphism is akin to a function from computations to values. We state a simple category-theoretic result about cartesian closure. We then explore the particular example obtained by taking the extensional category to be Cont, the category of Scott domains with continuous functions as morphisms, with a computation represented as a nondecreasing sequence of values. We refer to morphisms in the resulting intensional category as algorithms. We show that the category Alg of Scott domains with algorithms as morphisms is cartesian closed. We...
On the power of abstract interpretation
 Computer Languages
, 1992
Abstract

Cited by 5 (0 self)
Increasingly sophisticated applications of static analysis make it important to precisely characterize the power of static analysis techniques. Sekar et al. recently studied the power of strictness analysis techniques and showed that strictness analysis is perfect up to variations in constants. We generalize this approach to abstract interpretation in general by defining a notion of similarity semantics. This semantics associates to a program a collection of interpretations, all of which are obtained by blurring the distinctions that a particular static analysis ignores. We define completeness with respect to similarity semantics and obtain two completeness results. For first-order languages, abstract interpretation is complete with respect to a standard similarity semantics provided the base abstract domain is linearly ordered. For typed higher-order languages, it is complete with respect to a logical similarity semantics, again under the condition of a linearly ordered base abstract domain.
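The "blurring" idea behind similarity semantics can be illustrated with a standard sign analysis (a generic abstract-interpretation example, not the strictness analysis or the formal similarity semantics of the paper; `alpha` and `abs_mul` are invented names). The analysis abstracts every concrete integer to its sign, so two programs that differ only in constants with the same sign are indistinguishable to it — exactly the distinction the similarity semantics deliberately ignores.

```python
def alpha(n):
    """Abstraction function: concrete integer -> element of the sign domain."""
    return "neg" if n < 0 else ("zero" if n == 0 else "pos")

# Abstract multiplication on signs (anything times zero is zero).
MUL = {
    ("pos", "pos"): "pos", ("neg", "neg"): "pos",
    ("pos", "neg"): "neg", ("neg", "pos"): "neg",
}

def abs_mul(a, b):
    """Abstract counterpart of integer multiplication."""
    if "zero" in (a, b):
        return "zero"
    return MUL[(a, b)]

# Programs computing 3 * -2 and 7 * -9 differ only in constants the
# analysis blurs together, so their analysis results coincide:
print(abs_mul(alpha(3), alpha(-2)))   # neg
print(abs_mul(alpha(7), alpha(-9)))   # neg
```

Note that the sign domain is only partially ordered, so it sits outside the linearly-ordered hypothesis of the completeness theorems above; it serves here only to show what "blurring distinctions" means operationally.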