Results 1 - 8 of 8
How to make ad hoc proof automation less ad hoc
In ICFP, 2011
"... Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover’s base logic. Whi ..."
Cited by 28 (7 self)
Most interactive theorem provers provide support for some form of user-customizable proof automation. In a number of popular systems, such as Coq and Isabelle, this automation is achieved primarily through tactics, which are programmed in a separate language from that of the prover’s base logic. While tactics are clearly useful in practice, they can be difficult to maintain and compose because, unlike lemmas, their behavior cannot be specified within the expressive type system of the prover itself. We propose a novel approach to proof automation in Coq that allows the user to specify the behavior of custom automated routines in terms of Coq’s own type system. Our approach involves a sophisticated application of Coq’s canonical structures, which generalize Haskell type classes and facilitate a flexible style of dependently-typed logic programming. Specifically, just as Haskell type classes are used to infer the canonical implementation of an overloaded term at a given type, canonical structures can be used to infer the canonical proof of an overloaded lemma for a given instantiation of its parameters. We present a series of design patterns for canonical structure programming that enable one to carefully and predictably coax Coq’s type inference engine into triggering the execution of user-supplied algorithms during unification, and we illustrate these patterns through several realistic examples drawn from Hoare Type Theory. We assume no prior knowledge of Coq and describe the relevant aspects of Coq type inference from first principles.
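The analogy the abstract draws can be made concrete on the Haskell side: a type class declares an overloaded term, and instance resolution picks its canonical implementation for the type at hand, much as canonical structure inference picks the canonical proof of an overloaded lemma. A minimal sketch (the class and instances are illustrative, not taken from the paper):

```haskell
-- An overloaded term: the compiler infers the canonical implementation
-- of `describe` for whatever type appears at the call site.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe b = if b then "yes" else "no"

instance Describe a => Describe [a] where
  describe = unwords . map describe

main :: IO ()
main = putStrLn (describe [True, False, True])   -- prints "yes no yes"
-- Instance selection here plays the role that unification-time canonical
-- structure inference plays in Coq, where the "implementation" being
-- inferred is a proof term rather than a function.
```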
Refinement Types For Haskell
"... SMT-based checking of refinement types for call-by-value lan-guages is a well-studied subject. Unfortunately, the classical trans-lation of refinement types to verification conditions is unsound un-der lazy evaluation. When checking an expression, such systems implicitly assume that all the free var ..."
Cited by 9 (5 self)
SMT-based checking of refinement types for call-by-value languages is a well-studied subject. Unfortunately, the classical translation of refinement types to verification conditions is unsound under lazy evaluation. When checking an expression, such systems implicitly assume that all the free variables in the expression are bound to values. This property is trivially guaranteed under eager evaluation, but does not hold under lazy evaluation. Thus, to be sound and precise, a refinement type system for Haskell and the corresponding verification conditions must take into account which subset of binders actually reduces to values. We present a stratified type system that labels binders as potentially diverging or not, and that (circularly) uses refinement types to verify the labeling. We have implemented our system in LIQUIDHASKELL and present an experimental evaluation of our approach on more than 10,000 lines of widely used Haskell libraries. We show that LIQUIDHASKELL is able to prove 96% of all recursive functions terminating, while requiring a modest 1.7 lines of termination annotations per 100 lines of code.
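To make the unsoundness concrete, here is a sketch of the kind of program that trips up eager-style verification conditions, written in LiquidHaskell-style notation (refinement annotations live in `{-@ ... @-}` comments; the function names and annotations are illustrative, not drawn from the paper's benchmarks):

```haskell
-- A function that never returns: any postcondition, even `false`,
-- holds vacuously of its (non-existent) result.
{-@ loop :: Int -> {v:Int | false} @-}
loop :: Int -> Int
loop n = loop n

-- A refinement-typed precondition: the divisor must be non-zero.
{-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

bad :: Int
bad = let x = loop 0     -- eager reasoning: x :: {v | false}, so the
      in  safeDiv 1 0    -- environment is "inconsistent" and this checks
-- Under call-by-value, `bad` would diverge before reaching the division,
-- so the reasoning is sound. Under lazy evaluation, x is never forced,
-- `safeDiv 1 0` actually runs, and the program crashes: exactly the gap
-- the stratified, divergence-aware system described above closes.
```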
Dependent Interoperability
"... In this paper we study the problem of interoperability—combining constructs from two separate programming languages within one program—in the case where one of the two languages is dependently typed and the other is simply typed. We present a core calculus called SD, which combines dependently- and ..."
Cited by 3 (1 self)
In this paper we study the problem of interoperability (combining constructs from two separate programming languages within one program) in the case where one of the two languages is dependently typed and the other is simply typed. We present a core calculus called SD, which combines dependently- and simply-typed sublanguages and supports user-defined (dependent) datatypes, among other standard features. SD has “boundary terms” that mediate the interaction between the two sublanguages. The operational semantics of SD demonstrates how the necessary dynamic checks, which must be done when passing a value from the simply-typed world to the dependently-typed world, can be extracted from the dependent type constructors themselves, modulo user-defined functions for marshaling values across the boundary. We establish type-safety and other meta-theoretic properties of SD, and contrast this approach to others in the literature.
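SD is a core calculus rather than a production language, but the role of a boundary term, a run-time check derived from a dependent type constructor, can be approximated with Haskell GADTs. A small sketch under that analogy (the names and the fixed-length check are illustrative, not the paper's SD syntax):

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

data Nat = Z | S Nat

-- The "dependently-typed side": a list whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A boundary term: admit a simply-typed list into the indexed world only
-- after a dynamic check that it really has the advertised length (two).
toVec2 :: [a] -> Maybe (Vec ('S ('S 'Z)) a)
toVec2 [x, y] = Just (VCons x (VCons y VNil))
toVec2 _      = Nothing            -- the run-time check fails
```

Erasing an indexed vector back to a plain list needs no such check in this sketch; the checking burden falls on the direction the abstract describes, from simply typed into dependently typed.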
unknown title
"... Over the last few years, in-lined reference monitors (IRM’s) have gained much popularity as successful security enforcement mechanisms. Aspect-oriented programming (AOP) provides one elegant paradigm for implementing IRM frameworks. There is a foreseen need to enhance both AOP-style and non-AOP IRM’ ..."
Over the last few years, in-lined reference monitors (IRMs) have gained much popularity as successful security enforcement mechanisms. Aspect-oriented programming (AOP) provides one elegant paradigm for implementing IRM frameworks. There is a foreseen need to enhance both AOP-style and non-AOP IRMs with static certification, due to two main concerns. Firstly, the Trusted Computing Base (TCB) can grow large quickly in an AOP-style IRM framework. Secondly, in many practical settings, such as the domain of web security, aspectually encoded policy implementations and the rewriters that apply them to untrusted code are subject to frequent change. Replacing the rewriter with a small, lightweight, yet powerful certifier that is policy-independent and less subject to change addresses both of these concerns. The goal of this paper is twofold. First, interesting issues encountered in the process of building certification systems for IRM frameworks, such as policy specification, certifier soundness, and certifier completeness, are explored in the light of related work. In the second half of the paper, three prominent unsolved problems in the domain of IRM certification are examined: runtime code generation via eval, IRM certification in the presence of concurrency, and formal verification of transparency. Promising directions suggested by recent work related to these problems are highlighted.
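As a rough illustration of the mechanism, independent of any particular framework discussed in the abstract: an IRM rewriter inserts a policy check in front of each security-relevant operation, so the monitored program can perform the operation only when the policy allows it. A minimal Haskell sketch (the policy, host names, and guarded operation are invented for illustration; real IRM frameworks instrument bytecode or scripts in the target language):

```haskell
import Control.Exception (Exception, throwIO)

newtype PolicyViolation = PolicyViolation String deriving Show
instance Exception PolicyViolation

-- The security policy: only hosts on an allow-list may be contacted.
allowed :: String -> Bool
allowed host = host `elem` ["example.org", "internal.test"]

-- The "in-lined" monitor: every call to the sensitive operation goes
-- through this guard, which the rewriter would insert automatically.
guardedConnect :: String -> IO ()
guardedConnect host
  | allowed host = putStrLn ("connecting to " ++ host)  -- stand-in for the real operation
  | otherwise    = throwIO (PolicyViolation ("blocked host: " ++ host))
```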
From Safety To Termination And Back: SMT-Based Verification For Lazy Languages
"... ar ..."
(Show Context)
unknown title
"... My research focuses on the design of statically-typed programming languages. Static type systems are a popular, cost-effective form of lightweight program verification. They provide a tractable and modular way for programmers to express properties that can be mechanically checked by the compiler. As ..."
My research focuses on the design of statically-typed programming languages. Static type systems are a popular, cost-effective form of lightweight program verification. They provide a tractable and modular way for programmers to express properties that can be mechanically checked by the compiler. As a result, the compiler can rule out a wide variety of errors and provide more information to refactoring and development tools. For example, systems written in type-safe languages cannot be compromised by buffer overruns if all array accesses are statically proven safe. Furthermore, programmers can modify their code with the assurance that they have not violated critical safety properties. I explore these designs in the context of functional programming languages, such as Haskell and ML. Functional programming languages are an ideal context for type system research; they excel in their capabilities for static reasoning. However, there is a need for improvement. Some programming idioms must be ruled out simply because they cannot be shown to be sound by existing type systems. To overcome these limitations, my work investigates type system features in the context of both new languages and existing ones, and evaluates those designs with respect to both theory and practice.

Trellys: Dependently-typed language design. Dependent types promise to dramatically increase the effectiveness of static type systems. They work by allowing types to depend on program values, enabling specifications that are both more flexible and more precise. However, even though dependent type theory has been well studied as a foundation for logical reasoning, these type systems have been little used in practical programming languages.
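Haskell cannot yet express full dependent types of the kind Trellys explores, but GADTs give a flavor of the extra precision described above: the type of a term can record a fact that the type checker then enforces at every use. A small, illustrative sketch (not Trellys code):

```haskell
{-# LANGUAGE GADTs #-}

-- The type index records what each expression evaluates to, so an
-- ill-formed program such as `If (IntLit 3) ...` is rejected statically.
data Expr a where
  BoolLit :: Bool -> Expr Bool
  IntLit  :: Int  -> Expr Int
  Add     :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- Because the index is precise, eval needs no run-time tag checks.
eval :: Expr a -> a
eval (BoolLit b) = b
eval (IntLit n)  = n
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e
```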