Results 1 – 10 of 105
Automated Soundness Proofs for Dataflow Analyses and Transformations Via Local Rules
In Proc. of the 32nd Symposium on Principles of Programming Languages, 2005
Cited by 78 (8 self)
Abstract:
We present Rhodium, a new language for writing compiler optimizations that can be automatically proved sound. Unlike our previous work on Cobalt, Rhodium expresses optimizations using explicit dataflow facts manipulated by local propagation and transformation rules. This new style allows Rhodium optimizations to be mutually recursively defined, to be automatically composed, to be interpreted in both flow-sensitive and flow-insensitive ways, and to be applied interprocedurally given a separate context-sensitivity strategy, all while retaining soundness. Rhodium also supports infinite analysis domains while guaranteeing termination of analysis. We have implemented a soundness checker for Rhodium and have specified and automatically proven the soundness of all of Cobalt’s optimizations plus a variety of optimizations not expressible in Cobalt, including Andersen’s points-to analysis, arithmetic-invariant detection, loop-induction-variable strength reduction, and redundant array load elimination.
Expressive declassification policies and modular static enforcement
IEEE Symposium on Security and Privacy, 2008
Relational separation logic
2007
Cited by 35 (1 self)
Abstract:
In this paper, we present a Hoare-style logic for specifying and verifying how two pointer programs are related. Our logic lifts the main features of separation logic, from an assertion to a relation, and from a property about a single program to a relationship between two programs. We show the strength of the logic by proving that the Schorr-Waite graph marking algorithm is equivalent to depth-first traversal.
Translating Dependency into Parametricity
 In: ACM International Conference on Functional Programming
Cited by 35 (3 self)
Abstract:
Abadi et al. introduced the dependency core calculus (DCC) as a unifying framework to study many important program analyses such as binding time, information flow, slicing, and function call tracking. DCC uses a lattice of monads and a nonstandard typing rule for their associated bind operations to describe the dependency of computations in a program. Abadi et al. proved a noninterference theorem that establishes the correctness of DCC’s type system and thus the correctness of the type systems for the analyses above. In this paper, we study the relationship between DCC and the Girard-Reynolds polymorphic lambda calculus (System F). We encode the recursion-free fragment of DCC into F via a type-directed translation. Our main theoretical result is that, following from the correctness of the translation, the parametricity theorem for F implies the noninterference theorem for DCC. In addition, the translation provides insights into DCC’s type system and suggests strategies for implementing dependency calculi in polymorphic languages.
Automatic Numeric Abstractions for Heap-Manipulating Programs
2010
Cited by 31 (2 self)
Abstract:
We present a logic for relating heap-manipulating programs to numeric abstractions. These numeric abstractions are expressed as simple imperative programs over integer variables and have the property that termination and safety of the numeric program ensures termination and safety of the original, heap-manipulating program. We have implemented an automated version of this abstraction process and present experimental results for programs involving a variety of data structures.
Information Flow Monitor Inlining
2010
Cited by 27 (0 self)
Abstract:
In recent years it has been shown that dynamic monitoring can be used to soundly enforce information flow policies. For programs distributed in source or bytecode form, the use of JIT compilation makes it difficult to implement monitoring by modifying the language runtime system. An inliner avoids this problem and also serves to provide monitoring for more than one runtime. We show how to inline an information flow monitor, specifically a flow-sensitive one previously proved to enforce termination-insensitive noninterference. We prove that the inlined version is observationally equivalent to the original.
Verifying Quantitative Reliability for Programs That Execute on Unreliable Hardware
Cited by 25 (3 self)
Abstract:
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. We present Rely, a programming language that enables developers to reason about the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented
Proving optimizations correct using parameterized program equivalence
In Proceedings of the 2009 Conference on Programming Language Design and Implementation (PLDI 2009), 2009
Cited by 23 (4 self)
Abstract:
Translation validation is a technique for checking that, after an optimization has run, the input and output of the optimization are equivalent. Traditionally, translation validation has been used to prove concrete, fully specified programs equivalent. In this paper we present Parameterized Equivalence Checking (PEC), a generalization of translation validation that can prove the equivalence of parameterized programs. A parameterized program is a partially specified program that can represent multiple concrete programs. For example, a parameterized program may contain a section of code whose only known property is that it does not modify certain variables. By proving parameterized programs equivalent, PEC can prove the correctness of transformation rules that represent complex optimizations once and for all, before they are ever run. We implemented our PEC technique in a tool that can establish the equivalence of two parameterized programs. To highlight the power of PEC, we designed a language for implementing complex optimizations using many-to-many rewrite rules, and used this language to implement a variety of optimizations including software pipelining, loop unrolling, loop unswitching, loop interchange, and loop fusion. Finally, to demonstrate the effectiveness of PEC, we used our PEC implementation to verify that all the optimizations we implemented in our language preserve program behavior.
A Relational Modal Logic for Higher-Order Stateful ADTs
Cited by 22 (12 self)
Abstract:
The method of logical relations is a classic technique for proving the equivalence of higher-order programs that implement the same observable behavior but employ different internal data representations. Although it was originally studied for pure, strongly normalizing languages like System F, it has been extended over the past two decades to reason about increasingly realistic languages. In particular, Appel and McAllester’s idea of step-indexing has been used recently to develop syntactic Kripke logical relations for ML-like languages that mix functional and imperative forms of data abstraction. However, while step-indexed models are powerful tools, reasoning with them directly is quite painful, as one is forced to engage in tedious step-index arithmetic to derive even simple results. In this paper, we propose a logic LADR for equational reasoning about higher-order programs in the presence of existential type abstraction, general recursive types, and higher-order mutable state. LADR exhibits a novel synthesis of features from Plotkin-Abadi logic, Gödel-Löb logic, S4 modal logic, and relational separation logic. Our model of LADR is based on Ahmed, Dreyer, and Rossberg’s state-of-the-art step-indexed Kripke logical relation, which was designed to facilitate proofs of representation independence for “state-dependent” ADTs. LADR enables one to express such proofs at a much higher level, without counting steps or reasoning about the subtle, step-stratified construction of possible worlds.
Verification of information flow and access control policies with dependent types
2011
Cited by 22 (4 self)
Abstract:
We present Relational Hoare Type Theory (RHTT), a novel language and verification system capable of expressing and verifying rich information flow and access control policies via dependent types. We show that a number of security policies which have been formalized separately in the literature can all be expressed in RHTT using only standard type-theoretic types, abstract predicates, and modules. Example security policies include conditional declassification, information erasure, and state-dependent information flow and access control. RHTT can reason about such policies in the presence of dynamic memory allocation, deallocation, pointer aliasing, and arithmetic. The system, theorems and examples have all been formalized in Coq.