Results 1–10 of 23
Transforming out Timing Leaks
 In Proc. 27th ACM Symp. on Principles of Programming Languages (POPL)
, 2000
Abstract

Cited by 155 (2 self)
One aspect of security in mobile code is privacy: private (or secret) data should not be leaked to unauthorised agents. Most of the work on secure information flow has until recently only been concerned with detecting direct and indirect flows. Secret information can, however, also be leaked to the attacker through covert channels. It is very reasonable to assume that the attacker, even as an external observer, can monitor the timing (including termination) behaviour of the program. Thus, to claim a program secure, the security analysis must also take these channels into account. In this work we present a surprisingly simple solution to the problem of detecting timing leakages to external observers. Our system consists of a type system, in which well-typed programs do not leak secret information directly, indirectly or through timing, and a transformation for removing timing leakages. For any program that is well typed according to Volpano and Smith [VS97a], our transformation generates a program that is also free of timing leaks.
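As a hedged illustration (not the paper's actual type system or transformation; all names below are invented for the sketch), the idea of closing a timing channel by balancing branch costs can be sketched in Python, modelling execution time as an explicit step counter:

```python
# Sketch: a secret-dependent branch leaks through timing; padding the cheap
# branch with dummy work of the same cost removes the leak. "Cost" is
# modelled as an explicit step counter rather than wall-clock time.

def leaky(secret: bool) -> int:
    steps = 0
    if secret:
        for _ in range(1000):   # expensive branch
            steps += 1
    else:
        steps += 1              # cheap branch: observable timing difference
    return steps

def balanced(secret: bool) -> int:
    steps = 0
    if secret:
        for _ in range(1000):
            steps += 1
    else:
        for _ in range(1000):   # dummy padding: same cost, result unused
            steps += 1
    return steps

print(leaky(True), leaky(False))        # step counts differ: timing channel
print(balanced(True), balanced(False))  # step counts equal: channel closed
```

An external observer of `balanced` sees the same cost whatever the secret is, which is the intuition behind transforming out timing leaks.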
Recursive Subtyping Revealed
 Journal of Functional Programming
, 2000
Abstract

Cited by 37 (4 self)
Algorithms for checking subtyping between recursive types lie at the core of many programming language implementations. But the fundamental theory of these algorithms and how they relate to simpler declarative specifications is not widely understood, due in part to the difficulty of the available introductions to the area. This tutorial paper offers an "end-to-end" introduction to recursive types and subtyping algorithms, from basic theory to efficient implementation, set in the unifying mathematical framework of coinduction.

1. INTRODUCTION. Recursively defined types in programming languages and lambda-calculi come in two distinct varieties. Consider, for example, the type X described by the equation X = Nat → (Nat × X): an element of X is a function that maps a number to a pair consisting of a number and a function of the same form. This type is often written more concisely as μX. Nat → (Nat × X). A variety of familiar recursive types such as lists and trees can be defined analogou...
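The recursive type in the example can be sketched operationally in Python (an untyped, illustrative rendering; `summer` and `step` are invented names, not from the paper):

```python
# A value inhabiting X = Nat -> (Nat x X): a function that maps a number to
# a pair of a number and another function of the same shape.

def summer(total: int):
    """Return an element of X that adds its input to a running total."""
    def step(n: int):
        new_total = total + n
        return (new_total, summer(new_total))  # (number, function of the same form)
    return step

s0 = summer(0)
v1, s1 = s0(5)   # v1 == 5
v2, s2 = s1(7)   # v2 == 12
print(v1, v2)
```

Each call unrolls the recursive type one step, which is exactly the behaviour the equation X = Nat → (Nat × X) describes.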
A call-by-need lambda-calculus with locally bottom-avoiding choice: Context lemma and correctness of transformations
 Mathematical Structures in Computer Science
, 2008
Abstract

Cited by 14 (9 self)
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for book-keeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering ≤c is also analysed: Ω is not a least element, and ≤c already implies contextual equivalence w.r.t. may-convergence.
A Sound Metalogical Semantics for Input/Output Effects
, 1994
Abstract

Cited by 10 (2 self)
We study the long-standing problem of semantics for input/output (I/O) expressed using side-effects. Our vehicle is a small higher-order imperative language, with operations for interactive character I/O and based on ML syntax. Unlike previous theories, we present both operational and denotational semantics for I/O effects. We use a novel labelled transition system that uniformly expresses both applicative and imperative computation. We make a standard definition of bisimilarity and prove it is a congruence using Howe's method. Next, we define a metalogical type theory M in which we may give a denotational semantics to O. M generalises Crole and Pitts' FIX-logic by adding in a parameterised recursive datatype, which is used to model I/O. M comes equipped both with judgements of equality of expressions, and an operational semantics; M itself is given a domain-theoretic semantics in the category CPPO of cppos (bottom-pointed posets with joins of ω-chains) and Scott-continuous functions...
On the Safety of Nöcker’s Strictness Analysis
 Frankfurt am Main, Germany
Abstract

Cited by 8 (7 self)
Abstract. This paper proves correctness of Nöcker’s method of strictness analysis, implemented for Clean, which is an effective way for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker’s strictness analysis. We reformulate Nöcker’s strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a non-deterministic choice operator ⊕ used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams.
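A toy illustration of the underlying notion of strictness (not Nöcker's abstract-reduction algorithm; `is_strict`, `Bottom`, etc. are invented for the sketch), modelling the divergent value ⊥ as a thunk that raises:

```python
# A function f is strict in its argument if f(bottom) = bottom.
# We represent arguments as thunks, and bottom as a thunk that "diverges"
# by raising; a strict function forces its argument and so propagates bottom.

class BottomReached(Exception):
    pass

def bottom():
    raise BottomReached

def is_strict(f) -> bool:
    """Test strictness by applying f to the divergent thunk."""
    try:
        f(bottom)            # pass the divergent argument
        return False         # f never forced it: not strict
    except BottomReached:
        return True          # bottom propagated: strict

strict_f = lambda x: x() + 1   # forces its argument
lazy_f   = lambda x: 42        # ignores its argument
print(is_strict(strict_f), is_strict(lazy_f))
```

A static strictness analysis, such as the one proved correct in the paper, establishes this property without running the program.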
A Model for Comparing the Space Usage of Lazy Evaluators
 In Proceedings of the 2nd International ACM SIGPLAN Conference on Principles and Practice of Declarative Programming
, 2000
Abstract

Cited by 6 (1 self)
Identifying the source of space faults in functional programs is hard. The problem is compounded as space usage can vary enormously from one implementation to another. We use a term-graph rewriting model to describe evaluators with explicit space usage. Given descriptions for two evaluators E1 and E2, if E1 never has asymptotically worse space usage than E2, we can use a bisimulation-like proof method to prove it. Conversely, if E1 is leakier than E2, we characterise a class of computations that expose the difference between them.
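A small Python sketch (not the paper's term-graph model; the numbers are arbitrary) of how the same computation can differ asymptotically in space between a streaming and a materialising evaluator:

```python
import sys

# Summing a generator streams one element at a time: O(1) extra space.
# Materialising the list first retains the whole structure: O(n) space.
n = 100_000
lazy_sum = sum(i for i in range(n))

strict = list(range(n))      # the "leakier" evaluation strategy
strict_sum = sum(strict)

print(lazy_sum == strict_sum)           # same answer either way
print(sys.getsizeof(strict) > 100_000)  # but the list alone retains O(n) bytes
```

Both evaluators compute the same value, which is exactly why space faults are hard to spot from results alone and need a model of the evaluator.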
Unfolding abstract datatypes
 In MPC ’08: Proceedings of the 9th international conference on Mathematics of Program Construction
, 2008
Abstract

Cited by 6 (0 self)
Abstract. We argue that abstract datatypes — with public interfaces hiding private implementations — represent a form of codata rather than ordinary data, and hence that proof methods for corecursive programs are the appropriate techniques to use for reasoning with them. In particular, we show that the universal properties of unfold operators are perfectly suited for the task. We illustrate with the solution to a problem in the recent literature.
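A minimal unfold (anamorphism) can be sketched in Python (an illustrative rendering, not the paper's categorical formulation): a coalgebra maps a seed either to a stop signal or to an observation plus a new seed.

```python
# unfold: repeatedly apply the coalgebra `step` to a seed, collecting the
# observations it produces, until it signals termination with None.
def unfold(step, seed):
    out = []
    while True:
        r = step(seed)
        if r is None:
            return out
        obs, seed = r
        out.append(obs)

# A countdown viewed as codata: the seed is the counter, the observation
# is its current value.
countdown = unfold(lambda n: None if n == 0 else (n, n - 1), 5)
print(countdown)  # [5, 4, 3, 2, 1]
```

On this view an abstract datatype is characterised entirely by the observations it can produce, which is why unfold's universal property is the natural reasoning tool.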
Action Semantics Reasoning About Functional Programs
 Mathematical Structures in Computer Science
, 1996
Abstract

Cited by 5 (2 self)
Abstract syntax. The algebraic definition of abstract syntax trees below can, more or less, be read as a BNF grammar. Emphatic brackets, [[...]], indicate nodes in an abstract syntax tree.

grammar:
• Expression = Identifier | "true" | "false" |
  [[ "λ" Identifier "." Expression ]] | [[ Expression Expression ]] |
  [[ "rec" Identifier "." Expression ]] |
  [[ "if" Expression "then" Expression "else" Expression ]] .
• Identifier = [[ letter+ ]] .

2.2. Semantic functions. Action semantic descriptions are syntax-directed in the denotational style: compositional semantic functions map abstract syntax into meaning and are defined inductively by semantic equations. There is one universal semantic domain, namely action, the sort of actions. Actions are expressed in a notation that looks a little like informal English prose but, in fact, it is a completely formal combinator-based notation. The verbose notation should be suggestive of the meaning of th...
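The abstract syntax can be rendered as plain Python dataclasses (a hypothetical encoding, assuming the elided binder in the quoted grammar is λ-abstraction; class and field names are invented): each class corresponds to one node shape written with emphatic brackets.

```python
from dataclasses import dataclass

@dataclass
class Var:            # Identifier
    name: str

@dataclass
class BoolLit:        # "true" / "false"
    value: bool

@dataclass
class Lam:            # [[ "λ" Identifier "." Expression ]]
    param: str
    body: object

@dataclass
class App:            # [[ Expression Expression ]]
    fn: object
    arg: object

@dataclass
class Rec:            # [[ "rec" Identifier "." Expression ]]
    name: str
    body: object

@dataclass
class If:             # [[ "if" Expression "then" Expression "else" Expression ]]
    cond: object
    then: object
    els: object

# λx. if x then false else true
neg = Lam("x", If(Var("x"), BoolLit(False), BoolLit(True)))
print(neg.param)  # x
```

A semantic function in the denotational style would then be a compositional function defined by cases over these node classes.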
Basic Action Theory
 BRICS Report Series
, 1995
Abstract

Cited by 4 (1 self)
Action semantics is a semantic description framework with very good pragmatic properties but until now a rather weak theory for reasoning about programs. A strong action theory would have a great practical potential, as it would facilitate reasoning about the large class of programming languages that can be described in action semantics.
Open Maps as a Bridge Between Algebraic Observational Equivalence and Bisimilarity
, 1997
Abstract

Cited by 3 (3 self)
There are two widely accepted notions of behavioural equivalence, formalizing the idea of observational indistinguishability: observational equivalence for algebras (which are models for sequential systems) and bisimulation equivalence (bisimilarity) for concurrent processes. In this paper we show that the observational equivalences for standard, partial and regular algebras are bisimulation equivalences. This is done in the setting of open maps, proposed in [JNW93] as an abstract approach to behavioural equivalences of processes. The main advantage of the results is capturing the models for sequential and concurrent systems in a uniform framework. In such an abstract setting we formulate the property of determinism, shared by all the algebras considered in this paper, and identify some interesting facts about bisimilarity in the deterministic case. All the results for standard, regular and partial algebras are obtained by the applications of a general machinery developed in the pape...