Results 11–20 of 104
Device-Enabled Authorization in the Grey System
In Proceedings of the 8th Information Security Conference (ISC '05), 2005
Abstract

Cited by 64 (15 self)
We describe the design and implementation of Grey, a set of software extensions that convert an off-the-shelf smartphone-class device into a tool by which its owner exercises and delegates her authority to both physical and virtual resources. We describe the software architecture and user interfaces of Grey, and then detail two initial case studies in which we have converted infrastructure to accommodate requests from Grey-enabled devices. The first is two floors (nearly 30,000 square feet) of office space, in which we are equipping over 65 doors for access control using Grey for a population of roughly 150 persons. The second is modifications to Windows XP that permit login via Grey-enabled phones. We provide preliminary evaluations of these efforts and directions for research to further the vision of a unified authorization framework for both physical and virtual resources.
Some lambda calculus and type theory formalized
Journal of Automated Reasoning, 1999
Abstract

Cited by 57 (7 self)
Abstract. We survey a substantial body of knowledge about lambda calculus and Pure Type Systems, formally developed in a constructive type theory using the LEGO proof system. On lambda calculus, we work up to an abstract, simplified proof of standardization for beta reduction that does not mention redex positions or residuals. Then we outline the meta-theory of Pure Type Systems, leading to the strengthening lemma. One novelty is our use of named variables for the formalization. Along the way we point out what we feel has been learned about general issues of formalizing mathematics, emphasizing the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts.
Pure type systems formalized
Proceedings of the International Conference on Typed Lambda Calculi and Applications, 1993
A Coverage Checking Algorithm for LF
2003
Abstract

Cited by 42 (13 self)
Coverage checking is the problem of deciding whether any closed term of a given type is an instance of at least one of a given set of patterns. It can be used to verify if a function defined by pattern matching covers all possible cases. This problem has a straightforward solution for the first-order, simply-typed case, but is in general undecidable in the presence of dependent types. In this paper we present a terminating algorithm for verifying coverage of higher-order, dependently typed patterns.
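The decidable first-order, simply-typed case mentioned in the abstract can be sketched in a few lines. The toy checker below is only an illustrative analogue (the datatype signature and all names are hypothetical, and it ignores the higher-order, dependently typed setting the paper actually addresses): it splits patterns by head constructor and recurses on argument positions.

```python
# Toy coverage checker for the first-order, simply-typed case only.
# A pattern is "_" (wildcard) or (constructor, [sub-patterns]).
# Hypothetical example datatype: natural numbers.
SIGNATURES = {"nat": {"zero": [], "succ": ["nat"]}}

def covers(patterns, ty):
    """True iff every closed term of type `ty` matches some pattern."""
    if any(p == "_" for p in patterns):
        return True
    for ctor, arg_types in SIGNATURES[ty].items():
        # Patterns whose head constructor is `ctor`.
        matching = [args for (c, args) in patterns if c == ctor]
        if not matching:
            return False                 # this constructor is uncovered
        # Each argument position must be covered in turn (a positionwise
        # simplification, exact for constructors of at most one argument).
        for i, arg_ty in enumerate(arg_types):
            if not covers([args[i] for args in matching], arg_ty):
                return False
    return True
```

For example, the pattern set {zero, succ _} covers nat, while {zero, succ zero} does not, since succ (succ zero) matches neither pattern.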
Compiler Verification in LF
Seventh Annual IEEE Symposium on Logic in Computer Science, 1992
Abstract

Cited by 41 (11 self)
We sketch a methodology for the verification of compiler correctness based on the LF Logical Framework as realized within the Elf programming language. We have applied this technique to specify, implement, and verify a compiler from a simple functional programming language to a variant of the Categorical Abstract Machine (CAM).

1 Introduction

Compiler correctness is an essential aspect of program verification, as almost all programs are compiled before being executed. Unfortunately, even for small languages and simple compilers, proving their correctness can be an enormous task, and verifying these proofs becomes an equally difficult task. Our goal is to develop techniques for mechanizing proofs of compiler correctness. To this end we employ:
1. the LF Logical Framework [13] to specify relationships between source and target languages;
2. the Elf programming language [21] to provide an operational semantics for these relationships; and
3. a related meta-theory [22] to reason about the ...
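The correctness statement being mechanized has a familiar shape: running compiled code agrees with the source semantics. As a loose, hand-rolled analogue only (not the paper's LF/Elf development; arithmetic expressions rather than a functional language, and a plain stack machine standing in for the CAM), the claim reads run(compile_expr(e), []) == evaluate(e):

```python
# Toy analogue of a compiler-correctness claim: a reference evaluator,
# a compiler to postfix stack code, and a stack-machine interpreter.
# Expressions are ints or (op, left, right) with op in {"+", "*"}.

def evaluate(e):
    """Reference semantics of an expression."""
    if isinstance(e, int):
        return e
    op, l, r = e
    return evaluate(l) + evaluate(r) if op == "+" else evaluate(l) * evaluate(r)

def compile_expr(e):
    """Compile an expression to postfix stack code."""
    if isinstance(e, int):
        return [("push", e)]
    op, l, r = e
    return compile_expr(l) + compile_expr(r) + [(op,)]

def run(code, stack):
    """Execute stack code; return the top of the stack."""
    for instr in code:
        if instr[0] == "push":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "+" else a * b)
    return stack[-1]
```

Where tests can only sample this equation on particular expressions, the LF/Elf methodology sketched above is about proving it once for all programs.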
Deciding Type Equivalence in a Language with Singleton Kinds
In Twenty-Seventh ACM Symposium on Principles of Programming Languages, 2000
Abstract

Cited by 41 (7 self)
Work on the TILT compiler for Standard ML led us to study a language with singleton kinds: S(A) is the kind of all types provably equivalent to the type A. Singletons are interesting because they provide a very general form of definitions for type variables, allow fine-grained control of type computations, and allow many equational constraints to be expressed within the type system.
Automated Theorem Proving in a Simple Meta-Logic for LF
Proceedings of the 15th International Conference on Automated Deduction (CADE-15), 1998
Abstract

Cited by 37 (16 self)
Higher-order representation techniques allow elegant encodings of logics and programming languages in the logical framework LF, but unfortunately they are fundamentally incompatible with induction principles needed to reason about them. In this paper we develop a meta-logic M_2 which allows inductive reasoning over LF encodings, and describe its implementation in Twelf, a special-purpose automated theorem prover for properties of logics and programming languages. We have used Twelf to automatically prove a number of non-trivial theorems, including type preservation for Mini-ML and the deduction theorem for intuitionistic propositional logic.
Extensional equivalence and singleton types
 ACM Transactions on Computational Logic
Abstract

Cited by 35 (7 self)
We study the λΠΣS≤ calculus, which contains singleton types S(M) classifying terms of base type provably equivalent to the term M. The system includes dependent types for pairs and functions (Σ and Π) and a subtyping relation induced by regarding singletons as subtypes of the base type. The decidability of type checking for this language is non-obvious, since to type check we must be able to determine equivalence of well-formed terms. But in the presence of singleton types, the provability of an equivalence judgment Γ ⊢ M1 ≡ M2 : A can depend both on the typing context Γ and on the particular type A at which M1 and M2 are compared. We show how to prove decidability of term equivalence, hence of type checking, in λΠΣS≤ by exhibiting a type-directed algorithm for directly computing normal forms. The correctness of normalization is shown using an unusual variant of Kripke logical relations organized around sets; rather than defining a logical equivalence relation, we work directly with (subsets of) the corresponding equivalence classes. We then provide a more efficient algorithm for checking type equivalence without constructing normal forms. We also show that type checking, subtyping, and all other judgments of the system are decidable.
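The "type-directed algorithm for directly computing normal forms" has a well-known small-scale shape: normalization by evaluation. The sketch below is an assumption-laden toy (simply-typed λ-calculus with one base type "o"; no singletons, Σ, Π-dependency, or subtyping, so it only mirrors the general shape of such an algorithm): evaluation produces semantic values, and a type-directed read-back (reify/reflect) produces β-normal, η-long terms.

```python
# Toy normalization-by-evaluation for the simply-typed lambda-calculus.
# Types: "o" (base) or ("->", A, B). Terms: ("var", x), ("lam", x, body),
# ("app", f, a). Illustrative only; not the calculus of the paper.
import itertools

_fresh = itertools.count()  # supply of fresh variable names

def reflect(ty, neutral):
    """Lift a neutral (stuck) term into a semantic value, guided by its type."""
    if isinstance(ty, tuple):                      # ty = ("->", a, b)
        _, a, b = ty
        return lambda v: reflect(b, ("app", neutral, reify(a, v)))
    return neutral                                 # base type: the term itself

def reify(ty, val):
    """Type-directed read-back: produce a beta-normal, eta-long term."""
    if isinstance(ty, tuple):
        _, a, b = ty
        x = f"x{next(_fresh)}"
        return ("lam", x, reify(b, val(reflect(a, ("var", x)))))
    return val

def ev(term, env):
    """Evaluate a term to a semantic value (lambdas become closures)."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return lambda v, t=term, e=env: ev(t[2], {**e, t[1]: v})
    return ev(term[1], env)(ev(term[2], env))      # "app"

def normalize(ty, term):
    """Normal form of a closed `term` at type `ty`."""
    return reify(ty, ev(term, {}))
```

For instance, normalizing λx. (λy. y) x at type o → o yields an identity lambda. Note how reify is driven entirely by the type, echoing the abstract's point that equivalence in such systems depends on the type at which terms are compared.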
A Trustworthy Proof Checker
In Iliano Cervesato, editor, Workshop on the Foundations of Computer Security, 2002
Abstract

Cited by 31 (7 self)
Proof-Carrying Code (PCC) and other applications in computer security require machine-checkable proofs of properties of machine-language programs. The main advantage of the PCC approach is that the amount of code that must be explicitly trusted is very small: it consists of the logic in which predicates and proofs are expressed, the safety predicate, and the proof checker. We have built a minimal proof checker, and we explain its design principles and the representation issues of the logic, safety predicate, and safety proofs. We show that the trusted computing base (TCB) in such a system can indeed be very small. In our current system the TCB is less than 2,700 lines of code (an order of magnitude smaller even than other PCC systems), which adds to our confidence in its correctness.