Results 1–9 of 9
EXHAUSTIBLE SETS IN HIGHER-TYPE COMPUTATION
2008
Cited by 13 (12 self)
We say that a set is exhaustible if it admits algorithmic universal quantification for continuous predicates in finite time, and searchable if there is an algorithm that, given any continuous predicate, either selects an element for which the predicate holds or else reports that there is no example. The Cantor space of infinite sequences of binary digits is known to be searchable. Searchable sets are exhaustible, and we show that the converse also holds for sets of hereditarily total elements in the hierarchy of continuous functionals; moreover, a selection functional can be constructed uniformly from a quantification functional. We prove that searchable sets are closed under intersections with decidable sets, and under the formation of computable images and of finite and countably infinite products. This is related to the fact, established here, that exhaustible sets are topologically compact. We obtain a complete description of exhaustible total sets by developing a computational version of a topological Arzelà–Ascoli-type characterization of compact subsets of function spaces. We also show that, in the nonempty case, they are precisely the computable images of the Cantor space. The emphasis of this paper is on the theory of exhaustible and searchable sets, but we also briefly sketch applications.
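The searchability of the Cantor space mentioned in this abstract has a strikingly short Haskell rendering. The following is a sketch in the style of Escardó's published constructions; the names `find`, `forsome`, and `forevery` are illustrative choices, not taken from the paper itself:

```haskell
type Bit    = Bool
type Cantor = Int -> Bit   -- infinite binary sequences

-- Prepend a bit to a sequence.
cons :: Bit -> Cantor -> Cantor
cons b _ 0 = b
cons _ a n = a (n - 1)

-- Selection: for a total (hence continuous) predicate p, return a
-- sequence on which p holds if any exists; otherwise return some
-- sequence, on which p then necessarily fails.
find :: (Cantor -> Bool) -> Cantor
find p = if p (cons False (find (p . cons False)))
            then cons False (find (p . cons False))
            else cons True  (find (p . cons True))

-- Existential and universal quantification, answered in finite time.
forsome, forevery :: (Cantor -> Bool) -> Bool
forsome p  = p (find p)
forevery p = not (forsome (not . p))
```

For example, `forsome (\a -> a 3)` evaluates to `True`. Both quantifiers terminate because any total predicate on the Cantor space can inspect only finitely many bits of its argument.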
Purely Functional Lazy Nondeterministic Programming
Cited by 12 (3 self)
Functional logic programming and probabilistic programming have demonstrated the broad benefits of combining laziness (non-strict evaluation with sharing of the results) with nondeterminism. Yet these benefits are seldom enjoyed in functional programming, because the existing features for non-strictness, sharing, and nondeterminism in functional languages are tricky to combine. We present a practical way to write purely functional lazy nondeterministic programs that are efficient and perspicuous. We achieve this goal by embedding the programs into existing languages (such as Haskell, SML, and OCaml) with high-quality implementations, by making choices lazily and representing data with nondeterministic components, by working with custom monadic data types and search strategies, and by providing equational laws for the programmer to reason about their code.
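The core tension the abstract alludes to, combining sharing with nondeterminism, is already visible in the plain Haskell list monad. A toy illustration (not the paper's actual library):

```haskell
import Control.Monad (liftM2)

-- Nondeterministic choice modeled by the list monad.
coin :: [Int]
coin = [0, 1]

-- Naive duplication: the two occurrences of `coin` vary independently,
-- giving four outcomes.
independent :: [(Int, Int)]
independent = liftM2 (,) coin coin

-- Call-time choice (the sharing that functional-logic languages provide):
-- bind once and reuse the result, giving only two outcomes.
shared :: [(Int, Int)]
shared = do { x <- coin; return (x, x) }
```

Here `independent` has four elements while `shared` has two; the paper's contribution is to recover this call-time-choice sharing *lazily*, so that choices that are never forced are never made at all.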
Computational interpretations of analysis via products of selection functions
 CIE 2010, INVITED TALK ON SPECIAL SESSION “PROOF THEORY AND COMPUTATION”
2010
Cited by 9 (8 self)
We show that the computational interpretation of full comprehension via two well-known functional interpretations (dialectica and modified realizability) corresponds to two closely related infinite products of selection functions.
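A binary instance of the products of selection functions discussed here can be sketched in Haskell, with notation loosely following Escardó and Oliva; the names `J`, `pairJ`, and `epsilonBool` are illustrative:

```haskell
-- A selection function: given an outcome function, pick an element.
type J r x = (x -> r) -> x

-- Binary product of selection functions: the first component moves
-- while anticipating the second component's optimal reply.
pairJ :: J r a -> J r b -> J r (a, b)
pairJ e f p = (a, b)
  where
    a = e (\x -> p (x, f (\y -> p (x, y))))
    b = f (\y -> p (a, y))

-- A selection function over Bool that picks an element where the
-- predicate holds whenever one exists (here r and x are both Bool,
-- so this collapses to applying the predicate to True).
epsilonBool :: J Bool Bool
epsilonBool p = p True
```

For instance, `pairJ epsilonBool epsilonBool (\(x, y) -> x && y)` evaluates to `(True, True)`: each component finds a witness of the joint predicate. The infinite products of the paper iterate this binary construction.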
Searchable Sets, Dubuc-Penon Compactness, Omniscience Principles, and the Drinker Paradox
Cited by 3 (3 self)
We show that a number of contenders for an abstract and general notion of compactness, applicable in particular to computability theory and constructive mathematics, coincide in some well-known frameworks. We consider compactness of sets rather than of spaces, where we replace topologies by the restriction to constructive reasoning, as in the work by a number of authors, including Penon, Dubuc, Taylor and Escardó. Sets here are conceived in a very liberal way, including types of HA^ω and Martin-Löf type theory, and objects of toposes, among others. Some of the equivalences require instances of the axiom of choice, which are available in some of the above frameworks but not all, as is well known. We relate the instances of the axiom of choice applied in the above equivalences to the topological notion of total separatedness.
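The drinker paradox of the title says that any inhabited set has, for each predicate p, an element x₀ such that p x₀ implies p everywhere; for searchable sets, a selection functional produces such an x₀ by hunting for a counterexample. A minimal Haskell sketch for finite sets (illustrative only, not from the paper):

```haskell
-- For a nonempty finite set, select a "generalized counterexample":
-- an element on which p fails if there is one, else an arbitrary element.
counterexample :: [a] -> (a -> Bool) -> a
counterexample [x]    _ = x
counterexample (x:xs) p = if not (p x) then x else counterexample xs p

-- The drinker paradox: if p holds at the selected point, it holds
-- everywhere.  (On Bool, (<=) is logical implication.)
drinker :: [a] -> (a -> Bool) -> Bool
drinker xs p = let x0 = counterexample xs p
               in p x0 <= all p xs
```

`drinker` returns `True` for every nonempty list and predicate; the frameworks in the paper generalize this from finite sets to abstract searchable ones.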
Infinite sets that satisfy the principle of omniscience in all varieties of constructive mathematics, Martin-Löf formalization, in Agda notation, of part of the paper with the same title
 University of Birmingham, UK, http://www.cs.bham.ac.uk/~mhe/papers/omniscient/AnInfiniteOmniscientSet.html, September 2011
Cited by 3 (3 self)
Abstract. We show that there are plenty of infinite sets that satisfy the omniscience principle, in a minimalistic setting for constructive mathematics that is compatible with classical mathematics. A first example of an omniscient set is the one-point compactification of the natural numbers, also known as the generic convergent sequence. We relate this to Grilliot’s and Ishihara’s Tricks. We generalize this example to many infinite subsets of the Cantor space. These subsets turn out to be ordinals in a constructive sense, with respect to the lexicographic order, satisfying both a well-foundedness condition with respect to decidable subsets, and transfinite induction restricted to decidable predicates. The use of simple types allows us to reach any ordinal below ε₀, and richer type systems allow us to get higher.

§1. Introduction. We show that there are plenty of infinite sets X that satisfy the omniscience principle: for every function p : X → 2, ∃x ∈ X (p(x) = 0) ∨ ∀x ∈ X (p(x) = 1). For X finite this is trivial, and for X = ℕ this is LPO, the limited principle of omniscience, which of course is and will remain a taboo in any variety of constructive mathematics.
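The generic convergent sequence ℕ∞ can be represented by its non-increasing binary sequences, and its omniscience then has a short Haskell sketch in the spirit of this line of work (the names below are illustrative, not from the formalization):

```haskell
-- N∞ as non-increasing binary sequences: n is 1^n 0^ω, and ∞ is 1^ω.
type Conat = Int -> Bool

fromNat :: Int -> Conat
fromNat n i = i < n

infinity :: Conat
infinity = const True

-- Selection: for a total p, an element of N∞ on which p holds, if any.
-- Bit i of the answer is True iff p rejects all of fromNat 0 .. fromNat i,
-- so the answer is fromNat k for the least finite witness k of p, and ∞
-- when p holds at no finite element.
findConat :: (Conat -> Bool) -> Conat
findConat p i = all (\j -> not (p (fromNat j))) [0 .. i]

-- Omniscience: both quantifiers are decided in finite time.
forsomeC, foreveryC :: (Conat -> Bool) -> Bool
forsomeC p  = p (findConat p)
foreveryC p = not (forsomeC (not . p))
```

For example, `forsomeC (\x -> x 5)` is `True` (witnessed by `fromNat 6`), while `foreveryC (\x -> x 0)` is `False` (it fails at `fromNat 0`). Termination relies on p being a total function, which by continuity can only inspect finitely many bits.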
2010
This is a tutorial for mathematically inclined functional programmers, based on previously published, peer-reviewed theoretical work. We discuss a higher-type functional, written here in the functional programming language Haskell, which (1) optimally plays sequential games, (2) implements a computational version of the Tychonoff Theorem from topology, and (3) realizes the Double-Negation Shift from logic and proof theory. The functional makes sense for finite and infinite (lazy) lists, and in the binary case it amounts to an operation that is available in any (strong) monad. In fact, once we define this monad in Haskell, it turns out that this amazingly versatile functional is already available in Haskell, in the standard prelude, under the name sequence, which iterates this binary operation. Therefore Haskell proves that this functional is even more versatile than anticipated, as the function sequence was introduced for other purposes by the language designers, in particular the iteration of a list of monadic effects (but effects are not what we discuss here). D.1.1 [Programming techniques]: Functional programming
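The monad in question is the selection monad, and the claim about `sequence` can be reproduced in a few lines of Haskell. The following is an abbreviated sketch in the spirit of the tutorial, not its full code:

```haskell
import Control.Monad (ap)

-- The selection monad: computations that pick an x optimally for a
-- given outcome function of type x -> r.
newtype J r x = J { select :: (x -> r) -> x }

instance Functor (J r) where
  fmap g (J e) = J (\p -> g (e (p . g)))

instance Applicative (J r) where
  pure x = J (const x)
  (<*>)  = ap

instance Monad (J r) where
  J e >>= g = J (\p ->
    let a = e (\x -> p (select (g x) p))  -- first move, anticipating replies
    in  select (g a) p)                   -- optimal continuation after a

-- The product of a list of selection functions is the standard prelude's
-- sequence, specialized to this monad.
bigotimes :: [J r x] -> J r [x]
bigotimes = sequence

-- A player who prefers moves that make the predicate come out True.
optimally :: J Bool Bool
optimally = J (\p -> p True)
```

Then `select (bigotimes [optimally, optimally]) and` plays the two-move game in which both players want the conjunction to hold, and returns the optimal play `[True, True]`.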
Defunctionalizing Focusing Proofs (Or, How Twelf Learned To Stop Worrying And Love The Ω-rule)
Abstract. In previous work, the author gave a higher-order analysis of focusing proofs (in the sense of Andreoli’s search strategy), with a role for infinitary rules very similar in structure to Buchholz’s Ω-rule. Among other benefits, this “pattern-based” description of focusing simplifies the cut-elimination procedure, allowing cuts to be eliminated in a connective-generic way. However, interpreted literally, it is problematic as a representation technique for proofs, because of the difficulty of inspecting and/or exhaustively searching over these infinite objects. In the spirit of infinitary proof theory, this paper explores a view of pattern-based focusing proofs as façons de parler, describing how to compile them down to first-order derivations through defunctionalization, Reynolds’ program transformation. Our main result is a representation of pattern-based focusing in the Twelf logical framework, whose core type theory is too weak to directly encode infinitary rules, although this weakness directly enables so-called “higher-order abstract syntax” encodings. By applying the systematic defunctionalization transform, not only do we retain the benefits of the higher-order focusing analysis, but we can also take advantage of HOAS within Twelf, ultimately arriving at a proof representation with surprisingly little bureaucracy.
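Reynolds’ defunctionalization, the transform applied in this paper, replaces a function space by a first-order datatype with one constructor per lambda (recording the lambda's free variables) together with a single `apply` function. A toy Haskell illustration, unrelated to the Twelf encoding itself:

```haskell
-- Higher-order originals: two function-valued definitions.
addH, mulH :: Int -> (Int -> Int)
addH n = \m -> n + m
mulH n = \m -> n * m

-- Defunctionalized: one constructor per lambda, each capturing the
-- free variables that lambda closed over ...
data Fn = AddF Int | MulF Int

-- ... plus a single first-order apply function dispatching on the tag.
apply :: Fn -> Int -> Int
apply (AddF n) m = n + m
apply (MulF n) m = n * m
```

`map (apply (MulF 2)) [1, 2, 3]` behaves exactly like `map (mulH 2) [1, 2, 3]`; the point, as in the paper, is that values of `Fn` can be inspected, stored, and exhaustively searched, where genuine functions cannot.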