Results 1–10 of 52
Local stability of ergodic averages
Transactions of the American Mathematical Society
"... We consider the extent to which one can compute bounds on the rate of convergence of a sequence of ergodic averages. It is not difficult to construct an example of a computable Lebesguemeasure preserving transformation of [0, 1] and a characteristic function f = χA such that the ergodic averages An ..."
Abstract

Cited by 26 (4 self)
 Add to MetaCart
We consider the extent to which one can compute bounds on the rate of convergence of a sequence of ergodic averages. It is not difficult to construct an example of a computable Lebesgue-measure-preserving transformation of [0, 1] and a characteristic function f = χ_A such that the ergodic averages A_n f do not converge to a computable element of L²([0,1]). In particular, there is no computable bound on the rate of convergence for that sequence. On the other hand, we show that, for any nonexpansive linear operator T on a separable Hilbert space and any element f, it is possible to compute a bound on the rate of convergence of (A_n f) from T, f, and the norm ‖f*‖ of the limit. In particular, if T is the Koopman operator arising from a computable ergodic measure-preserving transformation of a probability space X and f is any computable element of L²(X), then there is a computable bound on the rate of convergence of the sequence (A_n f). The mean ergodic theorem is equivalent to the assertion that for every function K(n) and every ε > 0, there is an n with the property that the ergodic averages A_m f are stable to within ε on the interval [n, K(n)]. Even in situations where the sequence (A_n f) does not have a computable limit, one can give explicit bounds on such n in terms of K and ‖f‖/ε. This tells us how far one has to search to find an n so that the ergodic averages are “locally stable” on a large interval. We use these bounds to obtain a similarly explicit version of the pointwise ergodic theorem, and show that our bounds are qualitatively different from ones that can be obtained using upcrossing inequalities due to Bishop and Ivanov. Finally, we explain how our positive results can be viewed as an application of a body of general proof-theoretic methods falling under the heading of “proof mining.”
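As a concrete toy illustration of the averages A_n f (my own example, not taken from the paper), the following Python sketch computes Birkhoff averages for the irrational rotation x ↦ x + α mod 1, which is uniquely ergodic, so A_n f(x) converges to the space mean for every starting point; it also checks how much the averages move on an interval [n, K(n)]:

```python
import math

def ergodic_average(f, T, x, n):
    """A_n f(x) = (1/n) * sum_{k=0}^{n-1} f(T^k x)."""
    total = 0.0
    for _ in range(n):
        total += f(x)
        x = T(x)
    return total / n

alpha = math.sqrt(2) - 1                 # irrational rotation number
T = lambda x: (x + alpha) % 1.0          # uniquely ergodic rotation of [0, 1)
f = lambda x: 1.0 if x < 0.5 else 0.0    # characteristic function of [0, 1/2)

# The averages converge to the space mean, here Leb([0, 1/2)) = 1/2.
print(ergodic_average(f, T, 0.0, 10_000))

# "Local stability": A_m f varies little for m in the interval [n, K(n)].
n, K = 1000, lambda n: 2 * n
vals = [ergodic_average(f, T, 0.0, m) for m in range(n, K(n) + 1, 100)]
print(max(vals) - min(vals))
```

For this particular system the convergence rate is in fact computable; the paper's point is that for a general computable measure-preserving transformation it need not be, while the local-stability bound is always explicit.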
Number theory and elementary arithmetic
Philosophia Mathematica, 2003
"... Elementary arithmetic (also known as “elementary function arithmetic”) is a fragment of firstorder arithmetic so weak that it cannot prove the totality of an iterated exponential function. Surprisingly, however, the theory turns out to be remarkably robust. I will discuss formal results that show t ..."
Abstract

Cited by 17 (5 self)
 Add to MetaCart
Elementary arithmetic (also known as “elementary function arithmetic”) is a fragment of first-order arithmetic so weak that it cannot prove the totality of an iterated exponential function. Surprisingly, however, the theory turns out to be remarkably robust. I will discuss formal results that show that many theorems of number theory and combinatorics are derivable in elementary arithmetic, and try to place these results in a broader philosophical context.
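To see the kind of growth at issue, here is a sketch (my own illustration) of the iterated exponential 2 ↑↑ n, the function whose totality elementary arithmetic cannot prove; Python's arbitrary-precision integers make the first few values easy to inspect:

```python
def tower(n: int) -> int:
    """Iterated exponential: a tower of n twos, 2^(2^(...^2))."""
    result = 1
    for _ in range(n):
        result = 2 ** result
    return result

print([tower(k) for k in range(5)])   # [1, 2, 4, 16, 65536]
print(tower(5).bit_length())          # 65537 bits: already astronomical
```

Each extra level of the tower exponentiates the previous value, which is precisely the iteration elementary arithmetic cannot certify as total.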
Foundational and mathematical uses of higher types
Reflections on the Foundations of Mathematics: Essays in Honor of Solomon Feferman, 1999
"... In this paper we develop mathematically strong systems of analysis in higher types which, nevertheless, are prooftheoretically weak, i.e. conservative over elementary resp. primitive recursive arithmetic. These systems are based on noncollapsing hierarchies ( n WKL+ ; n WKL+ ) of principles ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
In this paper we develop mathematically strong systems of analysis in higher types which are nevertheless proof-theoretically weak, i.e., conservative over elementary resp. primitive recursive arithmetic. These systems are based on non-collapsing hierarchies (n-WKL+; n-WKL+) of principles which generalize (and for n = 0 coincide with) the so-called ‘weak’ König's lemma WKL (which has been studied extensively in the context of second-order arithmetic) to logically more complex tree predicates. Whereas the second-order context used in the program of reverse mathematics requires an encoding of higher analytical concepts like continuous functions F : X → Y between Polish spaces X, Y, the more flexible language of our systems allows us to treat such objects directly. This is of relevance since the encoding of F used in reverse mathematics tacitly yields a constructively enriched notion of continuous function which, e.g., for F : ℕ → ℕ can be seen (in our higher-order context) …
Choice and uniformity in weak applicative theories
Logic Colloquium ’01, 2005
"... Abstract. We are concerned with first order theories of operations, based on combinatory logic and extended with the type W of binary words. The theories include forms of “positive ” and “bounded ” induction on W and naturally characterize primitive recursive and polytime functions (respectively). W ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
We are concerned with first-order theories of operations, based on combinatory logic and extended with the type W of binary words. The theories include forms of “positive” and “bounded” induction on W and naturally characterize the primitive recursive and polytime functions (respectively). We prove that the recursive content of the theories under investigation (i.e., the associated class of provably total functions on W) is invariant under the addition of 1. an axiom of choice for operations and a uniformity principle, restricted to positive conditions; 2. a (form of) self-referential truth, providing a fixed point theorem for predicates. As to the proof methods, we apply a kind of internal forcing semantics, nonstandard variants of realizability, and cut-elimination. §1. Introduction. In this paper, we deal with theories of abstract computable operations underlying so-called explicit mathematics, introduced by Feferman in the mid-seventies as a logical frame for formalizing Bishop-style constructive mathematics ([18], [19]). Following common usage, these theories …
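The combinatory-logic core of such applicative theories can be illustrated with a minimal SK-reduction sketch (my own illustration of the operational side only, not of the formal theories or of the type W of binary words):

```python
# Terms: the atoms 'S' and 'K', or a pair (f, a) meaning the application f a.

def reduce_once(t):
    """Perform one leftmost reduction step, or return None if t is normal."""
    if isinstance(t, tuple):
        f, a = t
        # K x y -> x
        if isinstance(f, tuple) and f[0] == 'K':
            return f[1]
        # S x y z -> x z (y z)
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):
            x, y, z = f[0][1], f[1], a
            return ((x, z), (y, z))
        rf = reduce_once(f)
        if rf is not None:
            return (rf, a)
        ra = reduce_once(a)
        if ra is not None:
            return (f, ra)
    return None

def normalize(t, fuel=1000):
    """Reduce to normal form, giving up after `fuel` steps."""
    while fuel > 0:
        r = reduce_once(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("no normal form within fuel")

# I = S K K behaves as the identity: (S K K) x -> K x (K x) -> x
I = (('S', 'K'), 'K')
print(normalize((I, 'x')))  # 'x'
```

The fuel bound matters because, unlike in the word-bounded theories discussed above, unrestricted combinatory terms need not terminate.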
On the Uniform Weak König's Lemma
1999
"... The socalled weak König's lemma WKL asserts the existence of an in nite path b in any in nite binary tree (given by a representing function f ). Based on this principle one can formulate subsystems of higherorder arithmetic which allow to carry out very substantial parts of classical mathematics b ..."
Abstract

Cited by 10 (5 self)
 Add to MetaCart
The so-called weak König's lemma WKL asserts the existence of an infinite path b in any infinite binary tree (given by a representing function f). Based on this principle one can formulate subsystems of higher-order arithmetic which allow one to carry out very substantial parts of classical mathematics but are Π₂-conservative over primitive recursive arithmetic PRA (and even weaker fragments of arithmetic). In [10] we established such conservation results relative to finite type extensions PRA^ω of PRA (together with a quantifier-free axiom of choice schema). In this setting one can also consider a uniform version UWKL of WKL, which asserts the existence of a functional which selects, uniformly in a given infinite binary tree f, an infinite path of that tree. This uniform version of WKL is of interest in the context of explicit mathematics as developed by S. Feferman. The elimination process in [10] can actually be used to eliminate even this uniform weak König's lemma, provided that PRA^ω has only a quantifier-free rule of extensionality QF-ER instead of the full axioms (E) of extensionality for all finite types. In this paper we show that in the presence of (E), UWKL is much stronger than WKL: whereas WKL remains Π₂-conservative over PRA, PRA^ω + (E) + UWKL contains (and is conservative over) full Peano arithmetic PA.
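A toy sketch of where the non-constructivity of WKL lives: if one is handed a decidable "this subtree is infinite" predicate, a path can be computed greedily, bit by bit; in general no such predicate is computable, and UWKL asks for the path uniformly in the tree. The tree below (binary strings with no two consecutive 1s) is a hypothetical example of my own, not from the paper:

```python
def member(s):
    """Tree of binary strings with no two consecutive 1s (an infinite tree)."""
    return '11' not in s

def subtree_infinite(s):
    # For this particular tree every member can be extended by 0s forever,
    # so each member roots an infinite subtree. For a general computable
    # tree this predicate is undecidable -- which is exactly where WKL
    # stops being constructive.
    return member(s)

def path(depth):
    """Greedily compute an initial segment of an infinite path."""
    s = ''
    for _ in range(depth):
        for b in '01':
            if subtree_infinite(s + b):
                s += b
                break
    return s

print(path(8))  # '00000000': the leftmost path through this tree
```

Every prefix the greedy loop produces roots an infinite subtree, so the construction never gets stuck; the oracle `subtree_infinite` is what an application of WKL silently supplies.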
Proof mining in L_1-approximation
2001
"... In this paper we present another case study in the general project of proof mining which means the logical analysis of prima facie noneffective proofs with the aim of extracting new computationally relevant data. We use techniques based on monotone functional interpretation (developed in [17]) to a ..."
Abstract

Cited by 10 (4 self)
 Add to MetaCart
In this paper we present another case study in the general project of proof mining, i.e., the logical analysis of prima facie non-effective proofs with the aim of extracting new computationally relevant data. We use techniques based on monotone functional interpretation (developed in [17]) to analyze Cheney's simplification [6] of Jackson's original proof [10] from 1921 of the uniqueness of the best L1-approximation of continuous functions f ∈ C[0, 1] by polynomials p ∈ P_n of degree ≤ n. Cheney's proof is non-effective in the sense that it is based on classical logic and on the non-computational principle WKL (binary König's lemma). The result of our analysis provides the first effective (in all parameters f, n and ε) uniform modulus of uniqueness (a concept which generalizes ‘strong uniqueness’, studied extensively in approximation theory). Moreover, the extracted modulus has the optimal ε-dependency, as follows from Kroó [21]. The paper also describes how the uniform modulus of uniqueness can be used to compute best L1-approximations of a fixed f ∈ C[0, 1] with arbitrary precision. We use this result to give a complexity upper bound on the computation of the best L1-approximation in [24].
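The idea of a modulus of uniqueness can be illustrated in the simplest case, degree-0 (constant) L1-approximation, where a best constant is a median of the function's values; the example below is a discretized illustration of my own, not the paper's construction:

```python
def l1_error(f, c, grid):
    """Discretized L1 distance between f and the constant polynomial c."""
    return sum(abs(f(x) - c) for x in grid) / len(grid)

f = lambda x: x * x                       # approximate f(x) = x^2 on [0, 1]
grid = [i / 1000 for i in range(1001)]

# For degree-0 approximation the L1-best constant is a median of the
# sampled values of f (a standard fact about L1 minimization).
best = sorted(f(x) for x in grid)[len(grid) // 2]
print(best)                               # the median of x^2 on the grid

# Uniqueness *with a modulus*: moving away from the minimizer increases
# the error by a definite amount, not merely "some" amount.
e0 = l1_error(f, best, grid)
print(l1_error(f, best + 0.1, grid) - e0 > 0)
print(l1_error(f, best - 0.1, grid) - e0 > 0)
```

A uniform modulus of uniqueness, as extracted in the paper, quantifies exactly how much the error must grow as a function of the distance from the minimizer, uniformly in f, n and ε; that is what makes the best approximation computable to arbitrary precision.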
Proof Interpretations and the Computational Content of Proofs. Draft of book in preparation
2007
"... This survey reports on some recent developments in the project of applying proof theory to proofs in core mathematics. The historical roots, however, go back to Hilbert’s central theme in the foundations of mathematics which can be paraphrased by the following question ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
This survey reports on some recent developments in the project of applying proof theory to proofs in core mathematics. The historical roots, however, go back to Hilbert's central theme in the foundations of mathematics, which can be paraphrased by the following question …
Computational interpretations of analysis via products of selection functions
CiE 2010, invited talk at the special session “Proof Theory and Computation”, 2010
"... We show that the computational interpretation of full comprehension via two wellknown functional interpretations (dialectica and modified realizability) corresponds to two closely related infinite products of selection functions. ..."
Abstract

Cited by 9 (8 self)
 Add to MetaCart
We show that the computational interpretation of full comprehension via two well-known functional interpretations (Dialectica and modified realizability) corresponds to two closely related infinite products of selection functions.
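The binary case of a product of selection functions can be sketched directly (following the usual Escardó–Oliva definition; the infinite products used in the paper iterate this construction over infinitely many rounds). All names below are illustrative:

```python
def binary_product(eps, delta):
    """(eps ⊗ delta)(p) returns (a, b): 'a' is chosen by eps against
    delta's best reply, and 'b' is delta's best reply to a."""
    def prod(p):
        def best_reply(x):
            return delta(lambda y: p(x, y))
        a = eps(lambda x: p(x, best_reply(x)))
        return (a, best_reply(a))
    return prod

def argmax(domain):
    """Selection function: pick the point of 'domain' maximizing its argument."""
    return lambda q: max(domain, key=q)

# Maximize p(x, y) = x*y - x over {0,1,2} x {0,1,2}, played as a
# two-move game: the product computes an optimal pair of moves.
p = lambda x, y: x * y - x
pair = binary_product(argmax([0, 1, 2]), argmax([0, 1, 2]))(p)
print(pair)   # (2, 2)
```

Read game-theoretically, `eps` and `delta` are the two players' policies and the product computes optimal play by backward induction; iterating the product transfinitely is what yields the interpretations of comprehension discussed in the abstract.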