Results 1–10 of 81
Local stability of ergodic averages
 Transactions of the American Mathematical Society
Abstract

Cited by 29 (5 self)
We consider the extent to which one can compute bounds on the rate of convergence of a sequence of ergodic averages. It is not difficult to construct an example of a computable Lebesgue-measure-preserving transformation of [0, 1] and a characteristic function f = χ_A such that the ergodic averages Aₙf do not converge to a computable element of L²([0, 1]). In particular, there is no computable bound on the rate of convergence for that sequence. On the other hand, we show that, for any nonexpansive linear operator T on a separable Hilbert space and any element f, it is possible to compute a bound on the rate of convergence of (Aₙf) from T, f, and the norm ‖f*‖ of the limit. In particular, if T is the Koopman operator arising from a computable ergodic measure-preserving transformation of a probability space X and f is any computable element of L²(X), then there is a computable bound on the rate of convergence of the sequence (Aₙf). The mean ergodic theorem is equivalent to the assertion that for every function K(n) and every ε > 0, there is an n with the property that the ergodic averages Aₘf are stable to within ε on the interval [n, K(n)]. Even in situations where the sequence (Aₙf) does not have a computable limit, one can give explicit bounds on such n in terms of K and ‖f‖/ε. This tells us how far one has to search to find an n so that the ergodic averages are “locally stable” on a large interval. We use these bounds to obtain a similarly explicit version of the pointwise ergodic theorem, and show that our bounds are qualitatively different from ones that can be obtained using upcrossing inequalities due to Bishop and Ivanov. Finally, we explain how our positive results can be viewed as an application of a body of general proof-theoretic methods falling under the heading of “proof mining.”
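The ergodic averages in this abstract, Aₙf(x) = (1/n) Σ_{k<n} f(Tᵏx), can be computed directly for a concrete computable measure-preserving transformation. A minimal Python sketch, using an irrational rotation of [0, 1) and f the indicator of [0, 1/2) (an illustrative choice of mine, not an example taken from the paper), for which the averages converge to the integral 1/2:

```python
import math

def ergodic_average(f, T, x0, n):
    """Compute A_n f(x0) = (1/n) * sum_{k=0}^{n-1} f(T^k(x0))."""
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = T(x)
    return total / n

alpha = math.sqrt(2) - 1                 # irrational rotation number
T = lambda x: (x + alpha) % 1.0          # ergodic rotation of [0, 1)
f = lambda x: 1.0 if x < 0.5 else 0.0    # indicator of [0, 1/2)

for n in (10, 100, 1000, 10000):
    print(n, ergodic_average(f, T, 0.0, n))
```

For this rotation the convergence happens to be fast, since √2 − 1 is badly approximable; the paper's point is that for other computable transformations no computable bound on this rate need exist.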
Modified Bar Recursion and Classical Dependent Choice
 In Logic Colloquium 2001
Abstract

Cited by 27 (17 self)
We introduce a variant of Spector's bar recursion in finite types to give a realizability interpretation of the classical axiom of dependent choice, allowing for the extraction of witnesses from proofs of Π⁰₂ formulas in classical analysis. We also give a bar-recursive definition of the fan functional and study the relationship of our variant of bar recursion with others.
Number theory and elementary arithmetic
 Philosophia Mathematica
, 2003
Abstract

Cited by 19 (6 self)
Elementary arithmetic (also known as “elementary function arithmetic”) is a fragment of first-order arithmetic so weak that it cannot prove the totality of an iterated exponential function. Surprisingly, however, the theory turns out to be remarkably robust. I will discuss formal results that show that many theorems of number theory and combinatorics are derivable in elementary arithmetic, and try to place these results in a broader philosophical context.
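The iterated exponential mentioned in this abstract is the canonical example of a function each of whose values is trivially computable, yet whose totality elementary arithmetic cannot prove. A minimal sketch (the function name is my own):

```python
def iterated_exp(base, height):
    """Compute the exponential tower base^(base^(...^base)) of the given
    height; height 0 gives 1. Each instance terminates, but elementary
    arithmetic cannot prove the function total as height varies."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result
```

Already iterated_exp(2, 5) = 2^65536 has tens of thousands of decimal digits, which is why the function's growth escapes the elementary (finitely iterated exponential) bounds available in the theory.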
Proof Interpretations and the Computational Content of Proofs. Draft of book in preparation
, 2007
Abstract

Cited by 14 (1 self)
This survey reports on some recent developments in the project of applying proof theory to proofs in core mathematics. The historical roots, however, go back to Hilbert’s central theme in the foundations of mathematics which can be paraphrased by the following question
Foundational and mathematical uses of higher types
 Reflections on the Foundations of Mathematics: Essays in Honor of Solomon Feferman
, 1999
Abstract

Cited by 13 (4 self)
In this paper we develop mathematically strong systems of analysis in higher types which, nevertheless, are proof-theoretically weak, i.e., conservative over elementary and primitive recursive arithmetic, respectively. These systems are based on non-collapsing hierarchies (n-WKL⁺; n-WKL⁺) of principles which generalize (and for n = 0 coincide with) the so-called ‘weak’ König's lemma WKL (which has been studied extensively in the context of second-order arithmetic) to logically more complex tree predicates. Whereas the second-order context used in the program of reverse mathematics requires an encoding of higher analytical concepts like continuous functions F : X → Y between Polish spaces X, Y, the more flexible language of our systems allows us to treat such objects directly. This is of relevance as the encoding of F used in reverse mathematics tacitly yields a constructively enriched notion of continuous function which, e.g., for F : ℕ → ℕ can be seen (in our higher-order context)
Choice and uniformity in weak applicative theories
 Logic Colloquium ’01
, 2005
Abstract

Cited by 11 (0 self)
We are concerned with first-order theories of operations, based on combinatory logic and extended with the type W of binary words. The theories include forms of “positive” and “bounded” induction on W and naturally characterize the primitive recursive and polytime functions (respectively). We prove that the recursive content of the theories under investigation (i.e., the associated class of provably total functions on W) is invariant under addition of 1. an axiom of choice for operations and a uniformity principle, restricted to positive conditions; 2. a (form of) self-referential truth, providing a fixed point theorem for predicates. As to the proof methods, we apply a kind of internal forcing semantics, nonstandard variants of realizability, and cut-elimination. §1. Introduction. In this paper, we deal with theories of abstract computable operations, underlying the so-called explicit mathematics, introduced by Feferman in the mid-seventies as a logical frame to formalize Bishop-style constructive mathematics ([18], [19]). Following a common usage, these theories
Proof mining in L₁-approximation
, 2001
Abstract

Cited by 10 (4 self)
In this paper we present another case study in the general project of proof mining, which means the logical analysis of prima facie noneffective proofs with the aim of extracting new computationally relevant data. We use techniques based on monotone functional interpretation (developed in [17]) to analyze Cheney's simplification [6] of Jackson's original proof [10] from 1921 of the uniqueness of the best L₁-approximation of continuous functions f ∈ C[0, 1] by polynomials p ∈ Pₙ of degree ≤ n. Cheney's proof is noneffective in the sense that it is based on classical logic and on the noncomputational principle WKL (binary König's lemma). The result of our analysis provides the first effective (in all parameters f, n, and ε) uniform modulus of uniqueness (a concept which generalizes ‘strong uniqueness’, studied extensively in approximation theory). Moreover, the extracted modulus has the optimal ε-dependency, as follows from Kroó [21]. The paper also describes how the uniform modulus of uniqueness can be used to compute the best L₁-approximations of a fixed f ∈ C[0, 1] with arbitrary precision. We use this result to give a complexity upper bound on the computation of the best L₁-approximation in [24].
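The simplest instance of best L₁-approximation, degree n = 0, already shows the flavor of the uniqueness question: over a finite sample, a constant minimizing the summed absolute error is a median of the sample values, and it is unique exactly when the sample size is odd. A minimal discretized sketch (my own illustration, not the paper's method):

```python
import statistics

def best_constant_l1(values):
    """A constant c minimizing sum(|v - c| for v in values) is a median;
    for an odd number of samples the minimizer is unique."""
    return statistics.median(values)

# Sample f(x) = x^2 on an odd-size grid over [0, 1].
xs = [i / 100 for i in range(101)]
fs = [x * x for x in xs]
c = best_constant_l1(fs)   # the median sample value, here (0.5)**2 = 0.25
```

For higher degrees n the minimizer is no longer given by such a closed form, which is where an effective modulus of uniqueness, as extracted in the paper, becomes the key to computing the best approximation with arbitrary precision.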
Weak theories of nonstandard arithmetic and analysis
 Reverse Mathematics
, 2001
Abstract

Cited by 10 (6 self)
A general method of interpreting weak higher-type theories of nonstandard arithmetic in their standard counterparts is presented. In particular, this provides natural nonstandard conservative extensions of primitive recursive arithmetic, elementary recursive arithmetic, and polynomial-time computable arithmetic. A means of formalizing basic real analysis in such theories is sketched. §1. Introduction. Nonstandard analysis, as developed by Abraham Robinson, provides an elegant paradigm for the application of metamathematical ideas in mathematics. The idea is simple: use model-theoretic methods to build rich extensions of a mathematical structure, like second-order arithmetic or a universe of sets; reason about what is true in these enriched structures;