Results 1–10 of 10
Effective Fractal Dimension in Algorithmic Information Theory, 2006
Cited by 7 (7 self)
Hausdorff dimension assigns a dimension value to each subset of an arbitrary metric space. In Euclidean space, this concept coincides with our intuition that …
Finite-state dimension and lossy decompressors, 2006
Cited by 4 (3 self)
This paper examines information-theoretic questions regarding the difficulty of compressing data versus the difficulty of decompressing data and the role that information loss plays in this interaction. Finite-state compression and decompression are shown to be of equivalent difficulty, even when the decompressors are allowed to be lossy.
Base Invariance of Feasible Dimension, 2013
Cited by 3 (2 self)
Effective fractal dimensions were introduced by Lutz (2003) in order to study the dimensions of individual sequences and to quantitatively analyze the structure of complexity classes. Interesting connections between effective dimensions and information theory were also found, implying that constructive dimension and polynomial-space dimension are invariant under base change while finite-state dimension is not. We consider the intermediate case, polynomial-time dimension, and prove that it is indeed invariant under base change, by a nontrivial argument quite different from the Kolmogorov-complexity arguments used in the other cases. Polynomial-time dimension can be characterized in terms of prediction loss rate, entropy, and compression algorithms. Our result implies that, in an asymptotic way, each of these concepts is invariant under base change. A corollary of the main theorem is that any number with polynomial-time dimension 1 (which may be established in any base) is an absolutely normal number, providing an interesting source of absolute normality.
Feasible Depth
Cited by 2 (0 self)
This paper introduces two complexity-theoretic formulations of Bennett’s computational depth: finite-state depth and polynomial-time depth. It is shown that for both formulations, trivial and random infinite sequences are shallow, and a slow growth law holds, implying that deep sequences cannot be created easily from shallow sequences. Furthermore, the E analogue of the halting language is shown to be polynomial-time deep, by proving a more general result: every language to which a non-negligible subset of E can be reduced in uniform exponential time is polynomial-time deep.
Finite-state dimension and real arithmetic
In Proceedings of the 33rd International Colloquium on Automata, Languages, and Programming, Lecture Notes in Computer Science, 2006
Cited by 2 (0 self)
We use entropy rates and Schur concavity to prove that, for every integer k ≥ 2, every nonzero rational number q, and every real number α, the base-k expansions of α, q + α, and qα all have the same finite-state dimension and the same finite-state strong dimension. This extends, and gives a new proof of, Wall’s 1949 theorem stating that the sum or product of a nonzero rational number and a Borel normal number is always Borel normal.
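The theorem is asymptotic, but its flavor can be checked empirically. Below is an illustrative sketch, not from the paper: the base-10 Champernowne constant 0.12345678910111213… is a standard example of a normal number, and multiplying a long prefix of its expansion by a nonzero rational (here the integer 3) leaves the digit frequencies near the uniform 1/10, as Wall’s theorem predicts in the limit.

```python
from collections import Counter

def champernowne_prefix(n):
    """First n digits after the point of 0.12345678910111213..."""
    out, total, i = [], 0, 1
    while total < n:
        s = str(i)
        out.append(s)
        total += len(s)
        i += 1
    return "".join(out)[:n]

def digit_freqs(digits):
    """Relative frequency of each decimal digit in a digit string."""
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in "0123456789"}

n = 38889  # the concatenation of 1..9999, a convenient cut point
alpha = champernowne_prefix(n)
# Digits of 3*alpha, computed on the truncated expansion as a big integer;
# truncation error can affect only the last few digits.
tripled = str(3 * int(alpha))[:n]

for digits in (alpha, tripled):
    f = digit_freqs(digits)
    # Every digit frequency is already within a few percent of 1/10.
    assert all(abs(f[d] - 0.1) < 0.05 for d in "0123456789")
```

At this prefix length the digit ‘0’ is still slightly underrepresented (it never appears as a leading digit), which is a reminder that normality, like the theorem above, is a statement about limits.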
A Divergence Formula for Randomness and Dimension
Cited by 1 (1 self)
If S is an infinite sequence over a finite alphabet Σ and β is a probability measure on Σ, then the dimension of S with respect to β, written dim^β(S), is a constructive version of Billingsley dimension that coincides with the (constructive Hausdorff) dimension dim(S) when β is the uniform probability measure. This paper shows that dim^β(S) and its dual Dim^β(S), the strong dimension of S with respect to β, can be used in conjunction with randomness to measure the similarity of two probability measures α and β on Σ. Specifically, we prove that the divergence formula dim^β(R) = Dim^β(R) = …
Computability Theory, Algorithmic Randomness and Turing’s Anticipation
This article looks at the applications of Turing’s Legacy in computation, particularly to the theory of algorithmic randomness, where classical mathematical concepts such as measure could be made computational. It also traces Turing’s anticipation of this theory in an early manuscript.
Turing and Randomness
In an unpublished manuscript, Turing anticipated the basic ideas behind the theory of algorithmic randomness by nearly 30 years. Turing used a computationally constrained version of “measure theory” to answer a question of Borel in number theory. This question concerned constructing what are called “absolutely normal” numbers. In this article, I will try to explain what these mysterious terms mean, and what Turing did.

1 Borel, number theory and normality
1.1 Repeated decimals in fractions
Mathematicians have always been fascinated with patterns in numbers. At a very early stage of our education, we learn about the special nature of the decimal expansions of rational numbers. Recall that a real number is rational if it is a fraction: it can be expressed as p/q for some integers p, q. The reader might remember from school, or maybe first year university, that numbers like √2 are not rational, and it can be shown that “most” numbers (in a precise mathematical sense) are irrational. Long ago, the Greeks showed that a real number between 0 and 1 is rational if and only if it has a finite decimal expansion, or a decimal expansion which repeats from some point onwards. For example, 1 …
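The repeating-decimal fact recalled above can be made concrete. A small sketch, not from the article: long division of p/q must revisit one of the at most q possible remainders, which is exactly why every rational number has a terminating or eventually periodic expansion.

```python
def decimal_expansion(p, q):
    """Long-divide p/q (with 0 <= p < q) and split the decimal expansion
    into its non-repeating prefix and its repeating block."""
    assert 0 <= p < q
    seen = {}          # remainder -> index of the digit it produced
    digits = []
    r = p
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // q))
        r %= q
    if r == 0:
        return "".join(digits), ""          # terminating expansion
    i = seen[r]                             # a remainder recurred: period found
    return "".join(digits[:i]), "".join(digits[i:])

# 1/7 = 0.(142857), 1/6 = 0.1(6), 1/4 = 0.25
assert decimal_expansion(1, 7) == ("", "142857")
assert decimal_expansion(1, 6) == ("1", "6")
assert decimal_expansion(1, 4) == ("25", "")
```

Because only q − 1 nonzero remainders exist, the repeating block of p/q has length at most q − 1, a bound attained by 1/7 above.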
Normality and differentiability, 2012
A recent theorem of Brattka, Miller and Nies [1] shows that a real number r in the unit interval is computably random if and only if every nondecreasing computable function from the unit interval to the real numbers is differentiable at r. Here we establish a counterpart result that characterizes normality to a given base in terms of differentiability of functions computable with finite transducers (injective finite state automata). For a real number r we consider the unique expansion in base b of the form

r = ⌊r⌋ + Σ_{n=1}^∞ a_n b^(−n),

where the integers satisfy 0 ≤ a_n < b, and a_n < b − 1 infinitely many times. This last condition on the a_n ensures a unique representation of every rational number. Let us recall that Borel’s original definition of normality in [2] is equivalent to the following simpler one [3].

Definition. A real number r is simply normal to a given base b if each digit in {0, 1, …, b − 1} occurs with the same limiting frequency 1/b in the expansion of r in base b. A number is normal to base b if it is simply normal to each base b^i, for every positive integer i.

For a finite set of symbols A we write A* and A^ω to denote, respectively, the set of finite and infinite sequences of symbols in A.

Definition. (1) A finite-state transducer is a 4-tuple C = ⟨Q, q0, δ, o⟩, where Q is a finite set of states, q0 ∈ Q is the initial state, δ: Q × A → Q is the transition function and o: Q × A → A* is the output function. A finite-state transducer processes the input symbols according to the current state q. When a symbol a ∈ A is read, the automaton moves to state δ(q, a) and outputs o(q, a). The extensions of δ and o to process strings are δ*: Q × A* → Q and o*: Q × A* → A* such that, for a ∈ A, s ∈ A*, and λ the empty string, δ*(q, λ) = q, …
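The transducer definition in that abstract is easy to render directly in code. A minimal sketch (class and variable names are mine, not the paper’s): the tables δ and o drive the machine, and the loop in `run` is precisely the string extensions δ* and o*.

```python
class FiniteStateTransducer:
    """C = (Q, q0, delta, o): the state set is implicit in the table keys."""
    def __init__(self, start, delta, out):
        self.start = start
        self.delta = delta  # (state, symbol) -> next state
        self.out = out      # (state, symbol) -> output string (possibly empty)

    def run(self, word):
        """The extensions delta* and o*: return (final state, concatenated output)."""
        q, pieces = self.start, []
        for a in word:
            pieces.append(self.out[(q, a)])
            q = self.delta[(q, a)]
        return q, "".join(pieces)

# A one-state injective example over A = {0, 1}: swap the two symbols.
# Injectivity matters because the paper's characterization uses
# injective finite-state transducers.
swap = FiniteStateTransducer(
    start="q0",
    delta={("q0", "0"): "q0", ("q0", "1"): "q0"},
    out={("q0", "0"): "1", ("q0", "1"): "0"},
)
assert swap.run("0110") == ("q0", "1001")
```

Since `run` never consults symbols it has not yet read, it processes an infinite sequence digit by digit in the same way, which is how such machines act on base-b expansions.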
Base Invariance of Feasible Dimension, 2014
Effective fractal dimensions were introduced by Lutz (2003) in order to study the dimensions of individual sequences and to quantitatively analyze the structure of complexity classes. Interesting connections between effective dimensions and information theory were also found, implying that constructive dimension and polynomial-space dimension are invariant under base change while finite-state dimension is not. We consider the intermediate case, polynomial-time dimension, and prove that it is indeed invariant under base change, by a nontrivial argument quite different from the Kolmogorov-complexity arguments used in the other cases. Polynomial-time dimension can be characterized in terms of prediction loss rate, entropy, and compression algorithms. Our result implies that, in an asymptotic way, each of these concepts is invariant under base change. A corollary of the main theorem is that any number with polynomial-time dimension 1 (which may be established in any base) is an absolutely normal number, providing an interesting source of absolute normality.