Results 1–10 of 12
Recursively Enumerable Reals and Chaitin Ω Numbers
Abstract

Cited by 34 (3 self)
A real is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay [23] and Chaitin [10] we say that an r.e. real α dominates an r.e. real β if from a good approximation of β from below one can compute a good approximation of α from below. We shall study this relation and characterize it in terms of relations between r.e. sets. Solovay's [23] Ω-like numbers are the maximal r.e. real numbers with respect to this order. They are random r.e. real numbers. The halting probability of a universal self-delimiting Turing machine (Chaitin's Ω number, [9]) is also a random r.e. real. Solovay showed that any Chaitin Ω number is Ω-like. In this paper we show that the converse implication is true as well: any Ω-like real in the unit interval is the halting probability of a universal self-delimiting Turing machine.
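The definition in this abstract can be illustrated with a toy example (not from the paper): the real x = Σ_{p prime} 2^(-p) is recursively enumerable, witnessed by the recursive, increasing sequence of rational partial sums converging to x from below. The sketch below uses exact rationals to make the monotone lower approximation explicit.

```python
from fractions import Fraction

def is_prime(n: int) -> bool:
    """Trial division; recursive in the classical sense."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def lower_approximations(steps: int) -> list:
    """Toy illustration of an r.e. real: the partial sums of
    x = sum over primes p of 2**(-p) form a recursive,
    non-decreasing sequence of rationals converging to x
    from below."""
    total, out = Fraction(0), []
    for n in range(2, steps + 2):
        if is_prime(n):
            total += Fraction(1, 2**n)
        out.append(total)  # each entry is a rational lower bound on x
    return out

approx = lower_approximations(20)
# the sequence never decreases and stays below the limit x < 1/2
```

Domination, in the sense of the abstract, would then mean converting good terms of one such lower approximation into good terms of another.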
Recursive computational depth
 Information and Computation
, 1999
Abstract

Cited by 18 (2 self)
In the 1980's, Bennett introduced computational depth as a formal measure of the amount of computational history that is evident in an object's structure. In particular, Bennett identified the classes of weakly deep and strongly deep sequences, and showed that the halting problem is strongly deep. Juedes, Lathrop, and Lutz subsequently extended this result by defining the class of weakly useful sequences, and proving that every weakly useful sequence is strongly deep. The present paper investigates refinements of Bennett's notions of weak and strong depth, called recursively weak depth (introduced by Fenner, Lutz and Mayordomo) and recursively strong depth (introduced here). It is argued that these refinements naturally capture Bennett's idea that deep objects are those which "contain internal evidence of a nontrivial causal history." The fundamental properties of recursive computational depth are developed, and it is shown that the recursively weakly (respectively, strongly) deep sequences form a proper subclass of the class of weakly (respectively, strongly) deep sequences. The above-mentioned theorem of Juedes, Lathrop, and Lutz is then strengthened by proving that every weakly useful sequence is recursively strongly deep. It follows from these results that not every strongly deep sequence is weakly useful, thereby answering a question posed by Juedes.
Computational depth and reducibility
 Theoretical Computer Science
, 1994
Abstract

Cited by 12 (2 self)
This paper reviews and investigates Bennett's notions of strong and weak computational depth (also called logical depth) for infinite binary sequences. Roughly, an infinite binary sequence x is defined to be weakly useful if every element of a nonnegligible set of decidable sequences is reducible to x in recursively bounded time. It is shown that every weakly useful sequence is strongly deep. This result (which generalizes Bennett's observation that the halting problem is strongly deep) implies that every high Turing degree contains strongly deep sequences. It is also shown that, in the sense of Baire category, almost ...
A Characterization of C.E. Random Reals
 THEORETICAL COMPUTER SCIENCE
, 1999
Abstract

Cited by 10 (0 self)
A real α is computably enumerable if it is the limit of a computable, increasing, converging sequence of rationals; α is random if its binary expansion is a random sequence. Our aim is to offer a self-contained proof, based on the papers [7, 14, 4, 13], of the following theorem: a real is c.e. and random if and only if it is a Chaitin Ω real, i.e., the halting probability of some universal self-delimiting Turing machine.
Some basic problems with complex systems
, 1999
Abstract

Cited by 9 (4 self)
From an engineering perspective, it is well known that predicting and controlling complex systems poses numerous problems. In addition, there are also difficulties in understanding the concept of complexity from the perspective of physics and the epistemology of physics. Three outstanding topics in this regard are discussed: 1. the fundamental context-dependence of the definition of complexity, 2. the relation between complexity and meaning, and 3. the restrictions on the applicability of limit theorems in the study of complex systems.
A Glimpse into Algorithmic Information Theory
 Logic, Language and Computation, Volume 3, CSLI Series
, 1999
Abstract

Cited by 6 (6 self)
This paper is a subjective, short overview of algorithmic information theory. We critically discuss various equivalent algorithmical models of randomness motivating a "randomness hypothesis". Finally some recent results on computably enumerable random reals are reviewed. 1 Randomness: An Informal Discussion In which we discuss some difficulties arising in defining randomness. Suppose that one is watching a simple pendulum swing back and forth, recording 0 if it swings clockwise at a given instant and 1 if it swings counterclockwise. Suppose further that after some time the record looks as follows: 10101010101010101010101010101010. At this point one would like to deduce a "theory" from the experiment. The "theory" should account for the data presently available and make "predictions" about future observations. How should one proceed? It is obvious that there are many "theories" that one could write down for the given data, for example: 10101010101010101010101010101010000000000...
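The intuition behind the pendulum example, that a lawful record like 1010... is far from random, can be made concrete with a compression experiment. This is a hedged sketch, not from the paper: it uses zlib as a practical stand-in for the (uncomputable) Kolmogorov complexity, which any real compressor only upper-bounds.

```python
import random
import zlib

random.seed(0)

# The pendulum record: a perfectly periodic bit string.
periodic = ("10" * 2048).encode()

# Typical "patternless" data of the same length for comparison.
noisy = bytes(random.getrandbits(8) for _ in range(len(periodic)))

# Compressed-size ratios approximate (an upper bound on) incompressibility.
ratio_periodic = len(zlib.compress(periodic, 9)) / len(periodic)
ratio_noisy = len(zlib.compress(noisy, 9)) / len(noisy)

# The periodic record shrinks to a tiny fraction of its length,
# while the noisy data hardly compresses at all.
```

A short program ("print '10' 2048 times") generates the whole periodic record, which is exactly why no one would call it random; algorithmic information theory turns this observation into a definition.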
Resource Bounded Randomness and Computational Complexity
 Theoretical Computer Science
, 1997
Abstract

Cited by 4 (2 self)
We give a survey of resource bounded randomness concepts and show their relations to each other. Moreover, we introduce several new resource bounded randomness concepts corresponding to the classical randomness concepts. We show that the notion of polynomial time bounded Ko randomness is independent of the notions of polynomial time bounded Lutz, Schnorr and Kurtz randomness. Lutz has conjectured that, for a given time or space bound, the corresponding resource bounded Lutz randomness is a proper refinement of resource bounded Schnorr randomness. We answer this conjecture for the case of polynomial time bound in this paper. Moreover, we show that polynomial time bounded Schnorr randomness is a proper refinement of polynomial time bounded Kurtz randomness too. In contrast to this result, however, we also show that the notions of polynomial time bounded Lutz, Schnorr and Kurtz randomness coincide in the case of recursive sets, whence it suffices to study the notion of resource bounded Lu...
Recursively enumerable reals and Chaitin Ω numbers
, 1998
Abstract
Communicated by M. Ito. A real is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay (unpublished manuscript, IBM Thomas J. Watson ...
The Global Power of . . .
Abstract
It is shown that, for every k ≥ 0 and every fixed algorithmically random language B, there is a language that is polynomial-time truth-table reducible in k + 1 queries to B but not truth-table reducible in k queries in any amount of time to any algorithmically random language C. In particular, this yields the separation P k-tt(RAND) ⊊ P (k+1)-tt(RAND), where RAND is the set of all algorithmically random languages.
RANDOM SCATTERING OF BITS BY PREDICTION
Abstract
Abstract. We investigate a population of binary mistake sequences that result from learning with parametric models of different order. We obtain estimates of their error, algorithmic complexity and divergence from a purely random Bernoulli sequence. We study the relationship of these variables to the learner's information density parameter ρ, which is defined as the ratio between the lengths of the compressed to uncompressed files that contain the learner's decision rule. The results indicate that good learners have a low information density ρ while bad learners have a high ρ. Bad learners generate atypically chaotic mistake sequences while good learners generate typically chaotic sequences that divide into two subgroups: the first consists of the typically stochastic sequences (with low divergence) which includes the sequences generated by the Bayes optimal predictor. The second subgroup consists of the atypically stochastic (but still typically chaotic) sequences that deviate from truly random Bernoulli sequences. Based on the static algorithmic interference model of [15] the learner here acts as a static structure which scatters the ...
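The information-density parameter ρ described in this abstract is straightforward to sketch. The following is a hedged illustration, not the paper's procedure: the paper does not specify a compressor, so zlib is used here as a stand-in, and the decision rule shown is a made-up example.

```python
import zlib

def information_density(decision_rule: bytes) -> float:
    """Sketch of the information-density parameter rho:
    compressed length / uncompressed length of the file
    holding the learner's decision rule (zlib is an assumed
    stand-in for the paper's unspecified compressor)."""
    return len(zlib.compress(decision_rule, 9)) / len(decision_rule)

# A hypothetical, highly regular decision rule: many repeated clauses.
simple_rule = b"if x > 0.5: predict 1 else predict 0\n" * 100
rho = information_density(simple_rule)
# a regular rule compresses well, giving the low rho the paper
# associates with good learners
```

On this reading, a "bad" learner's rule would be closer to incompressible, pushing ρ toward 1.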