Results 1–10 of 15
Recursively Enumerable Reals and Chaitin Ω Numbers
Abstract

Cited by 34 (3 self)
A real is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay [23] and Chaitin [10] we say that an r.e. real α dominates an r.e. real β if from a good approximation of α from below one can compute a good approximation of β from below. We shall study this relation and characterize it in terms of relations between r.e. sets. Solovay's [23] Ω-like numbers are the maximal r.e. real numbers with respect to this order. They are random r.e. real numbers. The halting probability of a universal self-delimiting Turing machine (Chaitin's Ω number, [9]) is also a random r.e. real. Solovay showed that any Chaitin Ω number is Ω-like. In this paper we show that the converse implication is true as well: any Ω-like real in the unit interval is the halting probability of a universal self-delimiting Turing machine.
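The opening definition (an r.e. real is the limit of a recursive, increasing, converging sequence of rationals) can be illustrated with a miniature sketch. Everything here is hypothetical and not from the paper: the `HALTING` table stands in for a self-delimiting machine, and the sequence approximates its halting probability from below, in the style of Chaitin's Ω.

```python
from fractions import Fraction

# Toy sketch (not the paper's construction): a hypothetical self-delimiting
# "machine" given as a lookup table from programs to their halting times.
HALTING = {"0": 3, "10": 7, "110": 12}  # program -> steps before it halts

def approximation(t):
    """Rational lower bound on the halting probability after t steps.

    Summing 2^(-|p|) over every program p seen to halt within t steps
    yields a recursive, increasing sequence of rationals converging to
    the machine's halting probability: an r.e. real approximated from
    below, exactly as in the definition above.
    """
    return sum((Fraction(1, 2 ** len(p))
                for p, steps in HALTING.items() if steps <= t),
               Fraction(0))

approx = [approximation(t) for t in range(15)]
# The sequence increases from 0 and converges to 1/2 + 1/4 + 1/8 = 7/8.
```

For a genuine universal machine the limiting value Ω is not computable; only this kind of monotone approximation from below is, which is what makes such reals recursively enumerable rather than recursive.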
Algorithmic Entropy Of Sets
[9] M. Ferbus-Zanda and S. Grigorieff. Is randomness "native" to Computer Science? Logic in Computer Science Column, Bulletin of the EATCS, vol. 74
, 1976
Abstract

Cited by 16 (5 self)
In a previous paper a theory of program size formally identical to information theory was developed. The entropy of an individual finite object was defined to be the size in bits of the smallest program for calculating it. It was shown that this is −log₂ of the probability that the object is obtained by means of a program whose successive bits are chosen by flipping an unbiased coin. Here a theory of the entropy of recursively enumerable sets of objects is proposed which includes the previous theory as the special case of sets having a single element. The primary concept in the generalized theory is the probability that a computing machine enumerates a given set when its program is manufactured by coin flipping. The entropy of a set is defined to be −log₂ of this probability. 1. Introduction: In a classical paper on computability by probabilistic machines [1], de Leeuw et al. showed that if a machine with a random element can enumerate a specific set o...
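The stated identity between entropy and −log₂ of an output probability can be made concrete in a small sketch. The `PROGRAMS` table below is a made-up prefix-free machine, not anything from the paper; for a toy machine, H and −log₂ P agree only up to an additive constant, which is also the form the theorem takes for universal machines.

```python
import math

# Hypothetical prefix-free machine: program -> the output it prints.
PROGRAMS = {"0": "A", "10": "A", "110": "B"}

def P(x):
    """Probability that coin-flipped program bits make the machine print x."""
    return sum(2.0 ** -len(p) for p, out in PROGRAMS.items() if out == x)

def H(x):
    """Entropy of x: size in bits of the smallest program printing x."""
    return min(len(p) for p, out in PROGRAMS.items() if out == x)

# For "A": P = 1/2 + 1/4 = 0.75 and H = 1, while -log2(0.75) is about 0.415;
# the two agree up to the additive constant the theory allows.
```

The same two quantities generalize to sets: P becomes the probability that a coin-flip program makes the machine enumerate the set, and the set's entropy is −log₂ of that probability.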
A Highly Random Number
, 2001
Abstract

Cited by 15 (5 self)
In his celebrated 1936 paper Turing defined a machine to be circular iff it performs an infinite computation outputting only finitely many symbols. We define the probability that an arbitrary machine is circular and we prove that it is a random number that goes beyond Ω, the probability that a universal self-delimiting machine halts. The algorithmic complexity of this number is strictly greater than that of Ω, but similar to the algorithmic complexity of Ω′, the halting probability of an oracle machine. What makes it interesting is that it is an example of a highly random number definable without considering oracles.
Randomness, computability, and density
 SIAM Journal on Computing
, 2002
Abstract

Cited by 13 (6 self)
1. Introduction: In this paper we are concerned with effectively generated reals in the interval (0, 1] and their relative randomness. In what follows, real and rational will mean positive real and positive rational, respectively. It will be convenient to work modulo 1, that is, identifying n + α and α for any n ∈ ω and α ∈ (0, 1], and we do this below without further comment.
Another Example of Higher Order Randomness
 FUNDAMENTA INFORMATICAE
, 2002
Abstract

Cited by 5 (2 self)
We consider the notion of algorithmic randomness relative to an oracle. We prove that the probability that a program for infinite computations (a program that never halts) outputs a cofinite set is random in the second jump of the halting problem. Indeed, we prove that this probability is exactly as random as the halting probability of a universal machine equipped with an oracle for the second jump of the halting problem, in spite of the fact that it is defined without considering oracles.
Kolmogorov complexity for possibly infinite computations
 Journal of Logic, Language and Information
, 2005
Abstract

Cited by 2 (0 self)
In this paper we study the Kolmogorov complexity of non-effective computations, that is, either halting or non-halting computations on Turing machines. This complexity function is defined as the length of the shortest input that produces a desired output via a possibly non-halting computation. Clearly this function gives a lower bound on the classical Kolmogorov complexity. In particular, if the machine is allowed to overwrite its output, this complexity coincides with the classical Kolmogorov complexity for halting computations relative to the first jump of the halting problem. However, on machines that cannot erase their output (called monotone machines), we prove that our complexity for non-effective computations and the classical Kolmogorov complexity separate as much as we want. We also consider the prefix-free complexity for possibly infinite computations. We study several properties of the graphs of these complexity functions and especially their oscillations with respect to the complexities for effective computations.
Kolmogorov complexity and computably enumerable sets
, 2011
Abstract

Cited by 2 (2 self)
We study the computably enumerable sets in terms of: (a) the Kolmogorov complexity of their initial segments; (b) the Kolmogorov complexity of finite programs when they are used as oracles. We present an extended discussion of the existing research on this topic, along with recent developments and open problems. Besides this survey, our main original result is the following characterization of the computably enumerable sets with trivial initial segment prefix-free complexity. A computably enumerable set A is K-trivial if and only if the family of sets with complexity bounded by the complexity of A is uniformly computable from the halting problem.