Results 1–10 of 39
Relativizing Chaitin’s halting probability
 J. Math. Log.
Cited by 34 (9 self)
Abstract. As a natural example of a 1-random real, Chaitin proposed the halting probability Ω of a universal prefix-free machine. We can relativize this example by considering a universal prefix-free oracle machine U. Let Ω_U^A be the halting probability of U^A; this gives a natural uniform way of producing an A-random real for every A ∈ 2^ω. It is this operator which is our primary object of study. We can draw an analogy between the jump operator from computability theory and this Omega operator. But unlike the jump, which is invariant (up to computable permutation) under the choice of an effective enumeration of the partial computable functions, Ω_U^A can be vastly different for different choices of U. Even for a fixed U, there are oracles A =* B such that Ω_U^A and Ω_U^B are 1-random relative to each other. We prove this and many other interesting properties of Omega operators. We investigate these operators from the perspective of analysis, computability theory, and of course, algorithmic randomness.
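A halting probability of this kind is computably enumerable: it is the limit of a computable, increasing sequence of rationals obtained by summing 2^(−|p|) over the halting programs p found so far. A minimal sketch of that approximation process, using a toy machine with a decidable prefix-free halting set (purely hypothetical; a genuine universal prefix-free machine has an undecidable domain):

```python
from fractions import Fraction

# Toy "machine": halts exactly on the prefix-free set of programs
#   p_n = "1"*n + "0"  for even n.
# This decidable stand-in is NOT a universal prefix-free machine;
# it only illustrates the shape of the lower approximation to Omega.
def halts(program: str) -> bool:
    return (program.endswith("0")
            and program.count("0") == 1
            and (len(program) - 1) % 2 == 0)

def omega_lower_bound(max_len: int) -> Fraction:
    """Stage-`max_len` lower approximation: sum 2^-|p| over halting p."""
    total = Fraction(0)
    for n in range(max_len + 1):
        p = "1" * n + "0"
        if halts(p):
            total += Fraction(1, 2 ** len(p))
    return total

approx = [omega_lower_bound(k) for k in range(8)]
# The sequence is computable, non-decreasing, and converges to the
# toy machine's Omega (which here is 2/3).
assert all(a <= b for a, b in zip(approx, approx[1:]))
```

For a universal machine the same enumeration works program by program, but no algorithm can tell how close the current stage is to the limit; that gap is exactly where the 1-randomness of Ω lives.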
A statistical mechanical interpretation of algorithmic information theory III: Composite systems and fixed points
, 2009
Exact Approximations of Omega Numbers
, 2006
Cited by 10 (1 self)
A Chaitin Omega number is the halting probability of a universal prefix-free Turing machine. Every Omega number is simultaneously computably enumerable (the limit of a computable, increasing, converging sequence of rationals) and algorithmically random (its binary expansion is an algorithmically random sequence), hence uncomputable. The value of an Omega number is highly machine-dependent. In general, no more than finitely many scattered bits of the binary expansion of an Omega number can be exactly computed; but, in some cases, it is possible to prove that no bit can be computed. In this paper we will simplify and improve both the method and its correctness proof proposed in an earlier paper, and we will compute the exact approximations of two Omega numbers of the same prefix-free Turing machine, which is universal when used with data in base 16 or base 2: we compute 43 exact bits for the base 16 machine and 40 exact bits for the base 2 machine.
The K-degrees, low for K degrees, and weakly low for K sets
Cited by 9 (0 self)
Abstract. We call A weakly low for K if there is a c such that K^A(σ) ≥ K(σ) − c for infinitely many σ; in other words, there are infinitely many strings that A does not help compress. We prove that A is weakly low for K iff Chaitin’s Ω is A-random. This has consequences in the K-degrees and the low for K (i.e., low for random) degrees. Furthermore, we prove that the initial segment prefix-free complexity of 2-random reals is infinitely often maximal. This had previously been proved for plain Kolmogorov complexity.
On approximating realworld halting problems
 Reischuk (Eds.), Proc. FCT 2005, in: Lecture Notes Comput. Sci.
, 2005
Cited by 8 (0 self)
Abstract. No algorithm can, of course, solve the Halting Problem, that is, decide within finite time always correctly whether a given program halts on a certain given input. It might however be able to give correct answers for ‘most’ instances and thus solve it at least approximately. Whether and how well such approximations are feasible highly depends on the underlying encodings and in particular the Gödelization (programming system), which in practice usually arises from some programming language. We consider BrainF*ck (BF), a simple yet Turing-complete real-world programming language over an eight-letter alphabet, and prove that the natural enumeration of its syntactically correct source codes induces a Gödelization that is both efficient and dense in the sense of [Jakoby&Schindelhauer’99]. It follows that any algorithm M approximating the Halting Problem for BF errs on at least a constant fraction ε_M > 0 of all instances of size n for infinitely many n. Next we improve this result by showing that, in every dense Gödelization, the constant lower bound ε can be chosen independently of M; while, on the other hand, the Halting Problem does admit approximation up to an arbitrary fraction δ > 0 by an appropriate algorithm M_δ handling instances of size n for infinitely many n. The last two results complement work by [Lynch’74].
Upper bound by Kolmogorov complexity for the probability in computable quantum measurement
 In: Proceedings 5th Conference on Real Numbers and Computers
Cited by 4 (1 self)
Abstract. We apply algorithmic information theory to quantum mechanics in order to shed light on an algorithmic structure which inheres in quantum mechanics. There are two equivalent ways to define the (classical) Kolmogorov complexity K(s) of a given classical finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, we first introduce the so-called universal probability m, and then define K(s) as −log₂ m(s) without using the concept of program size. We generalize the universal probability to a matrix-valued function, and identify this function with a POVM (positive operator-valued measure). On the basis of this identification, we study a computable POVM measurement with countably many measurement outcomes performed upon a finite-dimensional quantum system. We show that, up to a multiplicative constant, 2^−K(s) is an upper bound for the probability of each measurement outcome s in such a POVM measurement. Furthermore, the upper bound 2^−K(s) is shown to be optimal in a certain sense.
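The two equivalent definitions of K(s) mentioned in this abstract can be illustrated on a toy machine. The program table below is a hypothetical finite stand-in for a universal self-delimiting machine (whose domain is actually infinite): m(s) is the algorithmic probability Σ 2^(−|p|) over programs p outputting s, and the coding theorem relates it to the shortest-program length by K(s) = −log₂ m(s) + O(1).

```python
import math
from collections import defaultdict

# Toy machine table: program -> output. The domain {"0","10","110","111"}
# is prefix-free by construction. Purely illustrative; a real universal
# self-delimiting machine has infinitely many programs.
MACHINE = {"0": "a", "10": "b", "110": "a", "111": "c"}

# Universal-probability analogue: m(s) = sum of 2^-|p| over p with U(p) = s.
m = defaultdict(float)
for p, s in MACHINE.items():
    m[s] += 2.0 ** -len(p)

# Program-size analogue: K(s) = length of the shortest program for s.
K = {s: min(len(p) for p, out in MACHINE.items() if out == s) for s in m}

for s in sorted(m):
    # For a universal machine, K(s) and -log2 m(s) agree up to O(1).
    print(s, K[s], -math.log2(m[s]))
```

Here m("a") = 2^−1 + 2^−3 = 0.625 while K("a") = 1, showing how multiple programs for the same string can push −log₂ m(s) slightly below the shortest-program length; the coding theorem says this gap stays bounded.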
The Tsallis entropy and the Shannon entropy of a universal probability
 Proceedings of ISIT 2008. IEEE International Symposium on Information Theory, 2008
Cited by 4 (2 self)
Abstract — We study the properties of Tsallis entropy and Shannon entropy from the point of view of algorithmic randomness. In algorithmic information theory, there are two equivalent ways to define the program-size complexity K(s) of a given finite binary string s. In the standard way, K(s) is defined as the length of the shortest input string for the universal self-delimiting Turing machine to output s. In the other way, the so-called universal probability m is introduced first, and then K(s) is defined as −log₂ m(s) without reference to the concept of program size. In this paper, we investigate the properties of the Shannon entropy, the power sum, and the Tsallis entropy of a universal probability by means of the notion of program-size complexity. We determine the convergence or divergence of each of these three quantities, and evaluate its degree of randomness if it converges.
Some Results on Effective Randomness
, 2003
Cited by 3 (0 self)
We investigate the characterizations of effective randomness in terms of Martin-Löf covers and martingales. First, we address a question of Ambos-Spies and Kucera [1], who asked for a characterization of computable randomness in terms of covers. We argue that computable...
Algorithmic Randomness and Computability
Cited by 2 (2 self)
Abstract. We examine some recent work which has made significant progress in our understanding of algorithmic randomness, relative algorithmic randomness, and their relationship with algorithmic computability and relative algorithmic computability.