Results 1-10 of 26
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
IEEE Transactions on Information Theory, 1998
Abstract

Cited by 66 (7 self)
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and that the sum of the code length of the model (the negative log universal probability) and the code length of the data given the model should be minimized. If we restrict the model class to the finite sets, then application of the ideal principle turns into Kolmogorov's mi...
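The two-part code at the heart of the ideal principle can be sketched in a toy setting. The universal prior of the abstract is uncomputable, so this illustration substitutes a uniform prior over a small finite class of Bernoulli models; the grid, the data, and the prior are all assumptions of the sketch, not the paper's construction.

```python
import math

# Toy two-part MDL selection: minimize L(model) + L(data | model) in bits.
# The uniform prior below is a computable stand-in for the (uncomputable)
# algorithmic universal probability, purely for illustration.

def two_part_length(data, p, prior_p):
    """Total code length in bits: model part plus data-given-model part."""
    ones = sum(data)
    zeros = len(data) - ones
    model_bits = -math.log2(prior_p)
    data_bits = -(ones * math.log2(p) + zeros * math.log2(1 - p))
    return model_bits + data_bits

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]      # 8 ones out of 10
grid = [i / 10 for i in range(1, 10)]      # candidate biases 0.1 .. 0.9
prior = 1 / len(grid)                      # uniform stand-in prior
best = min(grid, key=lambda p: two_part_length(data, p, prior))
print(best)  # the grid point matching the empirical frequency, 0.8
```

With a uniform prior the model part is constant, so the selection reduces to maximum likelihood; a non-uniform prior would trade model cost against data cost, which is exactly the Bayes-MDL correspondence the abstract describes.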
Resource-Bounded Balanced Genericity, Stochasticity and Weak Randomness
In Complexity, Logic, and Recursion Theory, 1996
Abstract

Cited by 21 (8 self)
We introduce balanced t(n)-genericity, which is a refinement of the genericity concept of Ambos-Spies, Fleischhack and Huwig [2] and which in addition controls the frequency with which a condition is met. We show that this concept coincides with the resource-bounded version of Church's stochasticity [6]. By uniformly describing these concepts and the weaker notions of stochasticity introduced by Wilber [19] and Ko [11] in terms of prediction functions, we clarify the relations among these resource-bounded stochasticity concepts. Moreover, we give descriptions of these concepts in the framework of Lutz's resource-bounded measure theory [13] based on martingales: we show that t(n)-stochasticity coincides with a weak notion of t(n)-randomness based on so-called simple martingales, but that it is strictly weaker than t(n)-randomness in the sense of Lutz.

1 Introduction

Over the last few years, resource-bounded versions of Baire category and Lebesgue measure have been introduced in complexity theor...
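A martingale in this sense is a betting strategy d satisfying the fairness condition d(w) = (d(w0) + d(w1))/2; it succeeds on a sequence if its capital along the sequence is unbounded. The following sketch shows a generic fixed-fraction strategy (an illustration only; the "simple martingales" of the abstract are a restricted subclass with their own definition).

```python
# A fixed-fraction betting strategy: before each bit, bet a fraction of
# the current capital on the next bit being 1. The payoff is fair: the
# stake doubles on a correct guess and is lost otherwise, so the average
# of the two possible outcomes equals the capital before the bet.

def run_martingale(bits, fraction=0.5, capital=1.0):
    """Return the capital history of the strategy along `bits`."""
    history = [capital]
    for b in bits:
        stake = fraction * capital
        capital = capital - stake + (2 * stake if b == 1 else 0)
        history.append(capital)
    return history

print(run_martingale([1, 1, 1, 1]))  # capital grows on an all-ones sequence
print(run_martingale([0, 0, 0, 0]))  # capital shrinks when the bet keeps losing
```

On a sequence that is stochastic in the relevant sense, no admissible strategy of this kind can make the capital grow unboundedly, which is how the martingale characterizations in the abstract connect betting to randomness.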
Applying MDL to Learning Best Model Granularity, 1994
Abstract

Cited by 20 (8 self)
The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high a precision generally means modeling accidental noise, while too low a precision may conflate models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Base...
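The precision trade-off can be made concrete in a minimal sketch (a hypothetical Bernoulli setting, not the paper's handwriting experiment): quantizing a parameter to k bits costs k bits in the model part, while the data part is the code length of the sample under the quantized parameter. Too many bits of precision buy almost nothing in the data part; too few inflate it.

```python
import math

# Choose parameter precision by minimizing a two-part code length.
# Assumption of this sketch: the model part is simply k, the number of
# fractional bits used to store the quantized Bernoulli bias.

def code_length(data, k):
    """k bits for the model plus the data code length under the quantized bias."""
    ones = sum(data)
    n = len(data)
    p_hat = ones / n
    # quantize the estimate to k fractional bits, kept strictly inside (0, 1)
    q = max(1, min(2**k - 1, round(p_hat * 2**k))) / 2**k
    data_bits = -(ones * math.log2(q) + (n - ones) * math.log2(1 - q))
    return k + data_bits

data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1] * 4  # 48 ones / 64
best_k = min(range(1, 16), key=lambda k: code_length(data, k))
print(best_k)  # 2 bits suffice, since the empirical frequency is exactly 3/4
```

Here the empirical frequency 0.75 is representable in two fractional bits, so every extra bit of precision only adds model cost, and MDL picks k = 2.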
Algorithmic randomness of closed sets
J. Logic and Computation, 2007
Abstract

Cited by 11 (8 self)
We investigate notions of randomness in the space C[2^N] of nonempty closed subsets of {0, 1}^N. A probability measure is given and a version of the Martin-Löf test for randomness is defined. Π^0_2 random closed sets exist but there are no random Π^0_1 closed sets. It is shown that any random closed set is perfect, has measure 0, and has box dimension log_2(4/3). A random closed set has no n-c.e. elements. A closed subset of 2^N may be defined as the set of infinite paths through a tree, and so the problem of compressibility of trees is explored. If T_n = T ∩ {0, 1}^n, then for any random closed set [T] where T has no dead ends, K(T_n) ≥ n − O(1) but, for any k, K(T_n) ≤ 2^(n−k) + O(1), where K(σ) is the prefix-free complexity of σ ∈ {0, 1}^*.
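The objects being compressed here are the level sets T_n = T ∩ {0, 1}^n, each of which can be encoded as the characteristic bit-vector of length 2^n over all strings of length n, which is why K(T_n) can approach 2^n. A small sketch with a hypothetical tree (all strings avoiding consecutive ones, chosen purely for illustration):

```python
from itertools import product

# A closed subset of 2^N is the set of infinite paths [T] through a tree T
# of binary strings; T_n is the n-th level set. We encode T_n by its
# characteristic vector over {0,1}^n in lexicographic order.

def level(tree, n):
    """All length-n strings in the tree, sorted lexicographically."""
    return sorted(w for w in tree if len(w) == n)

def char_vector(level_set, n):
    """2^n-bit characteristic vector of the level set over {0,1}^n."""
    members = set(level_set)
    strings = ("".join(bits) for bits in product("01", repeat=n))
    return "".join("1" if w in members else "0" for w in strings)

# hypothetical tree: all strings of length < 4 with no two consecutive 1s
tree = {"".join(b) for k in range(4) for b in product("01", repeat=k)
        if "11" not in "".join(b)}
t3 = level(tree, 3)
print(t3)                  # ['000', '001', '010', '100', '101']
print(char_vector(t3, 3))  # 11101100
```

A description of this 2^n-bit vector is one concrete code for T_n; the theorem quoted above bounds how far such codes can be shortened for random closed sets.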
On Prediction by Data Compression
In 9th European Conference on Machine Learning, Lecture Notes in Artificial Intelligence, 1997
Abstract

Cited by 10 (0 self)
Traditional wisdom has it that the better a theory compresses the learning data concerning some phenomenon under investigation, the better we learn and generalize, and the better the theory predicts unknown data. This belief is vindicated in practice but apparently has not been rigorously proved in a general setting. Making these ideas rigorous involves the length of the shortest effective description of an individual object: its Kolmogorov complexity. In a previous paper we have shown that optimal compression is almost always a best strategy in hypothesis identification (an ideal form of the minimum description length (MDL) principle). Whereas the single best hypothesis does not necessarily give the best prediction, we demonstrate that nonetheless compression is almost always the best strategy in prediction methods in the style of R. Solomonoff.

1 Introduction

Given a body of data concerning some phenomenon under investigation, we want to select the most plausible hypothesis from amon...
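Kolmogorov complexity is uncomputable, so any executable illustration of "predict by compression" must substitute a practical compressor. The sketch below uses zlib as a crude stand-in (an assumption of this illustration, not the paper's method): predict the next symbol as the one whose continuation compresses no worse, with ties resolved to the first symbol of the alphabet.

```python
import zlib

# Compression-based prediction in the spirit (only the spirit) of the
# Solomonoff-style methods discussed above, with zlib standing in for
# the uncomputable shortest-description length.

def predict_next(history: bytes, alphabet: bytes = b"01") -> bytes:
    """Predict the symbol whose appended continuation compresses best."""
    return min((bytes([s]) for s in alphabet),
               key=lambda s: len(zlib.compress(history + s, 9)))

print(predict_next(b"0" * 200))  # the run-continuing symbol b'0'

# The premise the paper examines: regular data compresses better than
# irregular data of the same length.
print(len(zlib.compress(b"01" * 256, 9)) < len(zlib.compress(bytes(range(256)) * 2, 9)))
```

Real compressors only approximate the ideal quantities, so this heuristic can fail where the ideal predictor would not; the paper's point is that with the true complexities, compression is almost always the best prediction strategy.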
Effectively Closed Sets
 ASL Lecture Notes in Logic
Abstract

Cited by 9 (5 self)
Abstract. We investigate notions of randomness in the space C[2^N] of nonempty closed subsets of {0, 1}^N. A probability measure is given and a version of the Martin-Löf test for randomness is defined. Π^0_2 random closed sets exist but there are no random Π^0_1 closed sets. It is shown that a random closed set is perfect, has measure 0, and has no computable elements. A closed subset of 2^N may be defined as the set of infinite paths through a tree, and so the problem of compressibility of trees is explored. This leads to some results on a Chaitin-style notion of randomness for closed sets.
The Kolmogorov Complexity of Random Reals
Ann. Pure Appl. Logic, 2003
Abstract

Cited by 5 (1 self)
We investigate the initial segment complexity of random reals. Let K(...
Feasible Reductions to Kolmogorov-Loveland Stochastic Sequences
Theor. Comput. Sci., 1999
Abstract

Cited by 5 (0 self)
For every binary sequence A, there is an infinite binary sequence S such that A ≤^P_tt S (A is polynomial-time truth-table reducible to S) and S is stochastic in the sense of Kolmogorov and Loveland.
Resource Bounded Randomness and Computational Complexity
Theoretical Computer Science, 1997
Abstract

Cited by 4 (2 self)
We give a survey of resource-bounded randomness concepts and show their relations to each other. Moreover, we introduce several new resource-bounded randomness concepts corresponding to the classical randomness concepts. We show that the notion of polynomial time bounded Ko randomness is independent of the notions of polynomial time bounded Lutz, Schnorr and Kurtz randomness. Lutz has conjectured that, for a given time or space bound, the corresponding resource-bounded Lutz randomness is a proper refinement of resource-bounded Schnorr randomness. In this paper we answer this conjecture for the case of a polynomial time bound. Moreover, we show that polynomial time bounded Schnorr randomness is a proper refinement of polynomial time bounded Kurtz randomness too. In contrast to this result, however, we also show that the notions of polynomial time bounded Lutz, Schnorr and Kurtz randomness coincide in the case of recursive sets, whence it suffices to study the notion of resource-bounded Lu...
Computability and randomness: Five questions
Abstract

Cited by 1 (1 self)
1 How were you initially drawn to the study of computation and randomness?

My first contact with the area was in 1996, when I still worked at the University of Chicago. Back then, my main interest was in structures from computability theory, such as the Turing degrees of computably enumerable sets. I analyzed them via coding with first-order formulas. During a visit to New Zealand, Cris Calude in Auckland introduced me to algorithmic information theory, a subject on which he had just finished a book [3]. We wrote a paper [4] showing that a set truth-table above the halting problem is not Martin-Löf random (in fact the proof showed that it is not even weakly random [33, 4.3.9]). I also learned about Solovay reducibility, which is a way to gauge the relative randomness of real numbers with a computably enumerable left cut. These topics, and many more, were studied either in Chaitin's work [6] or in Solovay's visionary, but never published, manuscript [35], of which Cris possessed a copy.

In April 2000 I returned to New Zealand. I worked with Rod Downey and Denis Hirschfeldt on the Solovay degrees of real numbers with computably enumerable left cut. We proved that this degree structure is dense, and that the top degree, the degree of Chaitin's Ω, cannot be split into two lesser degrees [9]. During this visit I learned about K-triviality, a notion formalizing the intuitive idea of a set of natural numbers that is far from random.

To understand K-triviality, we first need a bit of background. Sets of natural numbers (simply called sets below) are a main topic of study in computability theory. Sets can be "identified" with infinite sequences of bits. Given a set A, the bit in position n has value 1 if n is in A; otherwise its value is 0. A string is a finite sequence of bits, such as 11001110110.
Let K(x) denote the length of a shortest prefix-free description of a string x (sometimes called the prefix-free Kolmogorov complexity of x, even though Kolmogorov did not introduce it). We say that K(x) is the prefix-free complexity of x. Chaitin [6] defined a set A ⊆ N to be K-trivial if each initial segment of A has prefix-free complexity no greater than the prefix-free complexity of its length. That is, there is b ∈ N such that, for each n, K(A↾n) ≤ K(n) + b.
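K is uncomputable, but the intuition of "far from random" initial segments can be illustrated with a crude, hedged analogue: zlib-compressed length in place of prefix-free complexity (an assumption of this sketch only; zlib length is not K). The initial segments of a trivial set compress to nearly nothing, while those of a random set grow linearly.

```python
import random
import zlib

# Compare compressed initial-segment sizes of a "far from random" set
# (all zeros) and a pseudorandom one. Purely illustrative: zlib length
# is a stand-in for, not an implementation of, prefix-free complexity.

random.seed(1)
trivial = b"0" * 4096
rand = bytes(random.getrandbits(8) for _ in range(4096))

for n in (1024, 2048, 4096):
    print(n,
          len(zlib.compress(trivial[:n], 9)),   # stays tiny as n grows
          len(zlib.compress(rand[:n], 9)))      # grows roughly linearly in n
```

The trivial set's segments cost little more than a description of their length n, which is the shape of the K-triviality bound K(A↾n) ≤ K(n) + b; a Martin-Löf random set instead satisfies K(A↾n) ≥ n − O(1).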