Results 1–10 of 336
Algorithmic information theory
IBM Journal of Research and Development, 1977
Cited by 325 (19 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
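The paper's notion of the information needed to specify an object can be illustrated with an ordinary compressor: the compressed length of a string is a machine-computable upper bound on its (uncomputable) program-size complexity. A minimal sketch, using zlib purely as a stand-in for a universal machine:

```python
import os
import zlib

def compressed_length(data: bytes) -> int:
    """Length in bytes of the zlib encoding of `data`: a crude,
    computable upper bound on its algorithmic information content."""
    return len(zlib.compress(data, level=9))

# A highly regular string needs far fewer bits than its raw length...
regular = b"01" * 500
# ...while "coin-flip" data is essentially incompressible.
random_ish = os.urandom(1000)

print(compressed_length(regular), len(regular))
print(compressed_length(random_ish), len(random_ish))
```

Note that compression only bounds complexity from above; no computable procedure supplies matching lower bounds.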
Universal prediction of individual sequences
IEEE Transactions on Information Theory, 1992
Cited by 158 (13 self)
Abstract—The problem of predicting the next outcome of an individual binary sequence using finite memory is considered. The finite-state predictability of an infinite sequence is defined as the minimum fraction of prediction errors that can be made by any finite-state (FS) predictor. It is proved that this FS predictability can be attained by universal sequential prediction schemes. Specifically, an efficient prediction procedure based on the incremental parsing procedure of the Lempel-Ziv data compression algorithm is shown to achieve asymptotically the FS predictability. Finally, some relations between compressibility and predictability are pointed out, and the predictability is proposed as an additional measure of the complexity of a sequence. Index Terms—Predictability, compressibility, complexity, finite-state machines, Lempel-Ziv algorithm.
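The incremental parsing underlying the predictor is easy to sketch. The first function below is the standard LZ78 parse; the second is a simplified context-counting predictor in the same spirit (not the exact scheme analyzed in the paper), with the context length K = 4 chosen arbitrarily:

```python
from collections import defaultdict

def lz78_phrases(bits: str):
    """Incremental (LZ78) parsing: split `bits` into the successive
    shortest phrases not seen before."""
    seen, phrases, cur = set(), [], ""
    for b in bits:
        cur += b
        if cur not in seen:
            seen.add(cur)
            phrases.append(cur)
            cur = ""
    if cur:
        phrases.append(cur)
    return phrases

def majority_predictor(bits: str) -> float:
    """Toy sequential predictor: predict the bit that has most often
    followed the current length-4 context, and return the error rate."""
    counts = defaultdict(lambda: [0, 0])
    errors, ctx, K = 0, "", 4
    for b in bits:
        c0, c1 = counts[ctx]
        pred = "1" if c1 > c0 else "0"
        if pred != b:
            errors += 1
        counts[ctx][int(b)] += 1
        ctx = (ctx + b)[-K:]
    return errors / len(bits)

print(lz78_phrases("1011010100010"))   # ['1', '0', '11', '01', '010', '00', '10']
print(majority_predictor("01" * 200))  # near 0 on a periodic sequence
```

On a periodic sequence the predictor converges after a handful of errors, matching the intuition that low compressibility and low predictability go together.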
The Dimensions of Individual Strings and Sequences
Information and Computation, 2003
Cited by 93 (10 self)
A constructive version of Hausdorff dimension is developed using constructive supergales, which are betting strategies that generalize the constructive supermartingales used in the theory of individual random sequences. This constructive dimension is used to assign every individual (infinite, binary) sequence S a dimension, which is a real number dim(S) in the interval [0, 1]. Sequences that ...
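The betting behaviour of such strategies can be simulated directly. The sketch below uses the s-gale induced by a fixed-bias strategy; the update rule is a standard illustration, not the paper's construction:

```python
def sgale_capital(bits: str, p_one: float, s: float) -> float:
    """Capital after betting along `bits` with the s-gale
        d(wb) = 2**s * d(w) * (p_one if b == '1' else 1 - p_one),
    which satisfies the s-gale condition d(w0) + d(w1) == 2**s * d(w)."""
    d = 1.0
    for b in bits:
        d *= 2**s * (p_one if b == "1" else 1.0 - p_one)
    return d

# Betting heavily on 0 along the all-zeros sequence succeeds even for
# s = 0.5 < 1 -- the kind of evidence that forces dim(S) below 1.
print(sgale_capital("0" * 100, p_one=0.1, s=0.5))  # grows very large
print(sgale_capital("0" * 100, p_one=0.5, s=0.5))  # decays toward 0
```

Success for smaller and smaller s is exactly what drives the assigned dimension down.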
Effective strong dimension in algorithmic information and computational complexity
SIAM Journal on Computing, 2004
Cited by 78 (29 self)
The two most important notions of fractal dimension are Hausdorff dimension, developed by Hausdorff (1919), and packing dimension, developed independently by Tricot (1982) and Sullivan (1984). Both dimensions have the mathematical advantage of being defined from measures, and both have yielded extensive applications in fractal geometry and dynamical systems. Lutz (2000) has recently proven a simple characterization of Hausdorff dimension in terms of gales, which are betting strategies that generalize martingales. Imposing various computability and complexity constraints on these gales produces a spectrum of effective versions of Hausdorff dimension, including constructive, computable, polynomial-space, polynomial-time, and finite-state dimensions. Work by several investigators has already used these effective dimensions to shed significant new light on a variety of topics in theoretical computer science. In this paper we show that packing dimension can also be characterized in terms of gales. Moreover, even though the usual definition of packing dimension is considerably more complex than that of Hausdorff dimension, our gale characterization of packing dimension is an exact dual ...
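The finite-state dimensions mentioned here are closely tied to finite-state compressibility; a crude empirical proxy for that is the entropy rate of non-overlapping k-blocks. A sketch (an illustrative estimate only, not the formal gale definition):

```python
from collections import Counter
from math import log2

def block_entropy_rate(bits: str, k: int) -> float:
    """Empirical entropy, in bits per symbol, of the non-overlapping
    k-blocks of `bits`: an upper estimate in the spirit of finite-state
    dimension (an illustrative proxy, not the formal definition)."""
    blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
    n = len(blocks)
    return sum(-(c / n) * log2(c / n) for c in Counter(blocks).values()) / k

print(block_entropy_rate("0110" * 1000, 4))     # 0.0: one repeating block
print(block_entropy_rate("00011011" * 500, 2))  # 1.0: all four 2-blocks equally likely
```

A value near 1 indicates the sequence looks finite-state random at scale k; values near 0 indicate strong finite-state regularity.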
Lowness Properties and Randomness
Advances in Mathematics
Cited by 78 (21 self)
The set A is low for Martin-Löf random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(A ↾ n) ≤ K(n) + O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kučera [2], showing that each low for Martin-Löf random set is ∆⁰₂. Our class induces a natural intermediate Σ⁰₃ ideal in the r.e. Turing degrees (which generates the whole class under downward closure). Answering ...
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
IEEE Transactions on Information Theory, 1998
Cited by 67 (7 self)
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets, then application of the ideal principle turns into Kolmogorov's mi...
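The two-part form of the ideal principle, minimize L(model) + L(data | model), can be made concrete by replacing the uncomputable universal probabilities with a small discretized model class. A hedged sketch using a finite Bernoulli grid (an illustrative approximation, not the Kolmogorov-complexity formulation of the paper):

```python
from math import log2

def mdl_select(data: str, k_bins: int = 20):
    """Pick the Bernoulli parameter p on a k_bins grid minimizing the
    two-part code length L(model) + L(data | model) in bits: the model
    costs log2(k_bins) bits (its grid index), and the data cost is the
    ideal Shannon code length under p."""
    n, ones = len(data), data.count("1")
    best = None
    for i in range(1, k_bins):
        p = i / k_bins
        model_bits = log2(k_bins)
        data_bits = -(ones * log2(p) + (n - ones) * log2(1 - p))
        total = model_bits + data_bits
        if best is None or total < best[0]:
            best = (total, p)
    return best

length, p_hat = mdl_select("1" * 70 + "0" * 30)
print(round(length, 2), p_hat)  # the selected parameter is 0.7
```

With richer model classes the model cost grows with model complexity, and the minimization trades fit against that complexity, which is the essence of the principle.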
Complexity measures of supervised classification problems
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
Cited by 66 (6 self)
Abstract—We studied a number of measures that characterize the difficulty of a classification problem, focusing on the geometrical complexity of the class boundary. We compared a set of real-world problems to random labelings of points and found that real problems contain structures in this measurement space that are significantly different from the random sets. Distributions of problems in this space show that there exist at least two independent factors affecting a problem's difficulty. We suggest using this space to describe a classifier's domain of competence. This can guide static and dynamic selection of classifiers for specific problems as well as subproblems formed by confinement, projection, and transformations of the feature vectors. Index Terms—Classification, clustering, complexity, linear separability, mixture identifiability.
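One of the simplest boundary-geometry measures of this kind is nearest-neighbor label disagreement. A small sketch in that spirit (the function name and toy data are illustrative, not taken from the paper):

```python
def nn_disagreement(points, labels) -> float:
    """Fraction of points whose nearest other point carries a different
    label: a simple measure of class-boundary complexity in the spirit
    of the paper's nearest-neighbor-based measures."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    bad = 0
    for i, p in enumerate(points):
        j = min((k for k in range(len(points)) if k != i),
                key=lambda k: dist2(p, points[k]))
        bad += labels[i] != labels[j]
    return bad / len(points)

# Two well-separated clusters: the boundary is simple...
easy = [(0, 0), (0, 1), (5, 5), (5, 6)]
print(nn_disagreement(easy, [0, 0, 1, 1]))        # 0.0
# ...alternating labels along a line: maximal interleaving.
hard = [(i, 0) for i in range(6)]
print(nn_disagreement(hard, [0, 1, 0, 1, 0, 1]))  # 1.0
```

A random labeling of scattered points pushes this measure toward its maximum, which is exactly the contrast with real problems that the paper exploits.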
Optimal Ordered Problem Solver
2002
Cited by 62 (20 self)
We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal non-incremental universal search to build an incremental universal learner that is able to improve itself through experience.
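The non-incremental search OOPS builds on can be miniaturized. The sketch below enumerates programs over a hypothetical two-instruction language in order of increasing length, the uniform-prior special case of bias-optimal (Levin-style) search; OOPS additionally biases the search toward, and reuses, solutions of earlier tasks:

```python
from itertools import product

# A hypothetical two-instruction language acting on an integer register.
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def shortest_program(target: int, max_len: int = 20):
    """Return the first program (tuple of instruction names) of minimal
    length mapping 0 to `target`, enumerating length by length."""
    for l in range(1, max_len + 1):
        for prog in product(OPS, repeat=l):
            x = 0
            for op in prog:
                x = OPS[op](x)
            if x == target:
                return prog
    return None

print(shortest_program(10))  # a length-5 program; none shorter reaches 10
```

Under a uniform instruction distribution, enumerating by length is equivalent to allocating effort in proportion to each program's prior probability 2^(-length).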
Information-Theoretic Characterizations of Recursive Infinite Strings
1976
Cited by 61 (5 self)
Loveland and Meyer have studied necessary and sufficient conditions for an infinite binary string x to be recursive in terms of the program-size complexity relative to n of its n-bit prefixes x_n. Meyer has shown that x is recursive iff there is a c such that K(x_n/n) ≤ c for all n, and Loveland has shown that this is false if one merely stipulates that K(x_n/n) ≤ c for infinitely many n. We strengthen Meyer's theorem. From the fact that there are few minimal-size programs for calculating a given result, we obtain a necessary and sufficient condition for x to be recursive in terms of the absolute program-size complexity of its prefixes: x is recursive iff there is a c such that K(x_n) ≤ K(n) + c for all n. Again Loveland's method shows that this is no longer a sufficient condition for x to be recursive if one merely stipulates that K(x_n) ≤ K(n) + c for infinitely many n.