Results 1–10 of 47
A Theory of Program Size Formally Identical to Information Theory
, 1975
Abstract

Cited by 397 (17 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^{-k}, then H(A) = -log_2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
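The self-delimiting requirement above is exactly the prefix-free property of instantaneous codes, so the Kraft inequality applies to program lengths. A minimal sketch (my own toy illustration, not from the paper) of both conditions:

```python
# Toy illustration: a prefix-free set of "programs", encoded as bit strings,
# must satisfy the Kraft inequality: sum over p of 2^(-|p|) <= 1.

def is_prefix_free(codes):
    """True if no code word is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def kraft_sum(codes):
    """Sum of 2^-|p| over all code words p."""
    return sum(2.0 ** -len(p) for p in codes)

codes = ["0", "10", "110", "111"]  # a complete prefix-free code
assert is_prefix_free(codes)
assert kraft_sum(codes) <= 1.0     # Kraft inequality holds
```

Note that ["0", "01"] would fail the check, since "0" is a prefix of "01"; it is this restriction that lets program lengths behave like code-word lengths in information theory.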
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
Abstract

Cited by 394 (20 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
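The "coin-flipping" probability mentioned above can be made concrete on a toy machine. In the following sketch (my own illustration; the lookup-table MACHINE is a hypothetical stand-in for a universal computer, not anything from the paper), a self-delimiting program of length n is produced by fair coin flips with probability 2^(-n), so the probability of a given output is a sum of such terms:

```python
from fractions import Fraction

# Hypothetical toy machine: a lookup table from self-delimiting (prefix-free)
# program strings to outputs. The algorithmic probability of an output A is
# the chance that a program whose bits are chosen by fair coin flips produces
# A: P(A) = sum of 2^-|p| over all programs p that output A.
MACHINE = {"0": "A", "10": "B", "110": "A", "111": "C"}

def algorithmic_probability(output):
    return sum(Fraction(1, 2 ** len(p))
               for p, out in MACHINE.items() if out == output)

# "A" is produced by programs of length 1 and 3: P("A") = 1/2 + 1/8 = 5/8
print(algorithmic_probability("A"))  # → 5/8
```

On a real universal machine this sum is not computable, which is precisely what makes the theory interesting; the toy table only shows how the measure is assembled.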
InformationTheoretic Characterizations of Recursive Infinite Strings
, 1976
Abstract

Cited by 67 (5 self)
Loveland and Meyer have studied necessary and sufficient conditions for an infinite binary string x to be recursive in terms of the program-size complexity relative to n of its n-bit prefixes x_n. Meyer has shown that x is recursive iff there is a c such that K(x_n/n) ≤ c for all n, and Loveland has shown that this is false if one merely stipulates that K(x_n/n) ≤ c for infinitely many n. We strengthen Meyer's theorem. From the fact that there are few minimal-size programs for calculating a given result, we obtain a necessary and sufficient condition for x to be recursive in terms of the absolute program-size complexity of its prefixes: x is recursive iff there is a c such that K(x_n) ≤ K(n) + c for all n. Again Loveland's method shows that this is no longer a sufficient condition for x to be recursive if one merely stipulates that K(x_n) ≤ K(n) + c for infinitely many n.
Trivial Reals
Abstract

Cited by 65 (33 self)
Solovay showed that there are noncomputable reals α such that H(α↾n) ≤ H(1^n) + O(1), where H is prefix-free Kolmogorov complexity. Such H-trivial reals are interesting due to the connection between algorithmic complexity and effective randomness. We give a new, easier construction of an H-trivial real. We also analyze various computability-theoretic properties of the H-trivial reals, showing for example that no H-trivial real can compute the halting problem. Therefore, our construction of an H-trivial computably enumerable set is an easy, injury-free construction of an incomplete computably enumerable set. Finally, we relate the H-trivials to other classes of "highly nonrandom" reals that have been previously studied.
Informationtheoretic Limitations of Formal Systems
 JOURNAL OF THE ACM
, 1974
Abstract

Cited by 49 (7 self)
An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be given to a computer for it to perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in terms of the number of bits of axioms that are assumed, and the size of the proofs needed to deduce the theorems from the axioms.
Instance Complexity
, 1994
Abstract

Cited by 32 (1 self)
We introduce a measure for the computational complexity of individual instances of a decision problem and study some of its properties. The instance complexity of a string x with respect to a set A and time bound t, ic^t(x : A), is defined as the size of the smallest special-case program for A that runs in time t, decides x correctly, and makes no mistakes on other strings ("don't know" answers are permitted). We prove that a set A is in P if and only if there exist a polynomial t and a constant c such that ic^t(x : A) ≤ c for all x.
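The definition above can be sketched in miniature. In the following toy model (my own illustration, not the paper's construction), "special-case programs" are (size, function) pairs whose function may answer True, False, or None ("don't know"); a program is admissible for A if every definite answer it gives agrees with membership in A, and the instance complexity of x is the size of the smallest admissible program that answers x definitely:

```python
# Toy model of instance complexity over a finite universe. A "program" is a
# (size, function) pair; its function returns True/False/None, where None
# means "don't know". Time bounds are ignored in this finite sketch.

def consistent(prog, A, universe):
    """True if every definite answer of prog agrees with membership in A."""
    return all(prog(y) is None or prog(y) == (y in A) for y in universe)

def instance_complexity(x, A, programs, universe):
    """Size of the smallest consistent program that decides x definitely."""
    sizes = [size for size, prog in programs
             if consistent(prog, A, universe) and prog(x) is not None]
    return min(sizes, default=None)

A = {2, 4, 6}                                  # a toy decision problem on {0..7}
universe = range(8)
programs = [
    (1, lambda y: True if y == 4 else None),   # tiny: only answers for 4
    (3, lambda y: y % 2 == 0 and y != 0),      # larger: decides everything
]
print(instance_complexity(4, A, programs, universe))  # → 1
print(instance_complexity(5, A, programs, universe))  # → 3
```

The point of the measure is that an "easy" instance like 4 is settled by a tiny special-case program, while other instances may force you to pay for a full decision procedure.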
Kolmogorov complexity and instance complexity of recursively enumerable sets
 SIAM Journal on Computing
, 1996
Some ComputabilityTheoretical Aspects of Reals and Randomness
, 2001
Abstract

Cited by 21 (7 self)
We study computably enumerable reals (i.e., reals whose left cut is computably enumerable) in terms of their spectra of representations and presentations. We then study such objects in terms of algorithmic randomness, culminating in some recent work of the author with Hirschfeldt, Laforte, and Nies concerning methods of calibrating randomness.
On the Difficulty of Computations
, 1970
Abstract

Cited by 20 (5 self)
Two practical considerations concerning the use of computing machinery are the amount of information that must be given to the machine for it to perform a given task and the time it takes the machine to perform it. The size of programs and their running time are studied for mathematical models of computing machines. The study of the amount of information (i.e., number of bits) in a computer program needed for it to put out a given finite binary sequence leads to a definition of a random sequence; the random sequences of a given length are those that require the longest programs. The study of the running time of programs for computing infinite sets of natural numbers leads to an arithmetic of computers, which is a distributive lattice.
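The definition of randomness sketched above rests on a simple counting argument: there are fewer short programs than strings of a given length, so some strings cannot be compressed. A minimal sketch of the pigeonhole step (my own illustration, not from the paper):

```python
# Counting argument behind incompressibility: there are only 2^n - 1 binary
# programs of length strictly less than n, but 2^n binary strings of length
# exactly n, so at least one n-bit string has no program shorter than n bits.

def programs_shorter_than(n):
    """Number of binary strings of length 0, 1, ..., n-1: 1 + 2 + ... = 2^n - 1."""
    return 2 ** n - 1

n = 20
assert programs_shorter_than(n) < 2 ** n   # pigeonhole: some n-bit string is incompressible
```

Strings that require programs about as long as themselves are exactly the random sequences of the abstract; the count shows such strings exist at every length, without exhibiting any particular one.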