Results 1-10 of 15
A Theory of Program Size Formally Identical to Information Theory
, 1975
Abstract

Cited by 329 (16 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^(-k), then H(A) = -log2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
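The self-delimiting requirement is what makes the formalism behave like information theory: any prefix-free binary code satisfies the Kraft inequality, the sum over codewords of 2^(-length) being at most 1. A minimal sketch (the codewords below are an illustrative prefix-free code, not Chaitin's actual program encoding):

```python
def is_prefix_free(codewords):
    """True iff no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

def kraft_sum(codewords):
    """Sum of 2^-len(w); at most 1 for any prefix-free binary code."""
    return sum(2.0 ** -len(w) for w in codewords)

codes = ["0", "10", "110", "111"]  # a complete prefix-free code
assert is_prefix_free(codes)
assert kraft_sum(codes) <= 1.0
```

Because the sum is exactly 1 here, this code is complete: assigning each program measure 2^(-length) then yields a (sub)probability over outputs, which is how the halting-probability formula above arises.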
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
Abstract

Cited by 319 (19 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
Information-Theoretic Limitations of Formal Systems
 JOURNAL OF THE ACM
, 1974
Abstract

Cited by 45 (7 self)
An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be given to a computer for it to perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in terms of the number of bits of axioms that are assumed, and the size of the proofs needed to deduce the theorems from the axioms.
Information-Theoretic Computational Complexity
 IEEE Transactions on Information Theory
, 1974
Abstract

Cited by 35 (10 self)
This paper attempts to describe, in nontechnical language, some of the concepts and methods of one school of thought regarding computational complexity. It applies the viewpoint of information theory to computers. This will first lead us to a definition of the degree of randomness of individual binary strings, and then to an information-theoretic version of Gödel's theorem on the limitations of the axiomatic method. Finally, we will examine in the light of these ideas the scientific method and von Neumann's views on the basic conceptual problems of biology. This field's fundamental concept is the complexity of a binary string, that is, a string of bits, of zeros and ones. The complexity of a binary string is the minimum quantity of information needed to define the string. For example, the string of length n consisting entirely of ones is of complexity approximately log2 n, because only log2 n bits of information are required to specify n in binary notation. However, this is rather vague. Exactly what is meant by the definition of a string? To make this idea precise a computer is used. One says that a string defines another when the first string gives instructions for constructing the second string. In other words, one string defines another when it is a ...
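The log2 n claim for the all-ones string can be made concrete: the description is just n written in binary, and a fixed expansion procedure rebuilds the string. A minimal sketch (the encoding here is an illustration, not a real universal computer):

```python
import math

def describe_ones(n):
    """Describe the string 1^n by n in binary: about log2(n) bits."""
    return bin(n)[2:]

def expand(description):
    """Reconstruct the string of ones from its binary description."""
    return "1" * int(description, 2)

n = 1024
desc = describe_ones(n)
assert expand(desc) == "1" * n
# The description has floor(log2 n) + 1 bits, versus n bits verbatim.
assert len(desc) == math.floor(math.log2(n)) + 1
```

So a 1024-bit string is pinned down by an 11-bit description; a "random" 1024-bit string, by contrast, admits no description much shorter than itself.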
To A Mathematical Definition Of "Life"
, 1970
Abstract

Cited by 21 (6 self)
"Life" and its "evolution" are fundamental concepts that have not yet been formulated in precise mathematical terms, although some efforts in this direction have been made. We suggest a possible point of departure for a mathematical definition of "life." This definition is based on the computer and is closely related to recent analyses of "inductive inference" and "randomness." A living being is a unity; it is simpler to view a living organism as a whole than as the sum of its parts. If we want to compute a complete description of the region of space-time that is a living being, the program will be smaller in size if the calculation is done all together than if it is done by independently calculating descriptions of parts of the region and then putting them together.
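The whole-versus-parts claim is about subadditivity of program size. A rough analogue is visible with an ordinary compressor: a region whose parts are correlated (here, a duplicated block) compresses better as a whole than as independently compressed parts. A minimal sketch, with zlib standing in for a minimal-program measure:

```python
import hashlib
import zlib

def pseudo_random_bytes(n, seed=b"seed"):
    """Deterministic, incompressible-looking data via chained SHA-256."""
    out, block = b"", seed
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

part = pseudo_random_bytes(1024)
whole = part + part  # a "region" whose two halves are correlated

# Compressing the whole exploits the correlation between the parts;
# compressing the parts independently cannot.
whole_cost = len(zlib.compress(whole, 9))
parts_cost = len(zlib.compress(part, 9)) + len(zlib.compress(part, 9))
assert whole_cost < parts_cost
```

zlib is only a crude upper bound on true program-size complexity, but the inequality it exhibits is the same shape as the paper's criterion.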
Basic Elements and Problems of Probability Theory
, 1999
Abstract

Cited by 6 (0 self)
After a brief review of ontic and epistemic descriptions, and of subjective, logical and statistical interpretations of probability, we summarize the traditional axiomatization of the calculus of probability in terms of Boolean algebras and its set-theoretical realization in terms of Kolmogorov probability spaces. Since the axioms of mathematical probability theory say nothing about the conceptual meaning of “randomness,” one considers probability as a property of the generating conditions of a process, so that one can relate randomness to predictability (or retrodictability). In the measure-theoretical codification of stochastic processes, genuine chance processes can be defined rigorously as so-called regular processes which do not allow a long-term prediction. We stress that stochastic processes are equivalence classes of individual point functions, so that they do not refer to individual processes but only to an ensemble of statistically equivalent individual processes. Less popular but conceptually more important than statistical descriptions are individual descriptions which refer to individual chaotic processes. First, we review the individual description based on the generalized harmonic analysis by Norbert Wiener. It allows the definition of individual purely chaotic processes which can be interpreted as trajectories of regular statistical stochastic processes. Another individual description refers to algorithmic procedures which connect the intrinsic randomness of a finite sequence with the complexity of the shortest program necessary to produce the sequence. Finally, we ask why there can be laws of chance. We argue that random events fulfill the laws of chance if and only if they can be reduced to (possibly hidden) deterministic events. This mathematical result may elucidate the fact that not all nonpredictable events can be grasped by the methods of mathematical probability theory.
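The algorithmic notion mentioned above — a finite sequence is random when no program much shorter than the sequence produces it — is uncomputable exactly, but a real compressor gives an upper bound on description length, and the contrast is already visible: a regular sequence compresses far below its length, patternless data does not. A minimal sketch:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Upper bound on description length, in bytes, via zlib."""
    return len(zlib.compress(data, 9))

regular = b"01" * 512        # 1024 bytes with an obvious generating rule
patternless = os.urandom(1024)  # 1024 bytes with no exploitable pattern

assert compressed_size(regular) < 64       # regularity => short description
assert compressed_size(patternless) > 1000  # near-incompressible
```

This only bounds the shortest-program length from above; showing a sequence is genuinely random (no short program at all) is precisely what the algorithmic theory proves cannot be decided in general.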
Clustering Using the Minimum Message Length Criterion and Simulated Annealing
 in Proceedings of the 3rd International A.I. Workshop
Abstract

Cited by 2 (1 self)
Clustering has many uses, such as the generation of taxonomies and concept formation. It is essentially a search through a model space to maximise a given criterion. The criterion aims to guide the search to find models that are suitable for a purpose. The search's aim is to find, efficiently and consistently, the model that gives the optimal criterion value. Considerable research has gone into the criteria to use, but minimal research has studied how best to search the model space. We describe how we have used simulated annealing to search the model space to optimise the minimum message length criterion.
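The shape of that search can be sketched in miniature. Below, a toy two-part cost (a hypothetical fixed per-cluster parameter cost plus a log-penalty for each point's deviation from its cluster mean) stands in for the real minimum message length criterion, and simulated annealing perturbs one point's cluster assignment at a time; this is an invented illustration of the search strategy, not the paper's MML formula:

```python
import math
import random

def two_part_cost(data, assignment, k):
    """Toy two-part message length: cluster parameters plus residuals."""
    cost = 10.0 * k  # hypothetical cost, in bits, of stating cluster parameters
    for c in range(k):
        members = [x for x, a in zip(data, assignment) if a == c]
        if not members:
            continue
        mu = sum(members) / len(members)
        cost += sum(math.log2(1.0 + (x - mu) ** 2) for x in members)
    return cost

def anneal(data, k, steps=5000, t0=2.0):
    """Minimise the two-part cost by annealed single-point reassignment."""
    random.seed(0)
    assign = [random.randrange(k) for _ in data]
    best = cur = two_part_cost(data, assign, k)
    best_assign = assign[:]
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        i = random.randrange(len(data))
        old = assign[i]
        assign[i] = random.randrange(k)
        new = two_part_cost(data, assign, k)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best:
                best, best_assign = cur, assign[:]
        else:
            assign[i] = old  # reject: restore the previous assignment
    return best_assign, best

data = [0.1, 0.2, 0.15, 5.0, 5.2, 5.1]
labels, cost = anneal(data, k=2)
```

On this tiny instance the annealer recovers the two obvious groups; the early high-temperature phase is what lets it escape the local optima that a pure greedy reassignment can fall into.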
Iterated limiting recursion and the program minimization problem
 Journal of the Association for Computing Machinery
, 1974
Abstract

Cited by 1 (0 self)
The general problem of finding minimal programs realizing given "program descriptions" is considered, where program descriptions may specify arbitrary program properties. The problem of finding minimal programs consistent with finite or infinite input-output lists is a special case (for infinite input-output lists, this is a variant of E. M. Gold's function identification problem; another closely related problem is the grammatical inference problem). Although most program minimization problems are not recursively solvable, they are found to be no more difficult than the problem of deciding whether any given program realizes any given description, or the problem of enumerating programs in order of nondecreasing length (whichever is harder). This result is formulated in terms of k-limiting recursive predicates and functionals, defined by repeated application of Gold's limit operator. A simple consequence is that the program minimization problem is limiting recursively solvable for finite input-output lists and 2-limiting recursively solvable for infinite input-output lists, with weak assumptions about the measure of program size. Gold regarded limiting function identification (more generally, "black box" identification) as a model of inductive thought. Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower-order inductive inference machines.
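The central object here — the shortest program consistent with a finite input-output list — becomes decidable in toy settings where the program space is finite at each length. The sketch below invents such a setting (postfix arithmetic expressions over a five-token alphabet, enumerated in order of nondecreasing length); it illustrates the enumeration strategy named in the abstract, not the paper's formalism:

```python
from itertools import product

# A toy program space: postfix (RPN) expressions in one variable x.
TOKENS = ["x", "1", "2", "+", "*"]

def run(program, x):
    """Evaluate an RPN token sequence; return None if ill-formed."""
    stack = []
    for tok in program:
        if tok == "x":
            stack.append(x)
        elif tok.isdigit():
            stack.append(int(tok))
        else:
            if len(stack) < 2:
                return None  # operator underflow: ill-formed program
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack[0] if len(stack) == 1 else None

def minimal_program(io_pairs, max_len=7):
    """Shortest program consistent with a finite input-output list,
    found by enumerating programs in order of nondecreasing length."""
    for length in range(1, max_len + 1):
        for program in product(TOKENS, repeat=length):
            if all(run(program, x) == y for x, y in io_pairs):
                return program
    return None

# Find a minimal program for the I/O list of f(x) = 2x + 1.
prog = minimal_program([(0, 1), (1, 3), (2, 5)])
```

In this bounded toy the search always terminates; the paper's point is what happens without such bounds, where consistency checking itself may be undecidable and only limiting-recursive procedures remain.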
Yablo’s Paradox and Kindred Infinite Liars
Abstract

Cited by 1 (0 self)
This is a defense and extension of Stephen Yablo's claim that self-reference is completely inessential to the liar paradox. An infinite sequence of sentences of the form "None of these subsequent sentences are true" generates the same instability in assigning truth values. I argue Yablo's technique of substituting infinity for self-reference applies to all so-called "self-referential" paradoxes. A representative sample is provided which includes counterparts of the preface paradox, Pseudo-Scotus's validity paradox, the Knower, and other enigmas of the genre. I rebut objections that Yablo's paradox is not a genuine liar by constructing a sequence of liars that blend into Yablo's paradox. I rebut objections that Yablo's liar has hidden self-reference with a distinction between attributive and referential self-reference and appeals to Gregory Chaitin's algorithmic information theory. The paper concludes with comments on the mystique of self-reference.

An infinite queue of students receives a lecture on human fallibility. Each student thinks (Q) "Some of the students behind me are now thinking an untruth." As it happens, each student is thinking just one thought: (Q). Of course, their different positions in the queue ensure that each token of (Q) expresses something different. None the less each of their thoughts is paradoxical. Consider student n and his thought (Qn). If (Qn) is untrue, then all of the students to his rear are thinking truths. But their thoughts can only be true if some of their successors are thinking untruths. Contradiction. If (Qn) is true, then some subsequent student is mistakenly thinking "Some of the students behind me are now thinking an untruth." But it has already been shown that a (Q) thought cannot be untrue.

1. Freedom from self-reference

The Queue paradox is not due to self-reference. Although (Q) contains the indexicals "me" and "now", it could be reformulated with eternal sentences. Just have student n think "There is a student at a position greater than n who is thinking an untruth at 3.35 p.m. on February 25, 1997." This gives each student his own sentence type to think about. No student is thinking (even inadvertently) about his own thoughts. Indeed, one can ...