Results 1–10 of 25
A Theory of Program Size Formally Identical to Information Theory
, 1975
Abstract

Cited by 333 (16 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^{-k}, then H(A) = -log_2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
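The self-delimiting requirement above can be illustrated concretely: it says the set of valid programs is prefix-free, and the Kraft inequality cited in the key words then forces the 2^{-k} measures to sum to at most 1. A minimal sketch (the codeword set below is a hypothetical example, not Chaitin's construction):

```python
def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

def kraft_sum(codewords):
    """Sum of 2**-len(w); at most 1 for any prefix-free binary set."""
    return sum(2.0 ** -len(w) for w in codewords)

codes = ["0", "10", "110", "111"]   # hypothetical prefix-free set
assert is_prefix_free(codes)
assert kraft_sum(codes) <= 1.0      # 0.5 + 0.25 + 0.125 + 0.125 = 1.0
```

This is why assigning a program of length k the measure 2^{-k} yields a (sub)probability distribution over outputs.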
Algorithmic information theory
 IBM Journal of Research and Development
, 1977
Abstract

Cited by 320 (19 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
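The coin-flipping probability mentioned in this abstract can be made tangible with a toy self-delimiting machine (my own illustrative construction, not one from the paper): the machine reads 1-bits until the first 0 and outputs the count of 1s, so the program for output k has length k+1 and exact probability 2^{-(k+1)} under fair coin flips.

```python
import random

def toy_machine(bits):
    """Toy self-delimiting machine: counts 1s until the first 0,
    then halts and outputs the count; None if it never halts."""
    k = 0
    for b in bits:
        if b == 1:
            k += 1
        else:
            return k
    return None  # ran out of bits without reading a 0

def estimate_output_probability(target, trials=100_000, seed=0):
    """Monte Carlo estimate of P(coin-flipped program outputs target)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        bits = [rng.randint(0, 1) for _ in range(64)]  # 64 flips suffice here
        if toy_machine(bits) == target:
            hits += 1
    return hits / trials

# Exact probability of output k is 2**-(k+1); e.g. output 2 -> 1/8.
p = estimate_output_probability(2)
assert abs(p - 0.125) < 0.01
```

For a universal machine the same sum over halting programs gives the halting probability Ω discussed in the later entries of this list.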
Gödel's Theorem and Information
, 1982
Abstract

Cited by 53 (6 self)
Gödel's theorem may be demonstrated using arguments having an information-theoretic flavor. In such an approach it is possible to argue that if a theorem contains more information than a given set of axioms, then it is impossible for the theorem to be derived from the axioms. In contrast with the traditional proof based on the paradox of the liar, this new viewpoint suggests that the incompleteness phenomenon discovered by Gödel is natural and widespread rather than pathological and unusual.
The Application Of Algorithmic Probability to Problems in Artificial Intelligence
 in Uncertainty in Artificial Intelligence, Kanal, L.N. and Lemmer, J.F. (Eds), Elsevier Science Publishers B.V
, 1986
Abstract

Cited by 30 (5 self)
INTRODUCTION We will cover two topics. First, Algorithmic Probability: the motivation for defining it, how it overcomes difficulties in other formulations of probability, some of its characteristic properties and successful applications. Second, we will apply it to problems in A.I., where it promises to give near optimum search procedures for two very broad classes of problems. A strong motivation for revising classical concepts of probability has come from the analysis of human problem solving. When working on a difficult problem, a person is in a maze in which he must make choices of possible courses of action. If the problem is a familiar one, the choices will all be easy. If it is not familiar, there can be much uncertainty in each choice, but choices must somehow be made. One basis for choice might be the probability of each choice leading to a quick solution, this probability being based on experience in this problem and in problems like it. A good reason for using proba...
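The search idea sketched in this introduction can be illustrated by weighting each candidate choice by 2^{-l}, where l is the length of its description, so that simply-describable options get tried first. This is a hedged sketch of the general principle only; the options and description lengths below are hypothetical, and real algorithmic probability is uncomputable and must be approximated.

```python
def normalized_weights(description_lengths):
    """Assign each option mass 2**-l and normalize to a distribution."""
    raw = {opt: 2.0 ** -l for opt, l in description_lengths.items()}
    total = sum(raw.values())
    return {opt: w / total for opt, w in raw.items()}

# Hypothetical choices in a problem-solving "maze", with made-up
# description lengths in bits (shorter = a priori more probable).
options = {"simple rule": 3, "medium rule": 5, "complex rule": 9}
weights = normalized_weights(options)

# Search effort is allocated in order of algorithmic probability:
order = sorted(weights, key=weights.get, reverse=True)
assert order[0] == "simple rule"
assert abs(sum(weights.values()) - 1.0) < 1e-12
```

Allocating time to candidate programs in proportion to such weights is the core of Levin-style search procedures.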
A Highly Random Number
, 2001
Abstract

Cited by 15 (5 self)
In his celebrated 1936 paper Turing defined a machine to be circular iff it performs an infinite computation outputting only finitely many symbols. We define a number as the probability that an arbitrary machine is circular and we prove that it is a random number that goes beyond Ω, the probability that a universal self-delimiting machine halts. The algorithmic complexity of this number is strictly greater than that of Ω, but similar to the algorithmic complexity of Ω′, the halting probability of an oracle machine. What makes this number interesting is that it is an example of a highly random number definable without considering oracles.
An information-theoretic primer on complexity, self-organisation and emergence
 in Advances in Complex Systems, in press. URL http://www.worldscinet.com/acs/editorial/paper/5183631.pdf
, 2007
Abstract

Cited by 13 (2 self)
Complex Systems Science aims to understand concepts like complexity, self-organization, emergence and adaptation, among others. The inherent fuzziness in complex systems definitions is complicated by the unclear relation among these central processes: does self-organisation emerge or does it set the preconditions for emergence? Does complexity arise by adaptation or is complexity necessary for adaptation to arise? The inevitable consequence of the current impasse is miscommunication among scientists within and across disciplines. We propose a set of concepts, together with their information-theoretic interpretations, which can be used as a dictionary of Complex Systems Science discourse. Our hope is that the suggested information-theoretic baseline may facilitate consistent communications among practitioners, and provide new insights into the field.
On Interpreting Chaitin's Incompleteness Theorem
, 1998
Abstract

Cited by 9 (0 self)
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
Is Complexity a Source of Incompleteness?
, 2004
Information-theoretic Incompleteness
 in Applied Mathematics and Computation
, 1992
Abstract

Cited by 7 (1 self)
We propose an improved definition of the complexity of a formal axiomatic system: this is now taken to be the minimum size of a self-delimiting program for enumerating the set of theorems of the formal system. Using this new definition, we show (a) that no formal system of complexity n can exhibit a specific object with complexity greater than n + c, and (b) that a formal system of complexity n can determine at most n + c scattered bits of the halting probability Ω. We also present a short, self-contained proof of (b).