Results 1–10 of 45
A Theory of Program Size Formally Identical to Information Theory
, 1975
"... A new definition of programsize complexity is made. H(A;B=C;D) is defined to be the size in bits of the shortest selfdelimiting program for calculating strings A and B if one is given a minimalsize selfdelimiting program for calculating strings C and D. This differs from previous definitions: (1) ..."
Abstract

Cited by 330 (16 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^{-k}, then H(A) = -log_2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
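The self-delimiting requirement in this abstract is exactly the prefix-free condition behind the Kraft inequality listed among its key phrases: no program is a prefix of another, so the weights 2^-|p| sum to at most 1. A minimal Python sketch (a toy code, not Chaitin's universal computer) checks both properties:

```python
# Toy illustration of self-delimiting (prefix-free) program sets and the
# Kraft inequality: sum over programs p of 2^-|p| <= 1, with equality
# when the code is complete.

def is_prefix_free(codes):
    """True iff no code word is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def kraft_sum(codes):
    """Sum of 2^-|p| over all code words."""
    return sum(2.0 ** -len(c) for c in codes)

programs = ["0", "10", "110", "111"]   # a complete prefix-free set
assert is_prefix_free(programs)
assert kraft_sum(programs) == 1.0      # equality: the code is complete

bad = ["0", "01"]                      # "0" is a prefix of "01"
assert not is_prefix_free(bad)
```

Completeness (Kraft sum exactly 1) is what lets program length be read as a probability, which is the bridge to the entropy identities the abstract states.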
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
"... This paper reviews algorithmic information theory, which is an attempt to apply informationtheoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that ..."
Abstract

Cited by 321 (19 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
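The "probability that a program whose bits are chosen by coin flipping produces a given output" can be made concrete on a deliberately trivial machine. The interpreter below is a hypothetical toy for counting purposes only, not a universal computer:

```python
# Hedged toy model: treat a bit string as a "program" for a trivial
# interpreter and compute, by exhaustive enumeration, the probability
# that fair coin flips produce a given output.
from fractions import Fraction
from itertools import product

def run(program):
    """Toy interpreter: first bit selects unary vs binary reading of the rest."""
    mode, rest = program[0], program[1:]
    if mode == 0:                                   # unary: count the 1-bits
        return sum(rest)
    return int("".join(map(str, rest)) or "0", 2)   # binary value of the rest

def output_probability(target, n_bits):
    """P(output == target) when the n_bits program bits are fair coin flips."""
    hits = sum(1 for p in product([0, 1], repeat=n_bits) if run(list(p)) == target)
    return Fraction(hits, 2 ** n_bits)

# With 4-bit programs: output 1 arises from 3 unary programs and 1 binary one.
assert output_probability(1, 4) == Fraction(1, 4)
```

On a universal machine this quantity is uncomputable; the toy only illustrates what is being measured.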
The Application Of Algorithmic Probability to Problems in Artificial Intelligence
 in Uncertainty in Artificial Intelligence, Kanal, L.N. and Lemmer, J.F. (Eds), Elsevier Science Publishers B.V
, 1986
"... INTRODUCTION We will cover two topics First, Algorithmic Probability  the motivation for defining it, how it overcomes di#culties in other formulations of probability, some of its characteristic properties and successful applications. Second, we will apply it to problems in A.I.  where it p ..."
Abstract

Cited by 30 (5 self)
INTRODUCTION We will cover two topics. First, Algorithmic Probability: the motivation for defining it, how it overcomes difficulties in other formulations of probability, some of its characteristic properties, and successful applications. Second, we will apply it to problems in A.I., where it promises to give near-optimum search procedures for two very broad classes of problems. A strong motivation for revising classical concepts of probability has come from the analysis of human problem solving. When working on a difficult problem, a person is in a maze in which he must make choices of possible courses of action. If the problem is a familiar one, the choices will all be easy. If it is not familiar, there can be much uncertainty in each choice, but choices must somehow be made. One basis for choice might be the probability of each choice leading to a quick solution, this probability being based on experience in this problem and in problems like it. A good reason for using proba...
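The "probability of each choice leading to a quick solution" can be approximated, in the algorithmic-probability spirit, by preferring choices with shorter descriptions, i.e. larger 2^-length prior weight. A tiny hypothetical sketch, with made-up bit-string candidates:

```python
# Illustrative only: rank candidate "explanations" (bit-string descriptions)
# by the 2^-length prior that algorithmic probability suggests, so a search
# procedure tries a-priori likelier (shorter) choices first.
candidates = ["110", "10", "111010", "0"]
by_prior = sorted(candidates, key=lambda p: -2.0 ** -len(p))
assert by_prior == ["0", "10", "110", "111010"]
```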
A Highly Random Number
, 2001
"... In his celebrated 1936 paper Turing defined a machine to be circular iff it performs an infinite computation outputting only finitely many symbols. We define ( as the probability that an arbitrary machine be circular and we prove that is a random number that goes beyond $2, the probability that ..."
Abstract

Cited by 15 (5 self)
In his celebrated 1936 paper Turing defined a machine to be circular iff it performs an infinite computation outputting only finitely many symbols. We define the probability that an arbitrary machine is circular, and we prove that it is a random number that goes beyond Ω, the probability that a universal self-delimiting machine halts. The algorithmic complexity of this number is strictly greater than that of Ω, but similar to the algorithmic complexity of Ω′, the halting probability of an oracle machine. What makes it interesting is that it is an example of a highly random number definable without considering oracles.
An information-theoretic primer on complexity, self-organisation and emergence
 ADVANCES IN COMPLEX SYSTEMS IN PRESS. URL HTTP://WWW.WORLDSCINET.COM/ACS/EDITORIAL/PAPER/5183631.PDF
, 2007
"... Complex Systems Science aims to understand concepts like complexity, selforganization, emergence and adaptation, among others. The inherent fuzziness in complex systems definitions is complicated by the unclear relation among these central processes: does selforganisation emerge or does it set the ..."
Abstract

Cited by 13 (2 self)
Complex Systems Science aims to understand concepts like complexity, self-organization, emergence and adaptation, among others. The inherent fuzziness in complex systems definitions is complicated by the unclear relation among these central processes: does self-organisation emerge or does it set the preconditions for emergence? Does complexity arise by adaptation or is complexity necessary for adaptation to arise? The inevitable consequence of the current impasse is miscommunication among scientists within and across disciplines. We propose a set of concepts, together with their information-theoretic interpretations, which can be used as a dictionary of Complex Systems Science discourse. Our hope is that the suggested information-theoretic baseline may facilitate consistent communications among practitioners, and provide new insights into the field.
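The basic quantity most entries in such an information-theoretic dictionary reduce to is Shannon entropy. A minimal version over an empirical distribution (base 2, in bits), included here only as a concrete anchor:

```python
# Shannon entropy of the empirical symbol distribution of a sequence,
# H = -sum_i p_i * log2(p_i), measured in bits.
from collections import Counter
from math import log2

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

assert entropy("aabb") == 1.0   # two equiprobable symbols: 1 bit
assert entropy("aaaa") == 0.0   # a constant sequence carries no surprise
```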
Is Complexity a Source of Incompleteness?
, 2004
"... ..."
On Interpreting Chaitin’s Incompleteness Theorem
, 1998
"... The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin’s famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number ..."
Abstract

Cited by 9 (0 self)
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin’s famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
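Since Kolmogorov complexity is uncomputable, any concrete illustration must settle for upper bounds. The sketch below uses zlib compressed length as a stand-in measure; this is an assumption for illustration only, not the Turing-machine coding whose choice the paper argues determines the constants:

```python
# Illustration only: K(x) is uncomputable, but any compressor gives an
# upper bound on a machine-dependent analogue. zlib is a stand-in here.
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

patterned = b"01" * 500              # highly regular 1000-byte string
assert compressed_size(patterned) < len(patterned)  # regularity -> short description

# The converse cannot be certified: a string zlib fails to shrink may still
# have a short program. Proving "complexity > c" is exactly what the
# theorem rules out beyond its constant.
```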
A Note On Monte Carlo Primality Tests And Algorithmic Information Theory
, 1978
"... Solovay and Strassen, and Miller and Rabin have discovered fast algorithms for testing primality which use coinflipping and whose conclusions are only probably correct. On the other hand, algorithmic information theory provides a precise mathematical definition of the notion of random or patternles ..."
Abstract

Cited by 7 (1 self)
Solovay and Strassen, and Miller and Rabin have discovered fast algorithms for testing primality which use coin-flipping and whose conclusions are only probably correct. On the other hand, algorithmic information theory provides a precise mathematical definition of the notion of a random or patternless sequence. In this paper we shall describe conditions under which, if the sequence of coin tosses in the Solovay-Strassen and Miller-Rabin algorithms is replaced by a sequence of heads and tails that is of maximal algorithmic information content, i.e., has maximal algorithmic randomness, then one obtains an error-free test for primality. These results are only of theoretical interest, since it is a manifestation of the Gödel incompleteness phenomenon that it is impossible to "certify" a sequence to be random by means of a proof, even though most sequences have this property. Thus by using certified random sequences one can in principle, but not in practice, convert proba...
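For reference, a standard textbook Miller-Rabin test (not the authors' construction): the random base choices below are exactly the "coin tosses" that the paper's certified random sequences would replace. Each round catches a composite with probability at least 3/4.

```python
# Generic Miller-Rabin probabilistic primality test; the coin flips choose
# the random bases a. A seeded RNG is used so runs are reproducible.
import random

def miller_rabin(n, rounds=20, rng=random.Random(0)):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)   # the "coin tosses"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: n is certainly composite
    return True                       # probably prime

assert miller_rabin(2 ** 61 - 1)      # a Mersenne prime
assert not miller_rabin(561)          # Carmichael number, still caught
```

A "composite" answer is always correct; only "probably prime" carries the error probability (at most 4^-rounds) that the paper's derandomization addresses.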