Results 1–10 of 46
On the Length of Programs for Computing Finite Binary Sequences
Journal of the ACM, 1966
Abstract

Cited by 226 (7 self)
The use of Turing machines for calculating finite binary sequences is studied from the point of view of information theory and the theory of recursive functions. Various results are obtained concerning the number of instructions in programs. A modified form of Turing machine is studied from the same point of view. An application to the problem of defining a patternless sequence is proposed in terms of the concepts here developed.
Introduction. In this paper the Turing machine is regarded as a general-purpose computer and some practical questions are asked about programming it. Given an arbitrary finite binary sequence, what is the length of the shortest program for calculating it? What are the properties of those binary sequences of a given length which require the longest programs? Do most of the binary sequences of a given length require programs of about the same length? The questions posed above are answered in Part 1. In the course of answering them, the logical ...
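The shortest-program question raised above can be made concrete with a toy model. The sketch below brute-forces the shortest description of a binary string in a hypothetical mini-language in which a "program" is just a (pattern, repeat count) pair; the language and the size measure are illustrative assumptions, not Chaitin's actual Turing machine model.

```python
# Toy sketch: find the shortest "program" that outputs a target binary
# string, where a program is a pair (pattern, count) and its output is
# pattern repeated count times. Purely didactic, not a Turing machine.
from itertools import product

def run(program):
    pattern, count = program
    return pattern * count

def program_length(program):
    # Crude size measure: bits for the pattern plus bits for the count.
    pattern, count = program
    return len(pattern) + count.bit_length()

def shortest_program(target, max_pattern_len=8):
    best = None
    for k in range(1, max_pattern_len + 1):
        if len(target) % k != 0:
            continue
        for bits in product("01", repeat=k):
            pattern = "".join(bits)
            count = len(target) // k
            if pattern * count == target:
                prog = (pattern, count)
                if best is None or program_length(prog) < program_length(best):
                    best = prog
    return best

# A highly regular string has a much shorter program than its own length.
print(shortest_program("0101010101010101"))  # ('01', 8)
```

In this toy setting, only periodic strings compress; a string with no short program under any reasonable encoding is exactly the kind of "patternless" sequence the abstract alludes to.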
The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms
Russian Math. Surveys, 1970
Abstract

Cited by 189 (1 self)
In 1964 Kolmogorov introduced the concept of the complexity of a finite object (for instance, the words in a certain alphabet). He defined complexity as the minimum number of binary signs containing all the information about a given object that are sufficient for its recovery (decoding). This definition depends essentially on the method of decoding. However, by means of the general theory of algorithms, Kolmogorov was able to give an invariant (universal) definition of complexity. Related concepts were investigated by Solomonoff (U.S.A.) and Markov. Using the concept of complexity, Kolmogorov gave definitions of the quantity of information in finite objects and of the concept of a random sequence (which was then defined more precisely by Martin-Löf). Afterwards, this circle of questions developed rapidly. In particular, an interesting development took place of the ideas of Markov on the application of the concept of complexity to the study of quantitative questions in the theory of algorithms. The present article is a survey of the fundamental results connected with the brief remarks above.
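Kolmogorov's observation that complexity "depends essentially on the method of decoding" (before passing to a universal machine) can be illustrated with a toy fixed decoder. The run-length scheme below is an assumption chosen for illustration: under this particular decoder a constant string has a very short description, while an alternating string's description is longer than the string itself.

```python
# Toy illustration of description length relative to one fixed decoder.
# Not the invariant definition: Kolmogorov complexity quantifies over
# programs for a universal machine, which this sketch does not model.

def rle_decode(desc):
    """Decoder: a description is a list of (bit, run_length) pairs."""
    return "".join(bit * n for bit, n in desc)

def rle_encode(s):
    """Shortest description of s under this particular decoder."""
    runs = []
    for bit in s:
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1
        else:
            runs.append([bit, 1])
    return [(b, n) for b, n in runs]

def desc_length(desc):
    # Charge 1 bit for the symbol plus the bits needed for the run length.
    return sum(1 + n.bit_length() for _, n in desc)

print(desc_length(rle_encode("0" * 64)))   # 8: constant string compresses
print(desc_length(rle_encode("01" * 32)))  # 128: alternation does not
```

A different decoder (say, one repeating a pattern) would make the alternating string the compressible one, which is exactly why an invariant definition requires a universal decoding machine.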
The quantitative structure of exponential time
Complexity theory retrospective II, 1997
Abstract

Cited by 90 (13 self)
Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E and EXP are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time reductions, completeness, circuit-size complexity, Kolmogorov complexity, natural proofs, pseudorandom generators, the density of hard languages, randomized complexity, and lowness. Possible implications for the structure of NP are also discussed.
Degrees of random sets
1991
Abstract

Cited by 46 (4 self)
An explicit recursion-theoretic definition of a random sequence or random set of natural numbers was given by Martin-Löf in 1966. Other approaches leading to the notions of n-randomness and weak n-randomness have been presented by Solovay, Chaitin, and Kurtz. We investigate the properties of n-random and weakly n-random sequences with an emphasis on the structure of their Turing degrees. After an introduction and summary, in Chapter II we present several equivalent definitions of n-randomness and weak n-randomness, including a new definition in terms of a forcing relation analogous to the characterization of n-generic sequences in terms of Cohen forcing. We also prove that, as conjectured by Kurtz, weak n-randomness is indeed strictly weaker than n-randomness. Chapter III is concerned with intrinsic properties of n-random sequences. The main results are that an (n+1)-random sequence A satisfies the condition A^(n) ≡_T A ⊕ 0^(n) (strengthening a result due originally to Sacks) and that n-random sequences satisfy a number of strong independence properties, e.g., if A ⊕ B is n-random then A is n-random relative to B. It follows that any countable distributive lattice can be embedded ...
Randomness in Computability Theory
2000
Abstract

Cited by 28 (0 self)
We discuss some aspects of algorithmic randomness and state some open problems in this area. The first part is devoted to the question "What is a computably random sequence?" Here we survey some of the approaches to algorithmic randomness and address some questions on these concepts. In the second part we look at the Turing degrees of Martin-Löf random sets. Finally, in the third part we deal with relativized randomness. Here we look at oracles which do not change randomness. 1980 Mathematics Subject Classification. Primary 03D80; Secondary 03D28.
Introduction. Formalizations of the intuitive notions of computability and randomness are among the major achievements in the foundations of mathematics in the 20th century. It is commonly accepted that various equivalent formal computability notions, like Turing computability or recursiveness, which were introduced in the 1930s and 1940s, adequately capture computability in the intuitive sense. This belief is expressed in the w...
Resource-Bounded Balanced Genericity, Stochasticity and Weak Randomness
In Complexity, Logic, and Recursion Theory, 1996
Abstract

Cited by 21 (8 self)
We introduce balanced t(n)-genericity, which is a refinement of the genericity concept of Ambos-Spies, Fleischhack and Huwig [2] and which in addition controls the frequency with which a condition is met. We show that this concept coincides with the resource-bounded version of Church's stochasticity [6]. By uniformly describing these concepts and weaker notions of stochasticity introduced by Wilber [19] and Ko [11] in terms of prediction functions, we clarify the relations among these resource-bounded stochasticity concepts. Moreover, we give descriptions of these concepts in the framework of Lutz's resource-bounded measure theory [13] based on martingales: we show that t(n)-stochasticity coincides with a weak notion of t(n)-randomness based on so-called simple martingales, but that it is strictly weaker than t(n)-randomness in the sense of Lutz.
Introduction. Over the last years resource-bounded versions of Baire category and Lebesgue measure have been introduced in complexity theor...
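The martingale framework mentioned above can be sketched as a minimal betting game (a simplified illustration, not Lutz's resource-bounded machinery): a martingale wagers a fraction of its current capital on each successive bit, fairness means d(w0) + d(w1) = 2·d(w), and the martingale succeeds on a sequence if its capital grows unboundedly along it.

```python
# Minimal martingale sketch. The strategy name and betting rule below are
# illustrative assumptions; any function prefix -> fraction in [-1, 1]
# defines a fair betting strategy in this formulation.

def run_martingale(bet, sequence, capital=1.0):
    """bet(prefix) returns a fraction in [-1, 1] wagered on the next bit
    being 1 (negative values bet on 0). Fairness holds because
    capital*(1 + b) + capital*(1 - b) == 2*capital for every prefix."""
    history = []
    for bit in sequence:
        b = bet(history)
        capital *= (1 + b) if bit == 1 else (1 - b)
        history.append(bit)
    return capital

# Hypothetical strategy: bet half the capital on a repeat of the last bit.
def repeat_better(prefix):
    if not prefix:
        return 0.0
    return 0.5 if prefix[-1] == 1 else -0.5

# On a constant sequence the capital grows geometrically (1.5 per bit),
# so this martingale succeeds; on a random sequence it would not.
print(run_martingale(repeat_better, [1] * 20))
```

Resource-bounded measure arises from exactly this game: a class has measure zero when a single suitably time-bounded martingale succeeds on every sequence in it.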
Relative to a random oracle, NP is not small
In Proc. 9th Structures, 1994
Abstract

Cited by 18 (1 self)
Resource-bounded measure as originated by Lutz is an extension of classical measure theory which provides a probabilistic means of describing the relative sizes of complexity classes. Lutz has proposed the hypothesis that NP does not have p-measure zero, meaning loosely that NP contains a non-negligible subset of exponential time. This hypothesis implies a strong separation of P from NP and is supported by a growing body of plausible consequences which are not known to follow from the weaker assertion P ≠ NP. It is shown in this paper that relative to a random oracle, NP does not have p-measure zero. The proof exploits the following independence property of algorithmically random sequences: if A is an algorithmically random sequence and a subsequence A0 is chosen by means of a bounded Kolmogorov-Loveland ...
Von Mises' Definition of Random Sequences Reconsidered
1987
Abstract

Cited by 15 (2 self)
We review briefly the attempts to define random sequences (0). These attempts suggest two theorems: one concerning the number of subsequence selection procedures that transform a random sequence into a random sequence (13 and 5); the other concerning the relationship between definitions of randomness based on subsequence selection and those based on statistical tests (4).