Results 1-10 of 21
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
"... This paper reviews algorithmic information theory, which is an attempt to apply informationtheoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that ..."
Abstract

Cited by 320 (19 self)
 Add to MetaCart
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
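The coin-flipping probability described above can be made concrete on a toy machine. The four-instruction machine below is invented for illustration (it is not the paper's universal machine); requiring that a program end exactly on its halt instruction makes the set of valid programs prefix-free, so the weights 2^-|p| sum to at most 1.

```python
from itertools import product

def run(program):
    """Interpret a bitstring on a toy prefix-free machine.

    Instructions are read two bits at a time:
      00 -> emit '0', 01 -> emit '1',
      11 -> double the output so far, 10 -> halt.
    A program is valid only if it halts exactly at its last
    instruction, which makes the valid programs prefix-free.
    """
    out = ""
    i = 0
    while i + 1 < len(program):
        op = program[i:i+2]
        i += 2
        if op == "00":
            out += "0"
        elif op == "01":
            out += "1"
        elif op == "11":
            out += out
        else:  # "10": halt -- valid only at the very end
            return out if i == len(program) else None
    return None  # ran off the end without halting

def algorithmic_probability(target, max_len=16):
    """Sum 2^-|p| over all halting programs p (up to max_len bits)
    whose output is `target`: the chance that coin flips produce it."""
    prob = 0.0
    for n in range(2, max_len + 1, 2):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if run(p) == target:
                prob += 2.0 ** -n
    return prob

# '0000' has short descriptions via doubling (e.g. 00 11 11 10),
# so it is more probable than the less regular string '0110',
# which can essentially only be spelled out literally.
print(algorithmic_probability("0000") > algorithmic_probability("0110"))  # True
```

The brute-force enumeration is exponential in `max_len`, which is why this only works as a toy; for a universal machine the sum is the (uncomputable) Solomonoff-Levin distribution the later entries in this list refer to.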
Trivial Reals
"... Solovay showed that there are noncomputable reals ff such that H(ff _ n) 6 H(1n) + O(1), where H is prefixfree Kolmogorov complexity. Such Htrivial reals are interesting due to the connection between algorithmic complexity and effective randomness. We give a new, easier construction of an Htrivi ..."
Abstract

Cited by 57 (31 self)
 Add to MetaCart
Solovay showed that there are noncomputable reals α such that H(α↾n) ≤ H(1^n) + O(1), where H is prefix-free Kolmogorov complexity. Such H-trivial reals are interesting due to the connection between algorithmic complexity and effective randomness. We give a new, easier construction of an H-trivial real. We also analyze various computability-theoretic properties of the H-trivial reals, showing for example that no H-trivial real can compute the halting problem. Therefore, our construction of an H-trivial computably enumerable set is an easy, injury-free construction of an incomplete computably enumerable set. Finally, we relate the H-trivials to other classes of "highly nonrandom" reals that have been previously studied.
Discovering Neural Nets With Low Kolmogorov Complexity And High Generalization Capability
 Neural Networks
, 1997
"... Many neural net learning algorithms aim at finding "simple" nets to explain training data. The expectation is: the "simpler" the networks, the better the generalization on test data (! Occam's razor). Previous implementations, however, use measures for "simplicity" that lack the power, universali ..."
Abstract

Cited by 50 (31 self)
 Add to MetaCart
Many neural net learning algorithms aim at finding "simple" nets to explain training data. The expectation is: the "simpler" the networks, the better the generalization on test data (→ Occam's razor). Previous implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) ...
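For reference, the time-bounded quantity the abstract mentions is usually written Kt. A standard formulation (notation may differ from the paper's), with U a fixed universal machine, charges a program for both its length and the logarithm of its running time:

```latex
Kt(x) = \min_{p,\,t} \left\{\, |p| + \log_2 t \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\right\}
```

Searching programs in order of increasing |p| + log2 t (Levin search) then finds a solution in time within a constant factor of optimal over the enumerated program class.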
Algorithmic Complexity and Stochastic Properties of Finite Binary Sequences
, 1999
"... This paper is a survey of concepts and results related to simple Kolmogorov complexity, prefix complexity and resourcebounded complexity. We also consider a new type of complexity statistical complexity closely related to mathematical statistics. Unlike other discoverers of algorithmic complexit ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
This paper is a survey of concepts and results related to simple Kolmogorov complexity, prefix complexity and resource-bounded complexity. We also consider a new type of complexity, statistical complexity, closely related to mathematical statistics. Unlike the other discoverers of algorithmic complexity, A. N. Kolmogorov was chiefly motivated by developing, on its basis, a mathematical theory more adequately substantiating applications of probability theory, mathematical statistics and information theory. Kolmogorov wanted to deduce the properties of a random object from its complexity characteristics, without use of the notion of probability. In the first part of this paper we present several results in this direction. Though the subsequent development of algorithmic complexity and randomness took a different path, algorithmic complexity has found successful applications in a traditional probabilistic framework. In the second part of the paper we consider applications to the estimation of parameters and the definition of Bernoulli sequences. All considerations have a finite combinatorial character.
Discovering Problem Solutions with Low Kolmogorov Complexity and High Generalization Capability
 MACHINE LEARNING: PROCEEDINGS OF THE TWELFTH INTERNATIONAL CONFERENCE
, 1994
"... Many machine learning algorithms aim at finding "simple" rules to explain training data. The expectation is: the "simpler" the rules, the better the generalization on test data (! Occam's razor). Most practical implementations, however, use measures for "simplicity" that lack the power, universality ..."
Abstract

Cited by 16 (8 self)
 Add to MetaCart
Many machine learning algorithms aim at finding "simple" rules to explain training data. The expectation is: the "simpler" the rules, the better the generalization on test data (→ Occam's razor). Most practical implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and ...
Randomness, computability, and density
 SIAM Journal on Computing
, 2002
"... 1 Introduction In this paper we are concerned with effectively generated reals in the interval (0; 1] and their relative randomness. In what follows, real and rational will mean positive real and positive rational, respectively. It will be convenient to work modulo 1, that is, identifying n + ff and ..."
Abstract

Cited by 13 (6 self)
 Add to MetaCart
1 Introduction. In this paper we are concerned with effectively generated reals in the interval (0, 1] and their relative randomness. In what follows, real and rational will mean positive real and positive rational, respectively. It will be convenient to work modulo 1, that is, identifying n + α and α for any n ∈ ω and α ∈ (0, 1], and we do this below without further comment.
Computational depth and reducibility
 Theoretical Computer Science
, 1994
"... This paper reviews and investigates Bennett's notions of strong and weak computational depth (also called logical depth) for in nite binary sequences. Roughly, an in nite binary sequence x is de ned to be weakly useful if every element of a nonnegligible set of decidable sequences is reducible to x ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
This paper reviews and investigates Bennett's notions of strong and weak computational depth (also called logical depth) for infinite binary sequences. Roughly, an infinite binary sequence x is defined to be weakly useful if every element of a nonnegligible set of decidable sequences is reducible to x in recursively bounded time. It is shown that every weakly useful sequence is strongly deep. This result (which generalizes Bennett's observation that the halting problem is strongly deep) implies that every high Turing degree contains strongly deep sequences. It is also shown that, in the sense of Baire category, almost ...
Relations between varieties of Kolmogorov complexity
 Mathematical Systems Theory
, 1996
"... Abstract. There are several sorts of Kolmogorov complexity, better to say several Kolmogorov complexities: decision complexity, simple complexity, prefix complexity, monotonic complexity, a priori complexity. The last three can and the first two cannot be used for defining randomness of an infinite ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
Abstract. There are several sorts of Kolmogorov complexity, or better to say several Kolmogorov complexities: decision complexity, simple complexity, prefix complexity, monotonic complexity, a priori complexity. The last three can and the first two cannot be used for defining randomness of an infinite binary sequence. All those five versions of Kolmogorov complexity were considered, from a unified point of view, in a paper by the first author which appeared in Watanabe’s book [23]. Upper and lower bounds for those complexities and also for their differences were announced in that paper without proofs. (Some of those bounds are mentioned in Section 4.4.5 of [16].) The purpose of this paper (which can be read independently of [23]) is to give proofs for the bounds from [23]. The terminology used in this paper is somewhat nonstandard: we call “Kolmogorov entropy” what is usually called “Kolmogorov complexity.” This is a Moscow tradition suggested by Kolmogorov himself. By this tradition the term “complexity” relates to any mode of description and “entropy” is the complexity related to an optimal mode (i.e., to a mode that, roughly speaking, gives the shortest descriptions).
Complexity Approximation Principle
 Computer Journal
, 1999
"... INTRODUCTION The subject of this note is another inductive principle, which can be regarded as a direct generalization of the minimum description length (MDL) and minimum message length (MML) principles. We will describe the work started at the Computer Learning Research Centre (Royal Holloway, Uni ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
INTRODUCTION. The subject of this note is another inductive principle, which can be regarded as a direct generalization of the minimum description length (MDL) and minimum message length (MML) principles. We will describe the work started at the Computer Learning Research Centre (Royal Holloway, University of London) related to this new principle, which we call the complexity approximation principle (CAP). Both MDL and MML principles can be interpreted as Kolmogorov complexity approximation principles (as explained in Rissanen [1, 2] and Wallace and Freeman [3]; see also [4]). It is shown in [5] and [6] that it is possible to generalize Kolmogorov complexity to describe the optimal performance in different 'games of prediction'. Using this general notion, called predictive complexity, it is straightforward to extend the MDL and MML principles to our more general CAP. In Section 2 we define predictive complexity, in Section 3 several examples are given, and in Section 4 ...
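The MDL idea these principles generalize can be illustrated with a minimal two-part code (a sketch for intuition, not the paper's CAP): encode a binary string either literally, or by first transmitting its number of ones (the "model") and then the string's index among all strings with that many ones (the "data given the model"). The two-part code wins exactly when the string is regular enough that the model pays for itself.

```python
from math import comb, log2

def literal_bits(bits):
    """Baseline code: send the string bit by bit (n bits)."""
    return len(bits)

def two_part_bits(bits):
    """Two-part MDL code: first the model (k = number of ones,
    in log2(n+1) bits), then the data given the model (the index
    of the string among all C(n, k) strings with exactly k ones,
    in log2(C(n, k)) bits)."""
    n, k = len(bits), bits.count("1")
    return log2(n + 1) + log2(comb(n, k))

biased = "0" * 28 + "1" * 4   # mostly zeros: the model compresses
balanced = "01" * 16          # k = n/2: the model overhead is wasted

print(two_part_bits(biased) < literal_bits(biased))      # True
print(two_part_bits(balanced) > literal_bits(balanced))  # True
```

Choosing, among candidate models, the one minimizing this total codelength is the MDL principle; CAP, as the note explains, replaces the Kolmogorov-style codelength with predictive complexity.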