Results 1–10 of 10
Recursively Enumerable Reals and Chaitin Ω Numbers
Abstract

Cited by 35 (3 self)
A real α is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay [23] and Chaitin [10] we say that an r.e. real α dominates an r.e. real β if from a good approximation of α from below one can compute a good approximation of β from below. We shall study this relation and characterize it in terms of relations between r.e. sets. Solovay's [23] Ω-like numbers are the maximal r.e. real numbers with respect to this order. They are random r.e. real numbers. The halting probability of a universal self-delimiting Turing machine (Chaitin's Ω number, [9]) is also a random r.e. real. Solovay showed that any Chaitin Ω number is Ω-like. In this paper we show that the converse implication is true as well: any Ω-like real in the unit interval is the halting probability of a universal self-delimiting Turing machine.
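The definition above is concrete enough to sketch in code: an r.e. real is one that a program can approximate from below by an increasing rational sequence, without ever knowing how close it is. A minimal illustration in Python, using exact rational arithmetic; the enumerated set and function names here are illustrative, not from the paper:

```python
from fractions import Fraction

def approximations(enumerate_set, steps):
    """Yield an increasing sequence of rationals converging to the
    r.e. real sum(2**-(n+1) for n in S), where S is the set enumerated
    by `enumerate_set`. This is the 'recursively enumerable real' idea:
    each partial sum is a computable lower bound on the limit."""
    total = Fraction(0)
    for n in enumerate_set(steps):
        total += Fraction(1, 2 ** (n + 1))
        yield total

# A toy enumeration: S = {0, 2, 4, ...}, one element per step.
def evens(steps):
    for k in range(steps):
        yield 2 * k

seq = list(approximations(evens, 10))
# The partial sums increase toward 1/2 + 1/8 + 1/32 + ... = 2/3.
```

For a genuinely non-computable r.e. real such as Ω, the enumeration would come from dovetailing the halting computations of a universal self-delimiting machine, and, unlike in this toy case, no bound on the remaining approximation error would be computable.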
Recursive computational depth
 Information and Computation
, 1999
Abstract

Cited by 19 (2 self)
In the 1980's, Bennett introduced computational depth as a formal measure of the amount of computational history that is evident in an object's structure. In particular, Bennett identified the classes of weakly deep and strongly deep sequences, and showed that the halting problem is strongly deep. Juedes, Lathrop, and Lutz subsequently extended this result by defining the class of weakly useful sequences, and proving that every weakly useful sequence is strongly deep. The present paper investigates refinements of Bennett's notions of weak and strong depth, called recursively weak depth (introduced by Fenner, Lutz and Mayordomo) and recursively strong depth (introduced here). It is argued that these refinements naturally capture Bennett's idea that deep objects are those which "contain internal evidence of a nontrivial causal history." The fundamental properties of recursive computational depth are developed, and it is shown that the recursively weakly (respectively, strongly) deep sequences form a proper subclass of the class of weakly (respectively, strongly) deep sequences. The above-mentioned theorem of Juedes, Lathrop, and Lutz is then strengthened by proving that every weakly useful sequence is recursively strongly deep. It follows from these results that not every strongly deep sequence is weakly useful, thereby answering a question posed by Juedes.
Circuit size relative to pseudorandom oracles
 Theoretical Computer Science A 107
, 1993
Abstract

Cited by 15 (4 self)
Circuit-size complexity is compared with deterministic and nondeterministic time complexity in the presence of pseudorandom oracles. The following separations are shown to hold relative to every pspace-random oracle A, and relative to almost every oracle A ∈ ESPACE. (i) NP^A is not contained in SIZE^A(2^{αn}) for any real α < 1/3. (ii) E^A is not contained in SIZE^A(2^n/n). Thus, neither NP^A nor E^A is contained in P^A/Poly. In fact, these separations are shown to hold for almost every n. Since a randomly selected oracle is pspace-random with probability one, (i) and (ii) immediately imply the corresponding random oracle separations, thus improving a result of Bennett and Gill [9] and answering open questions of Wilson [47].
Computational depth and reducibility
 Theoretical Computer Science
, 1994
Abstract

Cited by 13 (2 self)
This paper reviews and investigates Bennett's notions of strong and weak computational depth (also called logical depth) for infinite binary sequences. Roughly, an infinite binary sequence x is defined to be weakly useful if every element of a nonnegligible set of decidable sequences is reducible to x in recursively bounded time. It is shown that every weakly useful sequence is strongly deep. This result (which generalizes Bennett's observation that the halting problem is strongly deep) implies that every high Turing degree contains strongly deep sequences. It is also shown that, in the sense of Baire category, almost ...
A Glimpse into Algorithmic Information Theory
 LOGIC, LANGUAGE AND COMPUTATION, VOLUME 3, CSLI SERIES
, 1999
Abstract

Cited by 6 (6 self)
This paper is a subjective, short overview of algorithmic information theory. We critically discuss various equivalent algorithmical models of randomness motivating a "randomness hypothesis". Finally some recent results on computably enumerable random reals are reviewed.
A Goodness Measure for Phrase Learning via Compression with the MDL Principle
, 1998
Abstract

Cited by 4 (2 self)
This paper reports our ongoing research on unsupervised language learning via compression within the MDL paradigm. It formulates an empirical information-theoretical measure, description length gain, for evaluating the goodness of guessing a sequence of words (or characters) as a phrase (or a word), which can be calculated easily following classic information theory. The paper also presents a best-first learning algorithm based on this measure. Experiments on phrase and lexical learning from POS tag and character sequences, respectively, show promising results.
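The core computation can be sketched with the empirical Shannon description length of a token sequence: a phrase is a good guess if replacing its occurrences by a single new unit (plus one dictionary copy of the phrase) shortens the total description. This is a simplified reading of the measure, with illustrative names (`dl_gain`, `<PHRASE>`) not taken from the paper:

```python
from math import log2
from collections import Counter

def description_length(tokens):
    # Empirical Shannon description length: -sum c(x) * log2(c(x)/N).
    counts = Counter(tokens)
    n = len(tokens)
    return -sum(c * log2(c / n) for c in counts.values())

def dl_gain(tokens, phrase):
    """Description length gain of treating `phrase` (a tuple of tokens)
    as one new unit: DL(X) minus DL(X with the phrase replaced,
    concatenated with one copy of the phrase itself)."""
    out, i, k = [], 0, len(phrase)
    while i < len(tokens):
        if tuple(tokens[i:i + k]) == phrase:
            out.append("<PHRASE>")   # non-overlapping replacement
            i += k
        else:
            out.append(tokens[i])
            i += 1
    return description_length(tokens) - description_length(out + list(phrase))
```

A frequent collocation yields a positive gain, while a sequence that never occurs as a contiguous phrase yields a negative one, which is what makes the measure usable inside a best-first search over candidate phrases.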
Compression Depth and the Behavior of Cellular Automata
 Complex Systems
, 1997
Abstract

Cited by 2 (1 self)
A computable complexity measure analogous to computational depth is developed using the Lempel-Ziv compression algorithm. This complexity measure, which we call compression depth, is then applied to the computational output of cellular automata. We find that compression depth captures the complexity found in Wolfram Class III cellular automata, and is in good agreement with his classification scheme. We further investigate the rule space of cellular automata using Langton's λ parameter.
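A rough sketch of the idea: use the phrase count of an LZ78 parse as a crude stand-in for compressed size, and compare it across cellular-automaton classes. The paper's actual definition of compression depth is more refined; this minimal version only shows why Class III output registers as more complex than periodic Class II output:

```python
def lz78_phrase_count(bits):
    """Number of phrases in the LZ78 incremental parse of a bit string;
    more phrases = less compressible = 'more complex' here."""
    dictionary, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in dictionary:
            dictionary.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def eca_row(rule, width, steps):
    """Run an elementary cellular automaton (Wolfram rule numbering,
    periodic boundary) from a single live cell; return the final row."""
    table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        row = [table[(row[i - 1], row[i], row[(i + 1) % width])]
               for i in range(width)]
    return "".join(map(str, row))
```

The chaotic output of rule 30 (Class III) parses into noticeably more LZ78 phrases than the regular alternating output of a rule such as 250, mirroring the agreement with Wolfram's classification reported above.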
A Goodness Measure for Phrase Structure Learning via Compression with the MDL Principle
, 1998
Hard Instances of Hard Problems
Abstract
This paper investigates the instance complexities of problems that are hard or weakly hard for exponential time under polynomial-time many-one reductions. It is shown that almost every instance of almost every problem in exponential time has essentially maximal instance complexity. It follows that every weakly hard problem has a dense set of such maximally hard instances. This extends the theorem, due to Orponen, Ko, Schöning and Watanabe (1994), that every hard problem for exponential time has a dense set of maximally hard instances. Complementing this, it is shown that every hard problem for exponential time also has a dense set of unusually easy instances.