Results 1–10 of 100
Bounded Queries to SAT and the Boolean Hierarchy
Theoretical Computer Science, 1991
"... We study the complexity of decision problems that can be solved by a polynomialtime Turing machine that makes a bounded number of queries to an NP oracle. Depending on whether we allow some queries to depend on the results of other queries, we obtain two (probably) different hierarchies. We present ..."
Cited by 64 (12 self)
Abstract:
We study the complexity of decision problems that can be solved by a polynomial-time Turing machine that makes a bounded number of queries to an NP oracle. Depending on whether we allow some queries to depend on the results of other queries, we obtain two (probably) different hierarchies. We present several results relating the bounded NP query hierarchies to each other and to the Boolean hierarchy. We also consider the similarly defined hierarchies of functions that can be computed by a polynomial-time Turing machine that makes a bounded number of queries to an NP oracle. We present relations among these two hierarchies and the Boolean hierarchy. In particular, we show for all k that there are functions computable with 2k parallel queries to an NP set that are not computable in polynomial time with k serial queries to any oracle, unless P = NP. As a corollary, k + 1 parallel queries to an NP set allow us to compute more functions than are computable with only k parallel queries to a...
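As an aside on the serial/parallel distinction this abstract draws, here is a small illustrative sketch (the function names and toy oracle are assumptions, not code from the paper): an adaptive strategy that asks k queries in sequence is a binary decision tree of depth k, and flattening that tree shows why at most 2^k - 1 non-adaptive queries, asked in one parallel round, always suffice to simulate it.

```python
# Illustrative sketch only (names and the toy oracle are assumptions, not from the paper).
# A k-query serial strategy is a depth-k binary tree: which query is asked next
# depends on earlier answers.  It can be simulated non-adaptively by asking every
# query that could possibly arise (at most 2^k - 1 of them) in one parallel round.

def run_serial(tree, oracle):
    """Walk the query tree, issuing at most k queries one after another."""
    node = tree
    while isinstance(node, tuple):            # internal node: (query, if_no, if_yes)
        query, if_no, if_yes = node
        node = if_yes if oracle(query) else if_no
    return node                               # leaf: the strategy's final answer

def run_parallel(tree, oracle):
    """Collect all potential queries, ask them in one round, then walk the tree."""
    queries = []
    def collect(node):
        if isinstance(node, tuple):
            queries.append(node[0])
            collect(node[1])
            collect(node[2])
    collect(tree)                             # at most 2^k - 1 queries for depth k
    answers = {q: oracle(q) for q in queries} # the single parallel round
    node = tree
    while isinstance(node, tuple):
        query, if_no, if_yes = node
        node = if_yes if answers[query] else if_no
    return node

# Toy stand-in for the NP oracle, and a depth-2 strategy:
# 2 serial queries, simulated by 3 parallel ones.
oracle = lambda q: q % 3 == 0
tree = (6, (4, "A", "B"), (9, "C", "D"))
assert run_serial(tree, oracle) == run_parallel(tree, oracle)
```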
Hierarchies Of Generalized Kolmogorov Complexities And Nonenumerable Universal Measures Computable In The Limit
International Journal of Foundations of Computer Science, 2000
"... The traditional theory of Kolmogorov complexity and algorithmic probability focuses on monotone Turing machines with oneway writeonly output tape. This naturally leads to the universal enumerable SolomonoLevin measure. Here we introduce more general, nonenumerable but cumulatively enumerable m ..."
Cited by 38 (20 self)
Abstract:
The traditional theory of Kolmogorov complexity and algorithmic probability focuses on monotone Turing machines with one-way write-only output tape. This naturally leads to the universal enumerable Solomonoff-Levin measure. Here we introduce more general, non-enumerable but cumulatively enumerable measures (CEMs) derived from Turing machines with lexicographically nondecreasing output and random input, and even more general approximable measures and distributions computable in the limit. We obtain a natural hierarchy of generalizations of algorithmic probability and Kolmogorov complexity, suggesting that the "true" information content of some (possibly infinite) bitstring x is the size of the shortest nonhalting program that converges to x and nothing but x on a Turing machine that can edit its previous outputs. Among other things we show that there are objects computable in the limit yet more random than Chaitin's "number of wisdom" Omega, that any approximable measure of x is small for any x lacking a short description, that there is no universal approximable distribution, that there is a universal CEM, and that any non-enumerable CEM of x is small for any x lacking a short enumerating program. We briefly mention consequences for universes sampled from such priors.
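For orientation, the levels mentioned in this abstract can be summarized roughly as follows (the notation here is assumed for illustration, not quoted from the paper):

```latex
% Rough summary of the measure hierarchy discussed above (notation assumed, not the paper's):
\begin{align*}
\mu \ \text{enumerable} &\iff \mu(x)=\lim_{t\to\infty}\mu_t(x) \ \text{for a computable, nondecreasing sequence } \mu_t(x),\\
\mu \ \text{cumulatively enumerable} &\iff C\mu(x) := \textstyle\sum_{y \le x}\mu(y) \ \text{is enumerable (sum over strings } y \text{ lexicographically up to } x),\\
\mu \ \text{approximable} &\iff \mu(x)=\lim_{t\to\infty}\mu_t(x) \ \text{for some computable } \mu_t, \ \text{not necessarily monotone}.
\end{align*}
```

Each level extends the previous one, which is what yields the hierarchy of generalized Kolmogorov complexities and, per the abstract, a universal CEM but no universal approximable distribution.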
Algorithmic Theories Of Everything
2000
"... The probability distribution P from which the history of our universe is sampled represents a theory of everything or TOE. We assume P is formally describable. Since most (uncountably many) distributions are not, this imposes a strong inductive bias. We show that P(x) is small for any universe x lac ..."
Cited by 32 (15 self)
Abstract:
The probability distribution P from which the history of our universe is sampled represents a theory of everything or TOE. We assume P is formally describable. Since most (uncountably many) distributions are not, this imposes a strong inductive bias. We show that P(x) is small for any universe x lacking a short description, and study the spectrum of TOEs spanned by two Ps, one reflecting the most compact constructive descriptions, the other the fastest way of computing everything. The former derives from generalizations of traditional computability, Solomonoff’s algorithmic probability, Kolmogorov complexity, and objects more random than Chaitin’s Omega, the latter from Levin’s universal search and a natural resource-oriented postulate: the cumulative prior probability of all x incomputable within time t by this optimal algorithm should be 1/t. Between both Ps we find a universal cumulatively enumerable measure that dominates traditional enumerable measures; any such CEM must assign low probability to any universe lacking a short enumerating program. We derive P-specific consequences for evolving observers, inductive reasoning, quantum physics, philosophy, and the expected duration of our universe.
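The recurring claim that P(x) is small for any x lacking a short description follows a standard dominance pattern in algorithmic probability; a sketch of that pattern, in generic notation assumed here rather than taken from the paper:

```latex
% Generic dominance sketch (illustrative notation, not the paper's):
% if P dominates every measure \mu in its class, i.e. P(x) \ge c_\mu \mu(x), and
% -\log_2 P(x) = K_P(x) + O(1) for the associated description complexity K_P, then
\mu(x) \;\le\; \frac{1}{c_\mu}\,P(x) \;=\; \frac{1}{c_\mu}\,2^{-K_P(x)+O(1)} ,
```

so any universe x without a short description in the relevant sense receives negligible probability under every measure in the class.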
Accelerated Turing Machines
Minds and Machines, 2002
"... Abstract. Accelerating Turing machines are Turing machines of a sort able to perform tasks that are commonly regarded as impossible for Turing machines. For example, they can determine whether or not the decimal representation of π contains n consecutive 7s, for any n; solve the Turingmachine halti ..."
Cited by 28 (2 self)
Abstract:
Accelerating Turing machines are Turing machines of a sort able to perform tasks that are commonly regarded as impossible for Turing machines. For example, they can determine whether or not the decimal representation of π contains n consecutive 7s, for any n; solve the Turing-machine halting problem; and decide the predicate calculus. Are accelerating Turing machines, then, logically impossible devices? I argue that they are not. There are implications concerning the nature of effective procedures and the theoretical limits of computability. Contrary to a recent paper by Bringsjord, Bello and Ferrucci, however, the concept of an accelerating Turing machine cannot be used to shore up Searle’s Chinese room argument.
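The "acceleration" is the usual Zeno-style schedule (a standard presentation, not a quotation from the paper): the machine performs its first step in one external time unit and each later step in half the time of the one before, so

```latex
% Standard "Zeno" schedule (illustration): the n-th step takes 2^{-(n-1)} external time units, hence
\sum_{n=1}^{\infty} 2^{-(n-1)} \;=\; 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots \;=\; 2 ,
```

and infinitely many steps, for instance an exhaustive search for n consecutive 7s in π or for a halting computation, complete within two external time units.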
A NATURAL AXIOMATIZATION OF COMPUTABILITY AND PROOF OF CHURCH’S THESIS
"... Abstract. Church’s Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turingcomputable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally e ..."
Cited by 23 (10 self)
Abstract:
Church’s Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turing-computable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally equivalent to an abstract state machine. This theorem presupposes three natural postulates about algorithmic computation. Here, we show that augmenting those postulates with an additional requirement regarding basic operations gives a natural axiomatization of computability and a proof of Church’s Thesis, as Gödel and others suggested may be possible. In a similar way, but with a different set of basic operations, one can prove Turing’s Thesis, characterizing the effective string functions, and, in particular, the effectively computable functions on string representations of numbers.
The theory of the degrees below 0′
J. London Math. Soc., 1981
"... Degree theory, that is the study of the structure of the Turing degrees (or degrees of unsolvability) has been divided by Simpson [24; §5] into two parts—global and local. By the global theory he means the study of general structural properties of 3d— the degrees as a partially ordered set or uppers ..."
Cited by 18 (6 self)
Abstract:
Degree theory, that is, the study of the structure of the Turing degrees (or degrees of unsolvability), has been divided by Simpson [24; §5] into two parts: global and local. By the global theory he means the study of general structural properties of D, the degrees as a partially ordered set or upper semilattice. The local theory concerns ...
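For readers outside the area, the standard definitions behind this abstract (assumed background, not quoted from the paper) are:

```latex
% Standard background definitions (assumed, not quoted from the paper):
\begin{align*}
\mathbf{a} \le \mathbf{b} &\iff A \le_T B \ \text{for any (equivalently, all)} \ A \in \mathbf{a},\ B \in \mathbf{b},\\
\mathbf{a} \vee \mathbf{b} &= \deg(A \oplus B), \qquad A \oplus B = \{2n : n \in A\} \cup \{2n+1 : n \in B\},\\
\mathbf{0}' &= \deg(\emptyset'), \ \text{the degree of the halting problem}.
\end{align*}
```

The join always exists but meets need not, which is why the degrees form an upper semilattice rather than a lattice; the degrees below 0′ are those of sets computable from the halting problem.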
Infinitary Self Reference in Learning Theory
1994
"... Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents ..."
Cited by 18 (6 self)
Abstract:
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low-level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low-level models of themselves and the other prog...
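The "mirror" mechanism mentioned above can be caricatured in a few lines; this is an illustrative sketch only (the use of Python's inspect module as the mirror, and the names e_of_p and p, are assumptions, not the paper's construction):

```python
# Illustrative sketch of the "mirror" route to self-knowledge (assumed example).
import inspect

def e_of_p(p, external_input):
    # The mirror: the running program obtains a quiescent copy of its own
    # source code from outside itself, avoiding infinite regress.
    self_copy = inspect.getsource(e_of_p)
    # p describes how the program uses its self-knowledge plus external input.
    return p(self_copy, external_input)

def p(self_source, x):
    # A toy p: report the size of the self copy and echo the input.
    return f"my source is {len(self_source)} characters; input was {x!r}"

if __name__ == "__main__":
    print(e_of_p(p, 42))
```

Kleene's theorem, and the Operator Recursion Theorem that generalizes it, guarantee such self-referential programs exist for every p even without any reflection facility, via the quine-style self-replication trick the abstract alludes to.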
Thinking May Be More Than Computing
Cognition, 1986
"... The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machin ..."
Cited by 18 (3 self)
Abstract:
The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems might want to skim or skip the mathematical parts.)
Efficient convergence implies Ockham’s Razor
Proceedings of the 2002 International Workshop on Computational Models of Scientific Reasoning and Applications, Las Vegas, 2002
"... A finite data set is consistent with infinitely many alternative theories. Scientific realists recommend that we prefer the simplest one. Antirealists ask how a fixed simplicity bias could track the truth when the truth might be complex. It is no solution to impose a prior probability distribution ..."
Cited by 18 (15 self)
Abstract:
A finite data set is consistent with infinitely many alternative theories. Scientific realists recommend that we prefer the simplest one. Antirealists ask how a fixed simplicity bias could track the truth when the truth might be complex. It is no solution to impose a prior probability distribution biased toward simplicity, for such a distribution merely embodies the bias at issue without explaining its efficacy. In this note, I argue, on the basis of computational learning theory, that a fixed simplicity bias is necessary if inquiry is to converge to the right answer efficiently, whatever the right answer might be. Efficiency is understood in the sense of minimizing the least fixed bound on retractions or errors prior to convergence.
Keywords: learning, induction, simplicity, Ockham’s razor, realism, skepticism
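A minimal sketch of the retraction-counting idea, using the toy "count the effects" problem often used to illustrate this literature (the specific example and names are assumptions, not taken from the paper):

```python
# Toy illustration of retraction counting (assumed example, not the paper's formalism).
# Question: how many "effects" (1s) will this data stream ever contain?
# The Ockham learner always conjectures the simplest consistent answer: the count so far.

def ockham_learner(effects_seen):
    return effects_seen                      # simplest hypothesis consistent with the data

def run(stream, learner):
    truth = sum(stream)                      # the right answer for this finite toy stream
    conjecture, seen = None, 0
    retractions = errors = 0
    for bit in stream:
        seen += bit
        new = learner(seen)
        if conjecture is not None and new != conjecture:
            retractions += 1                 # a retraction: the learner changed its mind
        if new != truth:
            errors += 1                      # an error: the current conjecture is wrong
        conjecture = new
    return conjecture, retractions, errors

stream = [0, 1, 0, 0, 1, 0, 1, 0, 0]         # the true answer happens to be 3
print(run(stream, ockham_learner))            # -> (3, 3, 6): converges with 3 retractions
```

The Ockham learner retracts at most once per newly observed effect, so its retraction count never exceeds the true answer; a learner that leaps ahead of the data can be forced into strictly more retractions, which is the sense in which the simplicity bias is claimed necessary for efficient convergence.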
The New AI: General & Sound & Relevant for Physics
Artificial General Intelligence (accepted 2002), 2003
"... Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal and practically feasible algorithms for prediction, search, induct ..."
Cited by 16 (9 self)
Abstract:
Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal and practically feasible algorithms for prediction, search, inductive inference based on Occam’s razor, problem solving, decision making, and reinforcement learning in environments of a very general type. Since inductive inference is at the heart of all inductive sciences, some of the results are relevant not only for AI and computer science but also for physics, provoking nontraditional predictions based on Zuse’s thesis of the computer-generated universe.