On Generating Small Clause Normal Forms
, 1998
Abstract

Cited by 93 (2 self)
In this paper we focus on two powerful techniques to obtain compact clause normal forms: renaming of formulae and refined Skolemization methods. We illustrate their effect on various examples. An exhaustive experiment over all first-order TPTP problems shows that our clause normal form transformation yields fewer clauses and fewer literals than the methods known and used so far. This often allows for exponentially shorter proofs and, in some cases, even makes it possible for a theorem prover to find a proof where it was unable to do so with more standard clause normal form transformations.
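The renaming technique mentioned in this abstract (in its simplest, definitional Tseitin-style form; the paper's refined version is more sophisticated) avoids the exponential blowup of naive distribution by introducing a fresh name for each subformula. The sketch below is my own illustration with invented function names; it contrasts clause counts for the propositional formula (a0 ∧ b0) ∨ … ∨ (a9 ∧ b9):

```python
from itertools import product

def cnf_by_distribution(disjuncts):
    # disjuncts: a disjunction of conjunctions, each a list of literals.
    # Distributing OR over AND picks one literal from every conjunct,
    # so the clause count is the product of the conjunct sizes.
    return [list(choice) for choice in product(*disjuncts)]

def cnf_by_renaming(disjuncts):
    # Introduce a fresh name r_i for each conjunct. Since every conjunct
    # occurs positively, one implication direction (r_i -> literal) suffices.
    clauses = [[f"r{i}" for i in range(len(disjuncts))]]  # r0 | r1 | ...
    for i, conj in enumerate(disjuncts):
        for lit in conj:
            clauses.append([f"-r{i}", lit])  # -r_i | lit
    return clauses

n = 10
formula = [[f"a{i}", f"b{i}"] for i in range(n)]  # (a0&b0) | ... | (a9&b9)
print(len(cnf_by_distribution(formula)))  # 2**10 = 1024 clauses
print(len(cnf_by_renaming(formula)))      # 2*10 + 1 = 21 clauses
```

Renaming trades clause count for extra variables; occurrences with mixed polarity would need implications in both directions.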
Algorithmic Theories Of Everything
, 2000
Abstract

Cited by 32 (15 self)
The probability distribution P from which the history of our universe is sampled represents a theory of everything, or TOE. We assume P is formally describable. Since most (uncountably many) distributions are not, this imposes a strong inductive bias. We show that P(x) is small for any universe x lacking a short description, and study the spectrum of TOEs spanned by two Ps, one reflecting the most compact constructive descriptions, the other the fastest way of computing everything. The former derives from generalizations of traditional computability, Solomonoff’s algorithmic probability, Kolmogorov complexity, and objects more random than Chaitin’s Omega; the latter from Levin’s universal search and a natural resource-oriented postulate: the cumulative prior probability of all x incomputable within time t by this optimal algorithm should be 1/t. Between both Ps we find a universal cumulatively enumerable measure that dominates traditional enumerable measures; any such CEM must assign low probability to any universe lacking a short enumerating program. We derive P-specific consequences for evolving observers, inductive reasoning, quantum physics, philosophy, and the expected duration of our universe.
Complexity Results for First-Order Two-Variable Logic with Counting
, 2000
Abstract

Cited by 28 (1 self)
Let C^2_p denote the class of first-order sentences with two variables and with additional quantifiers "there exists exactly (at most, at least) i", for i ≤ p, and let C^2 be the union of the C^2_p taken over all integers p. We prove that the satisfiability problem for C^2_1 sentences is NEXPTIME-complete. This strengthens the results of E. Grädel, Ph. Kolaitis and M. Vardi [15], who showed that the satisfiability problem for the first-order two-variable logic L^2 is NEXPTIME-complete, and of E. Grädel, M. Otto and E. Rosen [16], who proved the decidability of C^2. Our result easily implies that the satisfiability problem for C^2 is in nondeterministic doubly exponential time. It is interesting that C^2_1 is in NEXPTIME in spite of the fact that there are sentences whose minimal (and only) models are of doubly exponential size. It is worth noticing that, by a recent result of E. Grädel, M. Otto and E. Rosen [17], extensions of the two-variable logic L^2 by a weak access to car...
Gödel machines: Fully self-referential optimal universal self-improvers
 In B. Goertzel and C. Pennachin (eds.), Artificial General Intelligence
, 2006
Abstract

Cited by 25 (12 self)
Summary. We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt Gödel’s celebrated self-referential formulas (1931), such a problem solver rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function, the hardware, and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal, with no local maxima, since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, ours not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speedups is provable at all.
Hierarchical and modular reasoning in complex theories: The case of local theory extensions
 In Proc. 6th Int. Symp. Frontiers of Combining Systems (FroCos 2007), LNCS 4720
, 2007
Abstract

Cited by 10 (8 self)
We present an overview of results on hierarchical and modular reasoning in complex theories. We show that for a special type of extension of a base theory, which we call local, hierarchic reasoning is possible (i.e., proof tasks in the extension can be hierarchically reduced to proof tasks w.r.t. the base theory). Many theories important for computer science or mathematics fall into this class (typical examples are theories of data structures and theories of free or monotone functions, but also functions occurring in mathematical analysis). In fact, it is often necessary to consider complex extensions, in which various types of functions or data structures need to be taken into account at the same time. We show how such local theory extensions can be identified and under which conditions locality is preserved when combining theories, and we investigate possibilities of efficient modular reasoning in such theory combinations. We present several examples of application domains where local theories and local theory extensions occur in a natural way. We show, in particular, that various phenomena analyzed in the verification literature can be explained in a unified way using the notion of locality.
The Mathematical Development of Set Theory: From Cantor to Cohen
 The Bulletin of Symbolic Logic
, 1996
Abstract

Cited by 8 (2 self)
This article is dedicated to Professor Burton Dreben on his coming of age. I owe him particular thanks for his careful reading and numerous suggestions for improvement. My thanks go also to Jose Ruiz and the referee for their helpful comments. Parts of this account were given at the 1995 summer meeting of the Association for Symbolic Logic at Haifa, in the Massachusetts Institute of Technology logic seminar, and to the Paris Logic Group. The author would like to express his thanks to the various organizers, as well as his gratitude to the Hebrew University of Jerusalem for its hospitality during the preparation of this article in the autumn of 1995.
Notes on Lattice Theory
Abstract

Cited by 7 (3 self)
In the early 1890s, Richard Dedekind was working on a revised and enlarged edition of Dirichlet’s Vorlesungen über Zahlentheorie, and asked himself the following question: Given three subgroups A, B, C of an abelian group G, how many different subgroups can you get by taking intersections and sums, e.g., A + B, (A+B)∩C,
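Dedekind's question can be explored computationally: closing a set of subgroups under sum and intersection always terminates, and Dedekind showed the result has at most 28 distinct elements (the free modular lattice on three generators). The Python sketch below is my own illustration, not from the notes: it runs the closure for three concrete subgroups of (Z/2Z)^3, where this particular choice yields only 8 subgroups, well under the 28-element bound.

```python
def span(vecs):
    # Subgroup of (Z/2Z)^3 generated by vecs: close {0} under XOR with vecs.
    s = {(0, 0, 0)}
    changed = True
    while changed:
        changed = False
        for u in list(s):
            for v in vecs:
                w = tuple(a ^ b for a, b in zip(u, v))
                if w not in s:
                    s.add(w)
                    changed = True
    return frozenset(s)

# Three subgroups of (Z/2Z)^3 (an arbitrary choice for illustration).
A = span([(1, 0, 0)])
B = span([(0, 1, 0)])
C = span([(1, 1, 1)])

def close(subgroups):
    # Close a set of subgroups under sum (join) and intersection (meet).
    S = set(subgroups)
    while True:
        new = set()
        for X in S:
            for Y in S:
                new.add(span(X | Y))        # X + Y: subgroup generated by union
                new.add(frozenset(X & Y))   # X ∩ Y: intersection is a subgroup
        if new <= S:
            return S
        S |= new

lattice = close({A, B, C})
print(len(lattice))  # 8 here; Dedekind's bound says it can never exceed 28
```

Choosing subgroups in "general position" in a richer group can realize more of the 28 elements; the bound itself is independent of G.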