Results 1–10 of 161
The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms
 Russian Math. Surveys
, 1970
Abstract

Cited by 198 (1 self)
In 1964 Kolmogorov introduced the concept of the complexity of a finite object (for instance, a word in a certain alphabet). He defined complexity as the minimum number of binary signs containing all the information about a given object that is sufficient for its recovery (decoding). This definition depends essentially on the method of decoding. However, by means of the general theory of algorithms, Kolmogorov was able to give an invariant (universal) definition of complexity. Related concepts were investigated by Solomonoff (U.S.A.) and Markov. Using the concept of complexity, Kolmogorov gave definitions of the quantity of information in finite objects and of the concept of a random sequence (later made more precise by Martin-Löf). Afterwards, this circle of questions developed rapidly. In particular, the ideas of Markov on applying the concept of complexity to quantitative questions in the theory of algorithms underwent an interesting development. The present article is a survey of the fundamental results connected with the brief remarks above.
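Kolmogorov complexity itself is uncomputable, but any lossless compressor gives a computable upper bound on it, and the constant decoder overhead separating two such bounds mirrors the invariance that makes the definition universal. A minimal Python sketch of this idea (the function name and the choice of zlib are illustrative, not from the survey):

```python
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length in bits of a compressed encoding of `data`.

    K(x) is uncomputable; any lossless compressor gives an upper
    bound, valid up to the constant size of the fixed decompressor
    (the same constant that appears in the invariance theorem).
    """
    return 8 * len(zlib.compress(data, 9))

# A highly structured 1000-byte string compresses far below 8000 bits,
# witnessing its low complexity; a truly random string would not.
regular = b"ab" * 500
print(complexity_upper_bound(regular))
```

Note the asymmetry: a short compressed form proves low complexity, but a long one proves nothing, since the compressor may simply have missed the structure.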
Models of Computation – Exploring the Power of Computing
Abstract

Cited by 62 (5 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter as computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although ...
The History and Status of the P versus NP Question
, 1992
Abstract

Cited by 52 (1 self)
this article, I have attempted to organize and describe this literature, including an occasional opinion about the most fruitful directions, but no technical details. In the first half of this century, work on the power of formal systems led to the formalization of the notion of algorithm and the realization that certain problems are algorithmically unsolvable. At around this time, forerunners of the programmable computing machine were beginning to appear. As mathematicians contemplated the practical capabilities and limitations of such devices, computational complexity theory emerged from the theory of algorithmic unsolvability. Early on, a particular type of computational task became evident, where one is seeking an object which lies
The Fastest and Shortest Algorithm for All Well-Defined Problems
, 2002
Abstract

Cited by 36 (6 self)
An algorithm M is described that solves any well-defined problem p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5 and low-order additive terms. M optimally distributes resources between the execution of provably correct p-solving programs and an enumeration of all proofs, including relevant proofs of program correctness and of time bounds on program runtimes. M avoids Blum's speedup theorem by ignoring programs without a correctness proof. M has broader applicability and can be faster than Levin's universal search, the fastest method for inverting functions save for a large multiplicative constant. An extension of Kolmogorov complexity and two novel natural measures of function complexity are used to show that the most efficient program computing some function f is also among the shortest programs provably computing f.
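The abstract contrasts M with Levin's universal search, which time-shares all candidate programs so that the i-th program receives roughly a 2^-i fraction of the total time. A toy Python sketch of that scheduling idea (the generator-based "programs" and this particular phase schedule are illustrative simplifications, not Hutter's algorithm M):

```python
from itertools import count

def universal_search(programs, is_solution, max_phase=30):
    """Toy Levin-style search. Each 'program' is a zero-argument
    callable returning a generator of candidate outputs. In phase k,
    program i (for i <= k) is advanced 2**(k - i) steps, so program i
    receives roughly a 2**-i share of total time, mirroring Levin's
    2^{-l(p)} schedule. The first verified output is returned."""
    running = {}
    for k in range(max_phase):
        if k < len(programs):
            running[k] = programs[k]()        # start program k
        for i, gen in list(running.items()):
            for _ in range(2 ** (k - i)):
                try:
                    out = next(gen)
                except StopIteration:
                    del running[i]            # program i halted
                    break
                if is_solution(out):
                    return out
    return None

# Invert f(n) = n*n at 144 with two exhaustive 'programs':
slow = lambda: (n for n in count(0))    # tries 0, 1, 2, ...
fast = lambda: (n for n in count(10))   # starts closer to the answer
print(universal_search([slow, fast], lambda n: n * n == 144))  # -> 12
```

Because verification is cheap while search is expensive, the overall slowdown relative to the single best program is only a multiplicative constant, which is the property M then improves to a factor of 5 plus additive terms.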
A Formal Definition of Intelligence Based on an Intensional Variant of Algorithmic Complexity
 In Proceedings of the International Symposium of Engineering of Intelligent Systems (EIS'98)
, 1998
Abstract

Cited by 29 (18 self)
Machine. Due to the current technology of the computers we can use, we have chosen an extremely abridged emulation of the machine that will effectively run the programs, instead of more proper languages like λ-calculus (or LISP). We have adapted the "toy RISC" machine of [Hernández & Hernández 1993] with two remarkable features inherited from its object-oriented coding in C++: it is easily tunable for our needs, and it is efficient. We have made it even more reduced, removing any operand in the instruction set, even for the loop operations. We have only three registers: AX (the accumulator), BX, and CX. The operations Q_b we have used for our experiment are in Table 1: LOOPTOP decrements CX; if it is not equal to the first element, jump to the program top.
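Such a reduced machine can be sketched as a small interpreter. In the sketch below, only the three registers and the LOOPTOP decrement-and-jump behaviour come from the abstract; the other opcodes are hypothetical stand-ins for the paper's Table 1, which is not reproduced here:

```python
def run(program, cx=0, max_steps=10_000):
    """Interpreter for an operand-free three-register machine in the
    spirit of the paper's reduced "toy RISC". AX is the accumulator;
    the opcodes INCAX and SWAPAB are invented for illustration."""
    regs = {"AX": 0, "BX": 0, "CX": cx}
    pc, steps = 0, 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        steps += 1
        if op == "INCAX":        # hypothetical: AX := AX + 1
            regs["AX"] += 1
        elif op == "SWAPAB":     # hypothetical: exchange AX and BX
            regs["AX"], regs["BX"] = regs["BX"], regs["AX"]
        elif op == "LOOPTOP":    # from the abstract: decrement CX and,
            regs["CX"] -= 1      # while it is nonzero, jump to the top
            if regs["CX"] != 0:
                pc = 0
                continue
        pc += 1
    return regs

# A CX-bounded loop: AX is incremented once per iteration, 5 times.
print(run(["INCAX", "LOOPTOP"], cx=5)["AX"])  # -> 5
```

Operand-free instructions keep the program alphabet tiny, which matters when programs are enumerated or measured by length, as in the paper's complexity-based definition of intelligence.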
Computational complexity and the existence of complexity gaps
 Dep. of
, 1969
Abstract

Cited by 27 (0 self)
ABSTRACT. Some consequences of the Blum axioms for step-counting functions are investigated. Complexity classes of recursive functions are introduced, analogous to the Hartmanis-Stearns classes of recursive sequences. Arbitrarily large "gaps" are shown to occur throughout any complexity hierarchy. KEY WORDS AND PHRASES: computational complexity, measures of complexity, recursive functions, tape complexity, step-counting functions, axiomatic complexity theory
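The gap phenomenon the abstract refers to can be stated compactly; the following is the standard gap theorem in modern notation, included here for reference rather than quoted from the paper:

```latex
% Gap theorem, in modern notation. Let $\Phi$ be a Blum complexity
% measure and $C_\Phi(t)$ the class of recursive functions computable
% within measure bound $t$.
\textbf{Gap theorem.} For every total recursive $g$ with
$g(x) \ge x$ for all $x$, there exists a total recursive $t$ such that
\[
  C_\Phi(t) \;=\; C_\Phi(g \circ t),
\]
i.e.\ no function has $\Phi$-complexity strictly between $t$ and
$g \circ t$. Since $g$ may grow arbitrarily fast, the complexity
hierarchy contains arbitrarily large gaps.
```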
Gödel machines: Fully self-referential optimal universal self-improvers
 Goertzel and C. Pennachin, Artificial General Intelligence
, 2006
Abstract

Cited by 27 (13 self)
Summary. We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt Gödel's celebrated self-referential formulas (1931), such a problem solver rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function, the hardware, and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal (no local maxima!), since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, ours not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speedups is provable at all.
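The main loop can be caricatured in a few lines. In this sketch (all names invented here), "proofs of usefulness" are stubbed as direct utility comparisons, which is exactly the part a real Gödel machine derives formally from its axioms rather than evaluating:

```python
def self_improve(code, utility, rewrite_proposals, budget=100):
    """Caricature of the Godel-machine loop, illustrative only.
    Each proposal plays the role of a computable proof technique
    suggesting a self-rewrite; a rewrite is adopted only when it is
    'provably' useful -- stubbed here as a strict utility comparison,
    whereas the real machine proves the gain from encoded axioms."""
    for propose in rewrite_proposals[:budget]:
        candidate = propose(code)
        if candidate is not None and utility(candidate) > utility(code):
            code = candidate   # switch justified: strictly better
    return code

# Toy run: 'code' is just an integer, utility peaks at 42.
best = self_improve(0, lambda c: -abs(c - 42), [lambda c: c + 1] * 100)
print(best)  # -> 42
```

The substantive claim of the paper lives precisely in what this stub elides: the proof that a rewrite is useful also establishes that continuing to search for better rewrites is not, which is what makes the adopted rewrite globally rather than locally optimal.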
Ultimate Cognition à la Gödel
 COGN COMPUT
, 2009
Abstract

Cited by 22 (11 self)
"All life is problem solving," said Popper. To deal with arbitrary problems in arbitrary environments, an ultimate cognitive agent should use its limited hardware in the "best" and "most efficient" possible way. Can we formally nail down this informal statement and derive a mathematically rigorous blueprint of ultimate cognition? Yes, we can, using Kurt Gödel's celebrated self-reference trick of 1931 in a new way. Gödel exhibited the limits of mathematics and computation by creating a formula that speaks about itself, claiming to be unprovable by an algorithmic theorem prover: either the formula is true but unprovable, or math itself is flawed in an algorithmic sense. Here we describe an agent-controlling program that speaks about itself, ready to rewrite itself in arbitrary fashion once it has found a proof that the rewrite is useful according to a user-defined utility function. Any such rewrite is necessarily globally optimal (no local maxima!), since this proof necessarily must have demonstrated the uselessness of continuing the proof search for even better rewrites. Our self-referential program will optimally speed up its proof searcher and other program parts, but only if the speedup's utility is indeed provable; even ultimate cognition has limits of the Gödelian kind.
Resource analysis by sup-interpretation
 In FLOPS 2006, volume 3945 of LNCS
, 2006
Abstract

Cited by 21 (12 self)
Abstract. We propose a new method to control memory resources by static analysis. For this, we introduce the notion of sup-interpretation, which bounds from above the size of function outputs. This method applies to first-order functional programming with pattern matching. This work is related to quasi-interpretations, but we are now able to determine the resources of more algorithms, and it is easier to perform an analysis with these new tools.
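As an illustration of the definition (the program and the assignment below are a standard textbook-style example, not taken from the paper): a sup-interpretation θ assigns to each function symbol a monotone function over the non-negative reals that bounds the size of its outputs in terms of the sizes of its inputs.

```latex
% First-order program with pattern matching:
%   add(0, y)    = y
%   add(s(x), y) = s(add(x, y))
% Candidate sup-interpretation:
\theta(\mathtt{0}) = 0, \qquad
\theta(\mathtt{s})(X) = X + 1, \qquad
\theta(\mathtt{add})(X, Y) = X + Y.
% Bounding condition for the second clause (left side interprets the
% clause head, right side its body):
\theta(\mathtt{add})\bigl(\theta(\mathtt{s})(X), Y\bigr)
  = X + 1 + Y
  \;\ge\;
  \theta(\mathtt{s})\bigl(\theta(\mathtt{add})(X, Y)\bigr)
  = X + Y + 1.
```

Since the condition holds for every clause, θ(add)(X, Y) = X + Y certifies that the output of add never exceeds the combined size of its arguments.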
On the Difficulty of Computations
, 1970
Abstract

Cited by 20 (5 self)
Two practical considerations concerning the use of computing machinery are the amount of information that must be given to the machine for it to perform a given task and the time it takes the machine to perform it. The size of programs and their running time are studied for mathematical models of computing machines. The study of the amount of information (i.e., number of bits) in a computer program needed for it to put out a given finite binary sequence leads to a definition of a random sequence; the random sequences of a given length are those that require the longest programs. The study of the running time of programs for computing infinite sets of natural numbers leads to an arithmetic of computers, which is a distributive lattice.
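The definition of random sequences as those requiring the longest programs rests on a simple counting argument: there are fewer than 2^(n-c) programs shorter than n-c bits, so almost every n-bit string is incompressible up to a constant. A quick Python check of that bound (the function name is ours):

```python
def incompressible_fraction(n: int, c: int) -> float:
    """Lower bound on the fraction of n-bit strings that no program
    shorter than n - c bits can produce: there are only 2**(n-c) - 1
    such programs, and each outputs at most one string, so at least
    2**n - (2**(n-c) - 1) of the 2**n strings are c-incompressible."""
    describable = 2 ** (n - c) - 1
    return (2 ** n - describable) / 2 ** n

# Almost all 100-bit strings resist even 10 bits of compression.
print(incompressible_fraction(100, 10))  # about 0.999
```

In the paper's terms, these incompressible strings are exactly the random sequences of length n, and the argument shows they are the overwhelming majority.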