Results 1-10 of 31
Beyond the Turing Test
 J. Logic, Language & Information
Cited by 33 (18 self)
Abstract. We define the main factor of intelligence as the ability to comprehend, formalising this ability with the help of new constructs based on descriptional complexity. The result is a comprehension test, or C-test, exclusively defined in terms of universal descriptional machines (e.g., universal Turing machines). Despite the absolute and non-anthropomorphic character of the test, it is equally applicable to both humans and machines. Moreover, it correlates with classical psychometric tests, thus establishing the first firm connection between information-theoretic notions and traditional IQ tests. The Turing Test is compared with the C-test and their joint combination is discussed. As a result, the idea of the Turing Test as a practical test of intelligence should be left behind and replaced by computational and factorial tests of different cognitive abilities, a much more useful approach for artificial intelligence progress and for many other intriguing questions that lie beyond the Turing Test.
A Formal Definition of Intelligence Based on an Intensional Variant of Algorithmic Complexity
 In Proceedings of the International Symposium of Engineering of Intelligent Systems (EIS'98)
, 1998
Cited by 30 (17 self)
Machine. Due to the current technology of the computers we can use, we have chosen an extremely abridged emulation of the machine that will effectively run the programs, instead of more proper languages, like λ-calculus (or LISP). We have adapted the "toy RISC" machine of [Hernández & Hernández 1993] with two remarkable features inherited from its object-oriented coding in C++: it is easily tunable for our needs, and it is efficient. We have made it even more reduced, removing any operand in the instruction set, even for the loop operations. We have only three registers, which are AX (the accumulator), BX and CX. The operations Q_b we have used for our experiment are in Table 1: LOOPTOP decrements CX; if it is not equal to the first element, jump to the program top.
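The operand-free, three-register machine described above can be sketched as a tiny interpreter. Everything here beyond LOOPTOP's stated semantics (decrement CX; jump to the program top while CX differs from the tape's first element) is a hypothetical illustration; the real instruction set is in the paper's Table 1.

```python
def run(program, tape):
    """Interpret a list of mnemonics over the three registers AX, BX, CX.

    Only LOOPTOP follows the abstract's description; INC_CX and ADD are
    invented stand-ins for the operand-free instruction set of Table 1.
    """
    ax = bx = cx = 0
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op == "INC_CX":            # hypothetical: CX += 1
            cx += 1
        elif op == "ADD":             # hypothetical: AX += BX
            ax += bx
        elif op == "LOOPTOP":         # per the abstract: decrement CX and,
            cx -= 1                   # if it differs from the tape's first
            if cx != tape[0]:         # element, jump to the program top
                pc = 0
                continue
        pc += 1
    return ax, bx, cx

# Three increments leave CX == 3; LOOPTOP decrements it to 2, which matches
# the tape's first element, so execution falls through and halts.
registers = run(["INC_CX", "INC_CX", "INC_CX", "LOOPTOP"], [2])
```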
Inapproximability of combinatorial optimization problems
 Electronic Colloquium on Computational Complexity
, 2004
Algorithmic Complexity and Stochastic Properties of Finite Binary Sequences
, 1999
Cited by 17 (0 self)
This paper is a survey of concepts and results related to simple Kolmogorov complexity, prefix complexity and resource-bounded complexity. We also consider a new type of complexity, statistical complexity, closely related to mathematical statistics. Unlike other discoverers of algorithmic complexity, A. N. Kolmogorov's leading motive was to develop, on its basis, a mathematical theory that more adequately substantiates applications of probability theory, mathematical statistics and information theory. Kolmogorov wanted to deduce properties of a random object from its complexity characteristics without use of the notion of probability. In the first part of this paper we present several results in this direction. Though the subsequent development of algorithmic complexity and randomness was different, algorithmic complexity has successful applications in a traditional probabilistic framework. In the second part of the paper we consider applications to the estimation of parameters and the definition of Bernoulli sequences. All considerations have a finite combinatorial character.
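Kolmogorov complexity itself is uncomputable, but it can be upper-bounded by any real compressor. This is a standard practical proxy, not something this survey proposes; a minimal sketch:

```python
import os
import zlib

def compressed_length(s: bytes) -> int:
    """Upper bound on the descriptional complexity of s, in bytes:
    the length of its zlib-compressed form. K(s) is uncomputable and
    can only be approximated from above like this."""
    return len(zlib.compress(s, 9))

# A highly regular string compresses far better than random bytes,
# mirroring the simple/random gap that the theory formalizes.
regular = b"01" * 500
random_ish = os.urandom(1000)
assert compressed_length(regular) < compressed_length(random_ish)
```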
Satisfiability Allows No Nontrivial Sparsification Unless The Polynomial-Time Hierarchy Collapses
 Electronic Colloquium on Computational Complexity, Report No. 38 (2010)
, 2010
Cited by 15 (0 self)
Consider the following two-player communication process to decide a language L: The first player holds the entire input x but is polynomially bounded; the second player is computationally unbounded but does not know any part of x; their goal is to cooperatively decide whether x belongs to L at small cost, where the cost measure is the number of bits of communication from the first player to the second player. For any integer d ≥ 3 and positive real ε we show that if satisfiability for n-variable d-CNF formulas has a protocol of cost O(n^{d−ε}) then coNP is in NP/poly, which implies that the polynomial-time hierarchy collapses to its third level. The result even holds when the first player is co-nondeterministic, and is tight as there exists a trivial protocol for ε = 0. Under the hypothesis that coNP is not in NP/poly, our result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. By reduction, similar results hold for other NP-complete problems. For the vertex cover problem on n-vertex d-uniform hypergraphs, the above statement holds for any integer d ≥ 2. The case d = 2 implies that no NP-hard vertex deletion problem based on a graph property that is inherited by subgraphs can have kernels consisting of O(k^{2−ε}) edges unless coNP is in NP/poly, where k denotes the size of the deletion set. Kernels consisting of O(k^2) edges are known for several problems in the class, including vertex cover, feedback vertex set, and bounded-degree deletion.
Constructive Reinforcement Learning
 International Journal of Intelligent Systems, Wiley
Cited by 14 (11 self)
This paper presents an operative measure of reinforcement for constructive learning methods, i.e., eager learning methods using highly expressible (or universal) representation languages. These evaluation tools allow further insight into the study of the growth of knowledge, theory revision and abduction. The final approach is based on an apportionment of credit w.r.t. the 'course' that the evidence makes through the learnt theory. Our measure of reinforcement is shown to be justified by cross-validation and by the connection with other successful evaluation criteria, like the MDL principle. Finally, the relation with the classical view of reinforcement is studied, where the actions of an intelligent system can be rewarded or penalised, and we discuss whether this should affect our distribution of reinforcement. The most important result of this paper is that the way we distribute reinforcement into knowledge results in a rated ontology, instead of a single prior distribution. Therefore, this detailed information can be exploited for guiding the search space of inductive learning algorithms. Likewise, knowledge revision may be done to the part of the theory which is not justified by the ...
On the computational measurement of intelligence factors
 National Institute of Standards and Technology
, 2000
Cited by 14 (8 self)
In this paper we develop a computational framework for the measurement of different factors or abilities which are usually found in intelligent behaviours. For this, we first develop a scale for measuring the complexity of an instance of a problem, depending on the descriptional complexity (Levin LT variant) of the 'explanation' of the answer to the problem. We centre on the establishment of both deductive and inductive abilities, and we show that their evaluation settings are special cases of the general framework. Some classical dependencies between them are shown, and a way to separate these dependencies is developed. Finally, some variants of the previous factors, and other possible factors to be taken into account, are investigated. In the end, the application of these measurements for the evaluation of AI progress is discussed.
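The "Levin LT variant" named above is, in the standard literature, usually written as Levin's time-bounded complexity Kt (this identification is an assumption on our part, not stated in the abstract); for a universal machine U, the textbook definition is:

```latex
\mathrm{Kt}(x) \;=\; \min_{p}\,\bigl\{\, \ell(p) + \log t \;:\; U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\bigr\}
```

so a string counts as simple only when it has a short program that also produces it quickly, which is what makes the measure computable in the limit and usable as a difficulty scale.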
Universal Semantic Communication I
, 2008
Cited by 14 (8 self)
Is it possible for two intelligent beings to communicate meaningfully, without any common language or background? This question has interest on its own, but is especially relevant in the context of modern computational infrastructures, where an increase in the diversity of computers is making the task of inter-computer interaction increasingly burdensome. Computers spend a substantial amount of time updating their software to increase their knowledge of other computing devices. In turn, for any pair of communicating devices, one has to design software that enables the two to talk to each other. Is it possible instead to let the two computing entities use their intelligence (universality as computers) to learn each other's behavior and attain a common understanding? What is "common understanding"? We explore this question in this paper. To formalize this problem, we suggest that one should study the "goal of communication": why are the two entities interacting with each other, and what do they hope to gain by it? We propose that by considering this question explicitly, one can make progress on the question of universal communication. We start by considering a computational setting for the problem where the goal of one of the interacting players is to gain some computational wisdom from the other player. We show that if the second player is "sufficiently" helpful and powerful, then the first player can gain significant computational power (deciding PSPACE-complete languages). Our work highlights some of the definitional issues underlying the task of formalizing universal communication, but also suggests some interesting phenomena and highlights potential tools that may be used for such communication.
Computing over the Reals: Where Turing meets Newton
 Notices of the American Mathematical Society
, 2004
Cited by 9 (1 self)
The classical (Turing) theory of computation has been extraordinarily successful in providing the foundations and framework for theoretical computer science. Yet its dependence on 0s and 1s is fundamentally inadequate for providing such a foundation for modern scientific computation, in which most algorithms, with origins in Newton, Euler, Gauss, et al., are real number algorithms. In 1989, Mike Shub, Steve Smale, and I introduced a theory of computation and complexity over an arbitrary ring or field R [BSS89]. If R is Z2 = ({0, 1}, +, ·), the classical computer science theory is recovered. If R is the field of real numbers R, Newton's algorithm, the paradigm algorithm of numerical analysis, fits naturally into our model of computation. Complexity classes P, NP and the fundamental question "Does P = NP?" can be formulated naturally over an arbitrary ring R. The answer to the fundamental question depends in general on the complexity of deciding feasibility of polynomial systems over R. When R is Z2, this becomes the classical satisfiability problem of Cook–Levin [Cook71, Levin73]. When R is the field of complex numbers C, the answer depends on the complexity of Hilbert's Nullstellensatz. The notion of reduction between problems (e.g., between traveling salesman and satisfiability) has ...
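As a concrete instance of the real-number algorithms this article has in mind, here is Newton's iteration in ordinary floating point, a finite-precision stand-in for the BSS model's exact real arithmetic:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method over the reals: iterate x <- x - f(x)/f'(x)
    until |f(x)| falls below tol or max_iter steps elapse."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Root of f(x) = x^2 - 2, i.e. an approximation of sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```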