Results 11–20 of 38
Compression and intelligence: social environments and communication
Abstract

Cited by 4 (4 self)
Abstract. Compression has been advocated as one of the principles which pervades inductive inference and prediction and, from there, it has also been recurrent in definitions and tests of intelligence. However, this connection is less explicit in new approaches to intelligence. In this paper, we advocate that the notion of compression can appear again in definitions and tests of intelligence through the concepts of ‘mindreading’ and ‘communication’ in the context of multi-agent systems and social environments. Our main position is that two-part Minimum Message Length (MML) compression is not only more natural and effective for agents with limited resources, but it is also much more appropriate for agents in (cooperative) social environments than one-part compression schemes, particularly those using a posterior-weighted mixture of all available models following Solomonoff’s theory of prediction. We think that the realisation of these differences is important to avoid a naive view of ‘intelligence as compression’ in favour of a better understanding of how, why and where (one-part or two-part, lossless or lossy) compression is needed.
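The two-part coding the abstract contrasts with one-part mixtures can be illustrated with a toy sketch (not the paper's own formulation): the total message length is the bits needed to state a model plus the bits needed to encode the data given that model. The function name, the Bernoulli model family, and the fixed-precision parameter cost are all assumptions made purely for illustration.

```python
import math

def two_part_length(bits, precision=100):
    """Toy two-part code length for a binary string:
    part 1 states a Bernoulli parameter p at fixed precision,
    part 2 encodes the data given that model (n * entropy bits)."""
    n = len(bits)
    p = min(max(sum(bits) / n, 1e-9), 1 - 1e-9)
    model_bits = math.log2(precision)  # part 1: cost of stating the model
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return model_bits + n * entropy    # part 2: cost of the data given the model

biased = [1] * 90 + [0] * 10               # regular data: short total message
alternating = [i % 2 for i in range(100)]  # p = 0.5: incompressible under this model family
```

Under this (deliberately crude) model family, the biased string yields a much shorter two-part message than the alternating one, even though a richer model class would compress the latter too.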
Turing Tests with Turing Machines
Abstract

Cited by 3 (3 self)
Comparative tests work by finding the difference (or the absence of difference) between a reference subject and an evaluee. The Turing Test, in its standard interpretation, takes (a subset of) the human species as a reference. Motivated by recent findings and developments in the area of machine intelligence evaluation, we discuss what it would be like to have a Turing Test where the reference and the interrogator subjects are replaced by Turing Machines. This question sets the focus on several issues that are usually disregarded when dealing with the Turing Test, such as the degree of intelligence of reference and interrogator, the role of imitation (and not only prediction) in intelligence, its view from the perspective of game theory and others. Around these issues, this paper finally brings the Turing Test to the realm of Turing machines.
Randomness and Gödel's Theorem
, 1986
Abstract

Cited by 3 (0 self)
Complexity, non-predictability and randomness not only occur in quantum mechanics and nonlinear dynamics, they also occur in pure mathematics and shed new light on the limitations of the axiomatic method. In particular, we discuss a Diophantine equation exhibiting randomness, and how it yields a proof of Gödel's incompleteness theorem. Our view of the physical world has certainly changed radically during the past hundred years, as unpredictability, randomness and complexity have replaced the comfortable world of classical physics. Amazingly enough, the same thing has occurred in the world of pure mathematics, in fact, in number theory, a branch of mathematics that is concerned with the properties of the positive integers. How can an uncertainty principle apply to number theory, which has been called the queen of mathematics, and is a discipline that goes back to the ancient Greeks and is concerned with such things as the primes and their properties? Following Davis (...
Is there an Elegant Universal Theory of Prediction?
IDSIA / USI-SUPSI DALLE MOLLE INSTITUTE FOR ARTIFICIAL INTELLIGENCE, GALLERIA 2, 6928
, 2006
Abstract

Cited by 3 (0 self)
Solomonoff’s inductive learning model is a powerful, universal and highly elegant theory of sequence prediction. Its critical flaw is that it is incomputable and thus cannot be used in practice. It is sometimes suggested that it may still be useful to help guide the development of very general and powerful theories of prediction which are computable. In this paper it is shown that although powerful algorithms exist, they are necessarily highly complex. This alone makes their theoretical analysis problematic; however, it is further shown that beyond a moderate level of complexity the analysis runs into the deeper problem of Gödel incompleteness. This limits the power of mathematics to analyse and study prediction algorithms, and indeed intelligent systems in general.
Characterizing the Software Development Process: A New Approach Based on Kolmogorov Complexity
 in Computer Aided Systems Theory – EUROCAST’2001, 8th International Workshop on Computer Aided Systems Theory, ser. Lecture Notes in Computer Science
, 2001
Abstract

Cited by 2 (1 self)
Our main aim is to propose a new characterization of the software development process. We suggest that software development methodology has some limits. These limits are a clue that the software development process is more subjective and empirical than objective and formal. We use Kolmogorov complexity to develop the formal argument and to outline the informal conclusions. Kolmogorov complexity is based on the size in bits of the smallest effective description of an object and is a suitable quantitative measure of the object's information content.
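The "smallest effective description" idea the abstract invokes is incomputable, but it can be approximated from above with any general-purpose compressor. The following sketch (an illustration, not anything from the paper; the function name is an assumption) uses the zlib-compressed size as a crude computable upper bound on Kolmogorov complexity:

```python
import random
import zlib

def description_size(data: bytes) -> int:
    """Length in bytes of a zlib encoding of `data`: a crude,
    computable upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 500                  # 1000 bytes of pure structure
random.seed(0)                         # fixed seed for reproducibility
noise = bytes(random.randrange(256) for _ in range(1000))
# The structured object admits a far shorter effective description
# than the pseudo-random one of the same length.
```

The gap between the two sizes is the kind of quantitative information-content measure the abstract alludes to.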
Algorithmically Independent Sequences
, 2008
Abstract

Cited by 2 (2 self)
Two objects are independent if they do not affect each other. Independence is well-understood in classical information theory, but less so in algorithmic information theory. Working in the framework of algorithmic information theory, the paper proposes two types of independence for arbitrary infinite binary sequences and studies their properties. Our two proposed notions of independence have some of the intuitive properties that one naturally expects. For example, for every sequence x, the set of sequences that are independent (in the weaker of the two senses) of x has measure one. For both notions of independence we investigate to what extent pairs of independent sequences can be effectively constructed via Turing reductions (from one or more input sequences). In this respect, we prove several impossibility results. For example, it is shown that there is no effective way of producing from an arbitrary sequence with positive constructive Hausdorff dimension two sequences that are independent (even in the weaker type of independence) and have superlogarithmic complexity. Finally, a few conjectures and open questions are discussed.
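A purely illustrative finite-string analogue of the (in)dependence discussed above (not the paper's infinite-sequence notions; all names here are assumptions) replaces Kolmogorov complexity with compressed size and treats near-zero shared information as a sign of independence:

```python
import random
import zlib

def C(x: bytes) -> int:
    """Compressed size: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def shared_info(x: bytes, y: bytes) -> int:
    """Proxy for algorithmic mutual information, I(x:y) ~ C(x) + C(y) - C(xy).
    Values near zero suggest the strings carry no information about each other."""
    return C(x) + C(y) - C(x + y)

random.seed(1)                                        # fixed seed for reproducibility
x = bytes(random.randrange(256) for _ in range(1000))
y = bytes(random.randrange(256) for _ in range(1000))
# x shares essentially everything with itself and almost nothing with
# the independently drawn y.
```

The same C(x) + C(y) - C(xy) pattern underlies compression-based similarity measures; here it only serves to make the independence intuition concrete.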
Measuring Cognitive Abilities of Machines, Humans and Non-Human Animals in a Unified Way: towards Universal
, 2012
Abstract

Cited by 2 (2 self)
We present and develop the notion of ‘universal psychometrics’ as a subject of study, and eventually a discipline, that focusses on the measurement of cognitive abilities for the machine kingdom, which comprises any kind of individual or collective, either artificial, biological or hybrid. Universal psychometrics can be built, of course, upon the experience, techniques and methodologies from (human) psychometrics, comparative cognition and related areas. Conversely, the perspective and techniques which are being developed in the area of machine intelligence measurement using (algorithmic) information theory can be of much broader applicability and implication outside artificial intelligence. This general approach to universal psychometrics spurs the re-understanding of most (if not all) of the big issues about the measurement of cognitive abilities, and creates a new foundation for (re)defining and mathematically formalising the concept of cognitive task, evaluable subject, interface, task choice, difficulty, agent response curves, etc. We introduce the notion of a universal cognitive test and discuss whether (and when) it may be necessary for exploring the machine kingdom. On the issue of intelligence and very general abilities, we also get some results and connections with the related notions of no-free-lunch theorems and universal priors.
Lisp Program-Size Complexity II
, 1992
Abstract

Cited by 1 (1 self)
We present the information-theoretic incompleteness theorems that arise in a theory of program-size complexity based on something close to real LISP. The complexity of a formal axiomatic system is defined to be the minimum size in characters of a LISP definition of the proof-checking function associated with the formal system. Using this concrete and easy-to-understand definition, we show (a) that it is difficult to exhibit complex S-expressions, and (b) that it is difficult to determine the bits of the LISP halting probability Ω_LISP. We also construct improved versions Ω′_LISP and Ω″_LISP of the LISP halting probability that asymptotically have maximum possible LISP complexity. Copyright © 1992, Elsevier Science Publishing Co., Inc., reprinted by permission. 1. Introduction The main incompleteness theorems of my Algorithmic Information Theory monograph [1] are reformulated and proved here using a concrete and easy-to-understand definition ...