Results 1–6 of 6
Universal intelligence: A definition of machine intelligence
Minds and Machines, 2007
Abstract

Cited by 42 (11 self)
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
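The "general measure of intelligence" this abstract refers to is Legg and Hutter's universal intelligence measure. As a point of reference (reproduced from memory, not from this listing, so treat the notation as indicative):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here \(\pi\) is the agent, \(E\) the class of computable reward-summable environments, \(K(\mu)\) the Kolmogorov complexity of environment \(\mu\), and \(V^{\pi}_{\mu}\) the expected total reward \(\pi\) achieves in \(\mu\): an agent is rated highly if it performs well across many environments, with simpler environments weighted more heavily.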
What is a Random Sequence
The Mathematical Association of America, Monthly, 2002
Abstract

Cited by 4 (1 self)
Are there laws of randomness? These old and deep philosophical questions still stir controversy today. Some scholars have suggested that our difficulty in dealing with notions of randomness could be gauged by the comparatively late development of probability theory, which had a ...
Is there an Elegant Universal Theory of Prediction?
IDSIA / USI-SUPSI Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928, 2006
Abstract

Cited by 3 (0 self)
Solomonoff’s inductive learning model is a powerful, universal and highly elegant theory of sequence prediction. Its critical flaw is that it is incomputable and thus cannot be used in practice. It is sometimes suggested that it may still be useful to help guide the development of very general and powerful theories of prediction which are computable. In this paper it is shown that although powerful algorithms exist, they are necessarily highly complex. This alone makes their theoretical analysis problematic; however, it is further shown that beyond a moderate level of complexity the analysis runs into the deeper problem of Gödel incompleteness. This limits the power of mathematics to analyse and study prediction algorithms, and indeed intelligent systems in general.
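For context on the model this abstract discusses, Solomonoff's predictor weights every program that could have produced the observed data by its length. One standard formulation (an aside added here, not quoted from the paper):

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}, \qquad
M(x_{n+1} \mid x_{1:n}) \;=\; \frac{M(x_{1:n}\, x_{n+1})}{M(x_{1:n})}
```

where \(U\) is a universal prefix machine, \(\ell(p)\) is the length of program \(p\), and \(U(p) = x\ast\) means \(p\) outputs a string beginning with \(x\). The incomputability noted in the abstract stems from the sum over all programs, including non-halting ones.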
RANDOM SCATTERING OF BITS BY PREDICTION
, 909
Abstract
We investigate a population of binary mistake sequences that result from learning with parametric models of different order. We obtain estimates of their error, algorithmic complexity and divergence from a purely random Bernoulli sequence. We study the relationship of these variables to the learner's information density parameter, which is defined as the ratio between the lengths of the compressed and uncompressed files that contain the learner's decision rule. The results indicate that good learners have a low information density ρ while bad learners have a high ρ. Bad learners generate atypically chaotic mistake sequences, while good learners generate typically chaotic sequences that divide into two subgroups: the first consists of the typically stochastic sequences (with low divergence), which includes the sequences generated by the Bayes optimal predictor. The second subgroup consists of the atypically stochastic (but still typically chaotic) sequences that deviate from truly random Bernoulli sequences. Based on the static algorithmic interference model of [15], the learner here acts as a static structure which scatters the ...
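The information density ρ described above (compressed length over uncompressed length) can be sketched with an ordinary compressor standing in for an ideal, incomputable one; the function name and the choice of zlib are illustrative assumptions, not the paper's method:

```python
import os
import zlib

def information_density(decision_rule: bytes) -> float:
    """Approximate the information density rho: the ratio of the
    compressed length to the uncompressed length of the bytes that
    encode a learner's decision rule. zlib is only an upper-bound
    proxy for algorithmic complexity."""
    compressed = zlib.compress(decision_rule, 9)
    return len(compressed) / len(decision_rule)

# A highly regular rule compresses well (low rho)...
low = information_density(b"0" * 1000)
# ...while incompressible noise does not (rho near or above 1).
high = information_density(os.urandom(1000))
print(low < high)  # True: the simple rule is far more compressible
```

In the abstract's terms, a "good" learner corresponds to a low-ρ rule like the first input and a "bad" learner to a high-ρ rule like the second.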
and biological creativity, 2010
Abstract
We present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. In two different models we are able to show that evolution will occur and to characterize the rate of evolutionary progress, i.e., the rate of biological creativity. Key words and phrases: metabiology, evolution of mutating software, random walks in software space, algorithmic information theory
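The single-organism random walk the abstract describes can be illustrated with a toy sketch. The bit-flip mutation and the bit-count fitness below are simplified stand-ins: in the metabiology model the organisms are programs and the fitness function is Busy-Beaver-like and incomputable.

```python
import random

def mutate(organism: list) -> list:
    """Flip one randomly chosen bit -- a stand-in for a random
    mutation of a program."""
    child = organism.copy()
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

def fitness(organism: list) -> int:
    """Toy fitness: the number of 1-bits."""
    return sum(organism)

def hill_climb(n_bits: int = 64, steps: int = 10_000) -> int:
    """Single mutating organism: accept a mutation only if it
    strictly increases fitness -- the random walk of increasing
    fitness described in the abstract."""
    org = [0] * n_bits
    for _ in range(steps):
        child = mutate(org)
        if fitness(child) > fitness(org):
            org = child
    return fitness(org)

print(hill_climb())  # reaches the maximum fitness of 64 with overwhelming probability
```

The interesting quantity in the paper is not whether this walk reaches the top but how fast fitness grows, i.e., the rate of creativity; this sketch only shows the accept-if-better dynamics.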