Results 1 - 8 of 8
Towards a universal theory of artificial intelligence based on algorithmic probability and sequential decisions
 Proceedings of the 12th European Conference on Machine Learning (ECML-2001)
, 2001
Abstract

Cited by 26 (10 self)
Abstract. Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown distributions. We unify both theories and give strong arguments that the resulting universal AIξ model behaves optimally in any computable environment. The major drawback of the AIξ model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIξtl, which is still superior to any other time t and length l bounded agent. The computation time of AIξtl is of the order t·2^l.
Universal Algorithmic Intelligence: A mathematical top-down approach
 Artificial General Intelligence
, 2005
Abstract

Cited by 22 (6 self)
Artificial intelligence; algorithmic probability; sequential decision theory; rational
Convergence and Error Bounds for Universal Prediction of Nonbinary Sequences
 Proceedings of the 12th European Conference on Machine Learning (ECML-2001)
, 2001
Abstract

Cited by 21 (15 self)
Solomonoff's uncomputable universal prediction scheme ξ makes it possible to predict the next symbol xk of a sequence x1...xk−1 for any Turing computable, but otherwise unknown, probabilistic environment µ. This scheme will be generalized to arbitrary environmental classes, which, among other things, allows the construction of computable universal prediction schemes ξ. Convergence of ξ to µ in a conditional mean squared sense and with µ probability 1 is proven. It is shown that the average number of prediction errors made by the universal ξ scheme rapidly converges to those made by the best possible informed µ scheme. The schemes, theorems and proofs are given for a general finite alphabet, which results in additional complications compared to the binary case. Several extensions of the presented theory and results are outlined. They include general loss functions and bounds, games of chance, infinite alphabet, partial and delayed prediction, classification, and more active systems.
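The error-bound claim in this abstract can be sketched concretely. Below is a minimal, hedged illustration in Python: the environment class M is a small hypothetical set of Bernoulli sources (not the uncomputable class of all enumerable semimeasures), the mixture ξ is the posterior-weighted average over M, and the cumulative errors of the ξ-predictor are compared with those of the informed µ-predictor. All numbers are made up for illustration.

```python
# Sketch: universal prediction over a *finite* environment class, not
# Solomonoff's full enumerable-semimeasure class (that is uncomputable).
# M = a few Bernoulli(p) sources; the true mu is one of them. The mixture
# xi(next symbol | history) is the posterior-weighted average over M.
import random

random.seed(0)

class_ps = [0.1, 0.3, 0.5, 0.7, 0.9]   # hypothetical class M of Bernoulli envs
true_p = 0.7                            # the unknown true environment mu
weights = [1.0 / len(class_ps)] * len(class_ps)  # uniform prior w_i

xi_errors = 0   # errors of the universal mixture predictor
mu_errors = 0   # errors of the informed predictor that knows mu

for k in range(2000):
    # xi's predictive probability that the next symbol is 1
    xi_prob_1 = sum(w * p for w, p in zip(weights, class_ps))
    xi_guess = 1 if xi_prob_1 >= 0.5 else 0
    mu_guess = 1 if true_p >= 0.5 else 0

    x = 1 if random.random() < true_p else 0   # symbol drawn from mu
    xi_errors += (xi_guess != x)
    mu_errors += (mu_guess != x)

    # Bayes-update the mixture weights on the observed symbol
    likelihoods = [p if x == 1 else 1.0 - p for p in class_ps]
    total = sum(w * l for w, l in zip(weights, likelihoods))
    weights = [w * l / total for w, l in zip(weights, likelihoods)]

print(xi_errors - mu_errors)  # small: xi quickly locks on to mu
```

In line with the theorem sketched in the abstract, the excess number of errors of the ξ-predictor over the informed µ-predictor stays small and bounded: the posterior concentrates on the true source after a handful of observations.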
A Theory of Universal Artificial Intelligence based on Algorithmic Complexity
, 2000
Abstract

Cited by 19 (11 self)
Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXItl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXItl is of the order t·2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches.
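The decision-theoretic core described here can be illustrated, very loosely, with a toy expectimax agent over a finite, known environment class. Everything below is hypothetical: the environment class is a pair of two-armed bandits, whereas AIXI's mixture ranges over all enumerable environments and is uncomputable; the exponential cost of the brute-force recursion echoes the t·2^l blow-up of AIXItl.

```python
# Hedged toy sketch of the Bayes-optimal decision rule that the AIXI model
# formalizes. envs[i][a] = P(reward = 1 | action a) for candidate env i.
envs = [(0.9, 0.1), (0.2, 0.8)]

def value_and_action(weights, horizon):
    """Expectimax: best mixture-expected total reward and the action achieving it."""
    if horizon == 0:
        return 0.0, None
    best_v, best_a = -1.0, None
    for a in (0, 1):
        v = 0.0
        for r in (0, 1):
            # mixture probability of reward r after action a
            pr = sum(w * (e[a] if r else 1 - e[a]) for w, e in zip(weights, envs))
            if pr == 0.0:
                continue
            # Bayes posterior over environments given the pair (a, r)
            post = [w * (e[a] if r else 1 - e[a]) / pr for w, e in zip(weights, envs)]
            v += pr * (r + value_and_action(post, horizon - 1)[0])
        if v > best_v:
            best_v, best_a = v, a
    return best_v, best_a

# With the posterior favouring the second environment, the agent pulls arm 1.
print(value_and_action([0.1, 0.9], 3)[1])  # -> 1
```

The recursion makes the trade-off explicit: each candidate action is scored by its immediate mixture-expected reward plus the value of the updated posterior, which is the sequential-decision half of the unification the abstract describes.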
General Loss Bounds for Universal Sequence Prediction
, 2001
Abstract

Cited by 14 (9 self)
The Bayesian framework is ideally suited for induction problems. The probability of observing $x_k$ at time $k$, given past observations $x_1...x_{k-1}$ can be computed with Bayes' rule if the true distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. The problem, however, is that in many cases one does not even have a reasonable estimate of the true distribution. In order to overcome this problem a universal distribution $\xi$ is defined as a weighted sum of distributions $\mu_i\in M$, where $M$ is any countable set of distributions including $\mu$. This is a generalization of Solomonoff induction, in which $M$ is the set of all enumerable semimeasures. Systems which predict $y_k$, given $x_1...x_{k-1}$ and which receive loss $l_{x_k y_k}$ if $x_k$ is the true next symbol of the sequence are considered. It is proven that using the universal $\xi$ as a prior is nearly as good as using the unknown true distribution $\mu$. Furthermore, games of chance, defined as a sequence of bets, observations, and rewards are studied. The time needed to reach the winning zone is estimated. Extensions to arbitrary alphabets, partial and delayed prediction, and more active systems are discussed.
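The loss-based setting of this abstract admits a compact sketch: the predictor outputs the symbol $y_k$ minimizing the $\xi$-expected loss $\sum_x \xi(x\,|\,\text{history})\, l_{x y}$. The class $M$ below is a hypothetical pair of Bernoulli sources with an invented asymmetric loss matrix, standing in for the general countable class of the paper.

```python
# Hedged sketch of Bayes-optimal prediction under a general loss function.
# loss[x][y] is the loss suffered when y is predicted and x is the true symbol.
class_ps = [0.25, 0.75]       # hypothetical M: candidate values of P(next = 1)
weights = [0.5, 0.5]          # prior w_i over M
loss = [[0.0, 1.0],           # rows = true x, columns = guess y
        [2.0, 0.0]]           # missing a 1 costs twice as much as a false alarm

def xi_prob_1(weights):
    """Mixture probability that the next symbol is 1."""
    return sum(w * p for w, p in zip(weights, class_ps))

def bayes_optimal_guess(weights):
    """Pick y minimizing the xi-expected loss."""
    p1 = xi_prob_1(weights)
    exp_loss = [(1 - p1) * loss[0][y] + p1 * loss[1][y] for y in (0, 1)]
    return min((0, 1), key=lambda y: exp_loss[y])

def update(weights, x):
    """Bayes-update the mixture weights on observed symbol x."""
    like = [p if x == 1 else 1 - p for p in class_ps]
    total = sum(w * l for w, l in zip(weights, like))
    return [w * l / total for w, l in zip(weights, like)]

print(bayes_optimal_guess(weights))  # -> 1: the asymmetric loss favours guessing 1
w = weights
for _ in range(3):
    w = update(w, 0)                 # observe three 0s
print(bayes_optimal_guess(w))        # -> 0: the posterior now favours the 0.25 source
```

Note that the asymmetric loss shifts the decision threshold away from 1/2: the guess flips only once the posterior evidence for the low-rate source outweighs the extra cost of missing a 1.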
Speculations on Biology, Information and Complexity
, 2006
Abstract

Cited by 3 (2 self)
It would be nice to have a mathematical understanding of basic biological concepts and to be able to prove that life must evolve in very general circumstances. At present we are far from being able to do this. But I’ll discuss some partial steps in this direction plus what I regard as a possible future line of attack. Can Darwinian evolution be made into a mathematical theory? Is there a fundamental mathematical theory for biology? Darwin = math?! In 1960 the physicist Eugene Wigner published a paper with a wonderful title, “The unreasonable effectiveness of mathematics in the natural sciences.” In this paper he marveled at the miracle that pure mathematics is so often extremely useful in theoretical physics. To me this does not seem so marvelous, since mathematics and physics coevolved. That however does not diminish the miracle that at a fundamental level Nature is ruled by
A gentle introduction to the universal algorithmic agent AIXI
 Real AI: New Approaches to Artificial General Intelligence
, 2003
Abstract

Cited by 3 (0 self)
Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXItl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXItl is of the order t·2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches.
Convergence and Error Bounds for Universal Prediction
, 2001
Abstract
Keywords: finite nonbinary alphabet; convergence; error bounds; games of chance; partial and delayed prediction; classification. Solomonoff’s uncomputable universal prediction scheme ξ makes it possible to predict the next symbol xk of a sequence x1...xk−1 for any Turing computable, but otherwise unknown, probabilistic environment µ. This scheme will be generalized to arbitrary environmental classes, which, among other things, allows the construction of computable universal prediction schemes ξ. Convergence of ξ to µ in a conditional mean squared sense and with µ probability 1 is proven. It is shown that the average number of prediction errors made by the universal ξ scheme rapidly converges to those made by the best possible informed µ scheme. The schemes, theorems and proofs are given for a general finite alphabet, which results in additional complications compared to the binary case. Several extensions of the presented theory and results are outlined. They include general loss functions and bounds, games of chance, infinite alphabet, partial and delayed prediction, classification, and more active systems. This work was supported by SNF grant 2000-61847.00 to Jürgen Schmidhuber.