## A Non-behavioural, Computational Extension to the Turing Test (1998)

Venue: | International Conference on Computational Intelligence & Multimedia Applications (ICCIMA '98) |

Citations: | 33 (18 self) |

### BibTeX

@INPROCEEDINGS{Dowe98anon-behavioural,

author = {David L. Dowe and Alan R. Hajek},

title = {A Non-behavioural, Computational Extension to the Turing Test},

booktitle = {International Conference on Computational Intelligence \& Multimedia Applications (ICCIMA '98)},

year = {1998},

pages = {101--106}

}

### Abstract

We also ask the following question: given two programs H1 and H2, respectively of lengths l1 and l2 with l1 < l2, if H1 and H2 perform equally well (to date) on a Turing Test, which, if either, should be preferred for the future? We also set a challenge: if humans can presume intelligence in their ability to set the Turing Test, then we issue the additional challenge to researchers to get machines to administer the Turing Test.
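To make the preference question concrete, here is a minimal sketch assuming a Solomonoff-style universal prior, under which a program of length l bits receives prior probability proportional to 2^-l (the function name and program lengths below are illustrative, not from the paper). When both programs fit the transcript equally well, the likelihoods cancel and only the prior separates them, so the shorter program is favoured by l2 - l1 bits of log-odds.

```python
def log_odds_bits(l1: int, l2: int) -> float:
    """Posterior log-odds (in bits) favouring H1 over H2 when both fit
    the data equally well and the prior on a program of length l bits
    is proportional to 2**-l: log2(2**-l1 / 2**-l2) = l2 - l1."""
    return float(l2 - l1)

# Illustrative lengths only: H1 is 1000 bits long, H2 is 1200 bits long.
print(log_odds_bits(1000, 1200))  # 200.0 bits of log-odds for the shorter H1
```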

### Citations

498 | Stochastic Complexity - Rissanen - 1989
Citation context: ...o carry out statistical and inductive inference was suggested in the 1960s[14, 2, 19] and has been successfully implemented in Minimum Message Length (MML)[19, 22] and Minimum Description Length (MDL)[12] applications ever since, both of which are related to Kolmogorov complexity[17, 4, 8]. For the reader possibly unfamiliar with MML and MDL, consider firstly a set of data involving two variables, x...
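The two-part message idea this excerpt refers to can be sketched as follows. This is our own illustration under assumed choices (a fixed 16-bit parameter code and unit-variance Gaussian noise), not the paper's construction: the total message length is the cost of stating the hypothesis H plus the cost of the data D encoded given H, and the hypothesis yielding the shortest total message is preferred.

```python
import math

def two_part_length_bits(x1, x2, slope, param_bits=16, noise_sd=1.0):
    """Approximate two-part message length, L(H) + L(D | H), in bits,
    for the hypothesis 'x2 = slope * x1 + Gaussian(0, noise_sd) noise'."""
    l_hypothesis = param_bits  # cost of stating the slope to fixed precision
    l_data = 0.0
    for a, b in zip(x1, x2):
        r = b - slope * a
        # Negative log2-density of the residual under N(0, noise_sd^2);
        # a real MML code would also quantise the data to finite precision.
        density = math.exp(-r * r / (2 * noise_sd**2)) / (noise_sd * math.sqrt(2 * math.pi))
        l_data += -math.log2(density)
    return l_hypothesis + l_data

# Data roughly obeying x2 = 2 * x1 (think force vs. acceleration):
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.1, 3.9, 6.2, 8.0, 9.9]
print(two_part_length_bits(x1, x2, slope=2.0))  # good fit: short total message
print(two_part_length_bits(x1, x2, slope=5.0))  # poor fit inflates L(D | H)
```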

404 | A formal theory of inductive inference - Solomonoff - 1964
Citation context: ...e[23] for learning languages, but we wish to propose it for all inductive learning. The idea of using notions of compression to carry out statistical and inductive inference was suggested in the 1960s[14, 2, 19] and has been successfully implemented in Minimum Message Length (MML)[19, 22] and Minimum Description Length (MDL)[12] applications ever since, both of which are related to Kolmogorov complexity[17, ...

311 | An Information Measure for Classification - Wallace, Boulton - 1968
Citation context: ...e[23] for learning languages, but we wish to propose it for all inductive learning. The idea of using notions of compression to carry out statistical and inductive inference was suggested in the 1960s[14, 2, 19] and has been successfully implemented in Minimum Message Length (MML)[19, 22] and Minimum Description Length (MDL)[12] applications ever since, both of which are related to Kolmogorov complexity[17, ...

205 | Minimum complexity density estimation - Barron, Cover - 1991
Citation context: ...own as unsupervised concept learning or mixture modelling. MML is also invariant under parameter transformation[22, 21, 4], and MDL and MML are guaranteed to converge with probability unity [22, p241][18, 1] to the correct inference. These methods are also efficient, converging as quickly as possible. Put another way, consider a variety of hypotheses, H, for explaining some data, D. By repeated applicat...

187 | Estimation and inference by compact coding - Wallace, Freeman - 1987
Citation context: ...g. The idea of using notions of compression to carry out statistical and inductive inference was suggested in the 1960s[14, 2, 19] and has been successfully implemented in Minimum Message Length (MML)[19, 22] and Minimum Description Length (MDL)[12] applications ever since, both of which are related to Kolmogorov complexity[17, 4, 8]. For the reader possibly unfamiliar with MML and MDL, consider firstly...

34 | Intrinsic Classification by MML – the Snob program - Wallace, Dowe - 1994
Citation context: ...MML is a Bayesian method of inductive and statistical inference and machine learning. MDL and MML are universally applicable to inference problems, such as problems of statistical parameter estimation[19, 22, 20, 18, 21] and problems of intrinsic classification[19, 21], also known as unsupervised concept learning or mixture modelling. MML is also invariant under parameter transformation[22, 21, 4], and MDL and MML ar...

18 | On the length of programs for computing finite sequences - Chaitin - 1966
Citation context: ...e[23] for learning languages, but we wish to propose it for all inductive learning. The idea of using notions of compression to carry out statistical and inductive inference was suggested in the 1960s[14, 2, 19] and has been successfully implemented in Minimum Message Length (MML)[19, 22] and Minimum Description Length (MDL)[12] applications ever since, both of which are related to Kolmogorov complexity[17, ...

18 | Minds, brains, and programs - Searle - 1980
Citation context: ...ate a satisfactory response at every point in the game tree that the succession of remarks leads it to, and the human tries to catch the machine out (or concedes that (s)he can't catch it out). Searle[13] gives a parallel example in which, instead of a machine trying to simulate humanness, a human endeavours to simulate an operational understanding of Chinese. This involves a human operator with no k...

18 | MML estimation of the von Mises concentration parameter - Wallace, Dowe - 1993
Citation context: ...MML is a Bayesian method of inductive and statistical inference and machine learning. MDL and MML are universally applicable to inference problems, such as problems of statistical parameter estimation[19, 22, 20, 18, 21] and problems of intrinsic classification[19, 21], also known as unsupervised concept learning or mixture modelling. MML is also invariant under parameter transformation[22, 21, 4], and MDL and MML ar...

17 | Probabilistic reasoning as information compression by multiple alignment, unification and search: an introduction and overview - Wolff - 1999
Citation context: ...nductive Learning = compression. We wish to put forward the view that learning from some body of data is typically an act of compression of that data. Such a theory has been explicitly stated elsewhere[23] for learning languages, but we wish to propose it for all inductive learning. The idea of using notions of compression to carry out statistical and inductive inference was suggested in the 1960s[14, ...

12 | Information-theoretic football tipping - Dowe, Farr, et al. - 1996
Citation context: ...H and then D given H) and one-part compression amount to the differences between MML[19, 22] and MDL[12], and are very small. Inductive learning is equivalent to two-part compression; and (see, e.g., [3]) prediction and one-part compression are equivalent. ...initial social pleasantries. Our look-up table would thus need at least 10^15 entries so that a response could be made to the first non-trivial ...
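The equivalence of prediction and one-part compression cited here can be illustrated with a small sketch in the spirit of the probabilistic football-tipping competition of [3]; the match results and forecast probabilities below are invented for illustration. A forecaster who assigns probability p to the outcome that actually occurs pays -log2(p) bits, which is exactly the code length needed to transmit that outcome under the forecaster's model, so better predictors are precisely better one-part compressors.

```python
import math

def code_length_bits(p_outcome: float) -> float:
    """Bits needed to encode an outcome assigned probability p_outcome;
    equivalently, the log-loss the forecaster pays for that outcome."""
    return -math.log2(p_outcome)

# Invented results and forecasts for three matches (home/away winners).
games = [("home", {"home": 0.8, "away": 0.2}),
         ("away", {"home": 0.5, "away": 0.5}),
         ("home", {"home": 0.7, "away": 0.3})]

total = sum(code_length_bits(probs[winner]) for winner, probs in games)
print(f"total encoding cost: {total:.2f} bits")  # lower = better prediction
```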

12 | The Discovery of Algorithmic Probability: A Guide for the Programming of True Creativity - Solomonoff - 1995
Citation context: ...a certain foreign language moderately well, though not as well; and there are perhaps many foreign languages that you do not understand much at all. Correspondingly, you have implicitly or... [footnote 3:] References [22, 15, 12] are suggested. [footnote 4:] MML is a Bayesian method of inductive and statistical inference and machine learning. MDL and MML are universally applicable to inference problems, such as problems of statistical par...

8 | Computing machinery and intelligence (Mind) - Turing - 1950
Citation context: ...deductive learning, and let us focus instead on inductive learning. We believe that this form of learning consists in the ability to compress data. Turing introduced his famous test, "the Turing Test"[16], of (artificial) intelligence by proposing that the agent be tested for the ability to simulate by tele-type the conversational actions of a human. One possible way for a machine to carry out such a ...

7 | False Oracles and SMML Estimators - Wallace - 1989
Citation context: ...MML is a Bayesian method of inductive and statistical inference and machine learning. MDL and MML are universally applicable to inference problems, such as problems of statistical parameter estimation[19, 22, 20, 18, 21] and problems of intrinsic classification[19, 21], also known as unsupervised concept learning or mixture modelling. MML is also invariant under parameter transformation[22, 21, 4], and MDL and MML ar...

4 | Astrophysical Formulae: a Compendium for the Physicist and Astrophysicist - Lang - 1974
Citation context: ...the fact that sensible sentences can have many more than five characters more than compensates. [footnote 9:] We are grateful to Kurt Liffman for showing us calculations of how to use the critical particle density[6] threshold to derive a figure closely approximating this oft-stated result. [footnote 10:] Note firstly that (10^15)^6 = 10^90 > 10^83. So if all sequences of input sentences were possible, only six consecutive inputs...
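As a quick check of the arithmetic in that footnote (the 10^83-atom figure is the one quoted by the paper, not our estimate): a look-up table covering six consecutive inputs, each drawn from about 10^15 possibilities, already needs more entries than there are atoms in the observable universe.

```python
entries_per_input = 10**15   # lower bound on distinct input sentences
atoms_in_universe = 10**83   # oft-stated figure quoted in the paper

table_entries = entries_per_input**6        # (10**15)**6 = 10**90
print(table_entries == 10**90)              # True
print(table_entries > atoms_in_universe)    # True: table can't physically exist
```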

3 | The tourism system - Mill, Morrison - 1985
Citation context: ...above still further. The universe apparently contains certain patterns; if these patterns are pervasive enough, they are good candidates for being "laws" of nature. Let us follow in the spirit of Mill[9], Ramsey[11] and Lewis[7], and regard the "laws" of nature as those (inferred, hypothesised) regularities that figure in the minimum message length description of the universe. It is plausible that u...

1 | Strict MML and Kolmogorov Complexity - Dowe, Wallace - to appear
Citation context: ..., 19] and has been successfully implemented in Minimum Message Length (MML)[19, 22] and Minimum Description Length (MDL)[12] applications ever since, both of which are related to Kolmogorov complexity[17, 4, 8]. For the reader possibly unfamiliar with MML and MDL, consider firstly a set of data involving two variables, x1 and x2 (as it might be, force and acceleration). We begin with a long data string co...

1 | Counterfactuals - Lewis - 1973
Citation context: ...universe apparently contains certain patterns; if these patterns are pervasive enough, they are good candidates for being "laws" of nature. Let us follow in the spirit of Mill[9], Ramsey[11] and Lewis[7], and regard the "laws" of nature as those (inferred, hypothesised) regularities that figure in the minimum message length description of the universe. It is plausible that understanding the universe...