Results 1-10 of 14
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
IEEE Transactions on Information Theory, 1998
Cited by 78 (8 self)
Abstract
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets then application of the ideal principle turns into Kolmogorov's mi...
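The two-part minimization this abstract describes can be illustrated with a toy sketch. This is not the ideal (uncomputable) principle itself: here a uniform log2(9)-bit model cost stands in for the universal prior, and the hypothetical model class is just nine Bernoulli coins, both assumptions made for illustration only.

```python
import math

def two_part_code_length(model_cost_bits, data, likelihood):
    """Total description length in bits: cost of naming the hypothesis
    plus the Shannon code length of the data given the hypothesis."""
    return model_cost_bits + sum(-math.log2(likelihood(x)) for x in data)

def bernoulli(p):
    """Likelihood of a single binary outcome under bias p."""
    return lambda x: p if x == 1 else 1 - p

# Toy model class: coin biases k/10 for k = 1..9, each costing
# log2(9) bits to name (a stand-in for its prior probability).
data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

best = min(
    ((k / 10, two_part_code_length(math.log2(9), data, bernoulli(k / 10)))
     for k in range(1, 10)),
    key=lambda t: t[1],
)
print(best)  # (bias, total code length) minimizing the two-part code
```

With 8 ones in 10 tosses, the bias 0.8 minimizes the total code length; a richer model class would also let the model cost vary per hypothesis, which is where the universal prior enters in the ideal principle.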
The Modal Logic of the Countable Random Frame
Cited by 6 (3 self)
Abstract
We study the modal logic ML_r of the countable random frame, which is contained in and 'approximates' the modal logic of almost sure frame validity, i.e. the logic of those modal principles which are valid with asymptotic probability 1 in a randomly chosen finite frame. We give a sound and complete axiomatization of ML_r and show that it is not finitely axiomatizable. We then describe the finite frames of that logic and show that it has the finite frame property and that its satisfiability problem is in EXPSPACE. All these results extend easily to temporal and other multimodal logics. Finally, we show that there are modal formulas which are almost surely valid in the finite, yet fail in the countable random frame, and hence do not follow from the extension axioms. Therefore the analog of Fagin's transfer theorem for almost sure validity in first-order logic fails for modal logic.
On the foundations of statistics: A frequentist approach
1998
Cited by 5 (3 self)
Abstract
A limited but basic problem in the foundations of statistics is the following: Given a parametric model, given perhaps some observations from the model, but given no prior information about the parameters ("total ignorance"), what can we say about the occurrence of a specified event A under this model in the future (prediction problem)? Or, as probabilities are often described in terms of bets, how can we bet on A? Bayesian solutions are internally consistent and fully conditional on the observed data, but their ties to the observed reality and their frequentist properties can be arbitrarily bad (unless, of course, the assumed prior distribution happens to be the true prior). Frequentist solutions are generally not possible with ordinary probabilities; but it is possible to define "successful bets" (using upper and lower probabilities), which even lead out of the state of total ignorance in an objective learning process converging to the true probability model. A special variant (su...
Scientific Explanation: A Critical Survey
Foundations of Science I/3: 429, 1995
A Formal Approach to Specification-Based Black-Box Testing
 In Proceedings of the Workshop on Modelling Software System Structures in a Fastly Moving Scenario
Cited by 2 (0 self)
Abstract
This paper introduces an initial account of a formal methodology for specification-based black-box verification testing of software artefacts against their specifications, as well as for validation testing of specifications against the so-called application concept [14].
Induction and the Organization of Knowledge
, 1994
Cited by 2 (0 self)
Abstract
This chapter investigates various forms of the induction principle in view of discussing the importance of a well-known objection to induction, namely Hempel's paradox. This paradox arises because an inductive hypothesis can be confirmed both by its instances (e.g., confirming that all crows are black by actually seeing a black crow) and by "negative" information (e.g., confirming that all crows are black by actually seeing a non-black non-crow entity such as a white shoe). In spite of its apparent simplicity, this paradox is still an important issue that should be considered by systems performing unsupervised learning, since they are prone to confirm their hypotheses by such irrelevant negative information. It is argued here that Hempel's paradox can be avoided by a tight integration of statistical findings with a clear organization of knowledge. Progressive organization of knowledge plays an essential role in the inductive growth of new scientific theories. In...
An AI view of the treatment of uncertainty
The Knowledge Engineering Review
Abstract
This paper reviews many of the very varied concepts of uncertainty used in AI. Because of their great popularity and generality, so-called "parallel certainty inference" techniques are prominently in the foreground. We illustrate and comment in detail on three of these techniques: Bayes' theory (section 2); Dempster-Shafer theory (section 3); and Cohen's model of endorsements (section 4), and give an account of the debate that has arisen around each of them. Techniques of a different kind (such as Zadeh's fuzzy-set and fuzzy-logic theory, and the use of non-standard logics and methods that manage uncertainty without explicitly dealing with it) may be seen in the background (section 5). The discussion of technicalities is accompanied by a historical and philosophical excursion on the nature and the use of uncertainty (section 1), and by a brief discussion of the problem of choosing an adequate AI approach to the treatment of uncertainty (section 6). The aim of the paper is to highlight the complex nature of uncertainty and to argue for an open-minded attitude towards its representation and use. In this spirit the pros and cons of uncertainty treatment techniques are presented in order to reflect the various uncertainty types. A guide to the literature in the field, and an extensive bibliography, are appended.
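Of the "parallel certainty inference" techniques this survey covers, Dempster-Shafer theory has a compact core that a sketch can make concrete: Dempster's rule of combination pools two mass functions and renormalizes away their conflict. The two evidence sources and their masses below are hypothetical, chosen only to illustrate the rule.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination.

    m1, m2: mass functions mapping frozenset focal elements to masses.
    Intersecting focal elements pool their mass products; mass assigned
    to the empty intersection is 'conflict' and is renormalized away.
    """
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict  # assumes the sources are not totally conflicting
    return {s: w / norm for s, w in combined.items()}

# Hypothetical example: two sources weigh suspects in the frame {A, B}.
m1 = {frozenset('A'): 0.6, frozenset('AB'): 0.4}   # source 1 leans to A
m2 = {frozenset('B'): 0.5, frozenset('AB'): 0.5}   # source 2 leans to B
print(dempster_combine(m1, m2))
```

The renormalization step is exactly what the debate recounted in section 3 of the paper tends to focus on: with highly conflicting sources, dividing by a small 1 - conflict can produce counterintuitive conclusions.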
Obituary: Ray Solomonoff, Founding Father of Algorithmic Information Theory