Results 1–10 of 30
Minimum Message Length and Kolmogorov Complexity
The Computer Journal, 1999
Cited by 127 (29 self)
Abstract: … this paper is to describe some of the relationships among the different streams and to try to clarify some of the important differences in their assumptions and development. Other studies mentioning the relationships appear in [1, Section IV, pp. 1038–1039], [2, Sections 5.2, 5.5] and [3, p. 465].
A tutorial introduction to the minimum description length principle
In Advances in Minimum Description Length: Theory and Applications, 2005
MML clustering of multistate, Poisson, von Mises circular and Gaussian distributions
Statistics and Computing, 2000
Cited by 39 (12 self)
Abstract: Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also statistically consistent and efficient. We provide a brief overview of MML inductive inference …
Bayes not Bust! Why Simplicity is no Problem for Bayesians
2007
Cited by 22 (11 self)
Abstract: The advent of formal definitions of the simplicity of a theory has important implications for model selection. But what is the best way to define simplicity? Forster and Sober ([1994]) advocate the use of Akaike’s Information Criterion (AIC), a non-Bayesian formalisation of the notion of simplicity. This forms an important part of their wider attack on Bayesianism in the philosophy of science. We defend a Bayesian alternative: the simplicity of a theory is to be characterised in terms of Wallace’s Minimum Message Length (MML). We show that AIC is inadequate for many statistical problems where MML performs well. Whereas MML is always defined, AIC can be undefined. Whereas MML is not known ever to be statistically inconsistent, AIC can be. Even when defined and consistent, AIC performs worse than MML on small sample sizes. MML is statistically invariant under 1-to-1 reparametrisation, thus avoiding a common criticism of Bayesian approaches. We also show that MML provides answers to many of Forster’s objections to Bayesianism. Hence an important part of the attack on …
Circular Clustering Of Protein Dihedral Angles By Minimum Message Length
In Proceedings of the 1st Pacific Symposium on Biocomputing (PSB-1), 1996
Cited by 15 (11 self)
Abstract: … this paper is given in [DADH95] and is available from ftp://www.cs.monash.edu.au/www/publications/1995/TR237.ps.Z.) Section 2 introduces the MML principle and how it can be used for this circular clustering problem. The remaining sections give the secondary structure groups [KaSa83] that resulted from applying Snob to cluster our dihedral angle data.
MML mixture modelling of multistate, Poisson, von Mises circular and Gaussian distributions
In Proc. 6th Int. Workshop on Artificial Intelligence and Statistics, 1997
Cited by 11 (5 self)
Abstract: Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also consistent and efficient. We provide a brief overview of MML inductive inference (Wallace and Boulton (1968), Wallace and Freeman (1987)), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob (Wallace and Boulton (1968), Wallace (1986), Wallace and Dowe (1994)), uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components. The message length is (to within a constant) the negative logarithm of the posterior probability of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated, and permits multivariate data from Gaussian, discrete multistate, Poisson and von Mises circular distributions …
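A minimal sketch of the identity stated in this abstract: since the message length is, to within a constant, the negative log of the posterior probability, the theory minimising the message length is the theory maximising the posterior. The candidate "theories" and their prior/likelihood values below are invented toy numbers, not output from Snob.

```python
import math

def message_length_bits(prior, likelihood):
    # Two-part code: first part encodes the theory (-log2 prior),
    # second part encodes the data given the theory (-log2 likelihood).
    return -math.log2(prior) - math.log2(likelihood)

# Hypothetical mixture models with toy priors and data likelihoods.
theories = {
    "one component":    (0.6, 6e-13),
    "two components":   (0.3, 5e-10),
    "three components": (0.1, 8e-11),
}

best_mml = min(theories, key=lambda t: message_length_bits(*theories[t]))
best_map = max(theories, key=lambda t: theories[t][0] * theories[t][1])
print(best_mml, best_mml == best_map)  # the two criteria pick the same theory
```

Because minimising `-log2(prior * likelihood)` and maximising `prior * likelihood` are the same optimisation, the two winners always coincide.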
On Bayesian Estimation in Manifolds
2002
Cited by 8 (4 self)
Abstract: It is frequently stated that the maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimates of a continuous parameter θ are not invariant to arbitrary "reparametrizations" of the parameter space Θ. This report clarifies the issues surrounding this problem by pointing out the difference between coordinate invariance, which is a sine qua non for a mathematically well-defined problem, and diffeomorphism invariance, which is a substantial issue, and provides a solution. We first show that the presence of a metric structure on Θ can be used to define coordinate-invariant MAP and MMSE estimates, and we argue that this is the natural and necessary way to proceed. The estimation problem and related geometrical quantities are all defined in a manifestly coordinate-invariant way. We show that the same MAP estimate results from 'density maximization' or from using an invariantly defined delta function loss. We then discuss the choice of a metric structure on Θ. By imposing an invariance criterion natural within a Bayesian framework, we show that this choice is essentially unique. It does not necessarily correspond to a choice of coordinates. The resulting MAP estimate coincides with the minimum message length (MML) estimate, but no discretization or approximation is used in its derivation.
Bayesian Estimation Of The Von Mises Concentration Parameter
In Proceedings of the Fifteenth International Workshop on Maximum Entropy and Bayesian Methods
Cited by 7 (5 self)
Abstract: The von Mises distribution is a maximum entropy distribution. It corresponds to the distribution of the angle of a compass needle in a uniform magnetic field of direction μ, with concentration parameter κ. The concentration parameter κ is the ratio of the field strength to the temperature of thermal fluctuations. Previously, we obtained a Bayesian estimator for the von Mises distribution parameters using the information-theoretic Minimum Message Length (MML) principle. Here, we examine a variety of Bayesian estimation techniques by examining the posterior distribution in both polar and Cartesian coordinates. We compare the MML estimator with these fellow Bayesian techniques, and with a range of classical estimators. We find that the Bayesian estimators outperform the classical estimators.
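For context on the classical estimators this paper compares against, here is a hedged sketch (not the paper's MML estimator): a standard closed-form approximation to the maximum-likelihood estimate of the von Mises concentration κ from the mean resultant length R̄ of the sample angles, κ̂ ≈ R̄(2 − R̄²)/(1 − R̄²).

```python
import numpy as np

def kappa_approx(angles):
    # Mean resultant length Rbar of the unit vectors (cos a, sin a).
    rbar = np.hypot(np.mean(np.cos(angles)), np.mean(np.sin(angles)))
    # Closed-form approximation to the ML estimate of kappa.
    return rbar * (2.0 - rbar**2) / (1.0 - rbar**2)

# Simulated sample with known concentration, to sanity-check the estimator.
rng = np.random.default_rng(0)
sample = rng.vonmises(mu=0.0, kappa=4.0, size=5000)
print(kappa_approx(sample))  # lands near the true kappa of 4
```

The exact ML estimate solves I₁(κ)/I₀(κ) = R̄ (a ratio of modified Bessel functions); the rational approximation above avoids the Bessel evaluation at the cost of a few percent bias.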
Bayesian Approaches to Segmenting a Simple Time Series
1997
Cited by 7 (1 self)
Abstract: The segmentation problem arises in many applications in data mining, A.I. and statistics. In this paper, we consider segmenting simple time series. We develop two Bayesian approaches for segmenting a time series, namely the Bayes Factor approach and the Minimum Message Length (MML) approach. We perform simulations comparing these Bayesian approaches, and then perform a comparison with other classical approaches, namely AIC, MDL and BIC. We conclude that the MML criterion is the preferred criterion. We then apply the segmentation method to financial time series data.

1 Introduction

In this paper, we consider the problem of segmenting simple time series. We consider time series of the form y_{t+1} = y_t + μ_j + ε_t, where we are given N data points (y_1, …, y_N), we assume there are C + 1 segments (j ∈ {0, …, C}), and each ε_t is Gaussian with mean zero and variance σ_j². We wish to estimate the number of segments, C + 1, the segment boundaries, {v_1, …
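The model in this abstract can be simulated in a few lines: a random walk whose drift μ_j and noise level σ_j change at the segment boundaries. The boundaries and parameter values below are invented for illustration, not taken from the paper.

```python
import numpy as np

# y_{t+1} = y_t + mu_j + eps_t, with eps_t ~ N(0, sigma_j^2),
# and a different (mu_j, sigma_j) in each of the C + 1 = 3 segments.
rng = np.random.default_rng(1)

boundaries = [0, 100, 250, 400]   # segment boundaries v_j (toy values)
drifts = [0.5, -0.3, 0.1]         # mu_j for each segment j
sigmas = [1.0, 2.0, 0.5]          # sigma_j for each segment j

y = [0.0]                         # arbitrary starting value y_1
for j, (mu, sigma) in enumerate(zip(drifts, sigmas)):
    for _ in range(boundaries[j], boundaries[j + 1]):
        y.append(y[-1] + mu + rng.normal(0.0, sigma))

y = np.array(y)
print(len(y))  # 401: the starting value plus 400 increments
```

A segmentation method is then asked to recover the boundaries and the per-segment parameters from `y` alone, which is exactly the estimation problem the paper sets up.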
MML and Bayesianism: Similarities and Differences (Introduction to Minimum Encoding Inference, Part II)
1994
Cited by 6 (0 self)
Abstract: This paper continues the introduction to minimum encoding inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We examine the relationship between Bayesianism and Minimum Message Length (MML) inference. We argue that MML augments Bayesian methods by providing a sound Bayesian method for point estimation which is invariant under nonlinear transformations. We explore the issues of invariance of estimators under nonlinear transformations, the role of the Fisher information matrix in MML inference, and the apparent similarity between MML and the adoption of a Jeffreys prior. We then compare MML to an approximate method of Bayesian model class selection. Despite apparent similarities in their expressions, the properties of the two approaches can be different.