## Univariate Polynomial Inference by Monte Carlo Message Length Approximation (2002)


### Download Links

- [www.csse.monash.edu.au]
- [www.cs.monash.edu.au]
- [www.csse.monash.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Int. Conf. Machine Learning

Citations: 11 (5 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Fitzgibbon02univariatepolynomial,
  author    = {Leigh J. Fitzgibbon and David L. Dowe and Lloyd Allison},
  title     = {Univariate Polynomial Inference by Monte Carlo Message Length Approximation},
  booktitle = {Int. Conf. Machine Learning},
  year      = {2002},
  pages     = {147--154},
  publisher = {Morgan Kaufmann}
}
```


### Abstract

We apply the Message from Monte Carlo (MMC) algorithm to the inference of univariate polynomials. MMC is an algorithm for point estimation from a Bayesian posterior sample.

### Citations

10328 | The Nature of Statistical Learning Theory
- Vapnik
- 1995
Citation Context: ...istributed noise by [8]. It was compared against Generalised Cross-Validation (GCV), Finite Prediction Error (FPE), Schwartz’s Criterion (SCH), and VC Dimension and Structural Risk Minimisation (SRM) [7] over a set of polynomial and nonpolynomial target functions, and for a range of noise levels. The criterion used to judge the performance of the different methods was Squared Prediction Error (SPE). ...

978 | Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination. Biometrika
- Green
- 1995
Citation Context: ...conjugacy): IGamma(0.0001, 0.0001), and a geometric prior on the order of the model: Geometric(0.9). 4. Polynomial Sampler To sample from the posterior we use Green’s reversible jump MCMC methodology [4]. Let kmax denote the upper bound on the order of the polynomials under consideration. The sampler consists of three types of moves: birth to a higher dimension, death to a lower dimension and move in...
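The birth/death/move scheme described in this context can be illustrated with a move-proposal skeleton. This is a sketch only (function and variable names are our own, move probabilities are assumed equal): the reversible-jump acceptance ratio, which involves prior, likelihood and Jacobian terms, is deliberately omitted.

```python
import random

def rjmcmc_step(order, k_max):
    """Propose one model-order move of a reversible-jump sampler:
    birth (order + 1), death (order - 1), or a within-dimension move.
    Boundary proposals (birth at k_max, death at 0) degrade to 'move'."""
    u = random.random()
    if u < 1 / 3:                                   # birth move
        return (order + 1, "birth") if order < k_max else (order, "move")
    if u < 2 / 3:                                   # death move
        return (order - 1, "death") if order > 0 else (order, "move")
    return order, "move"                            # within-dimension update
```

In a full sampler each proposal would be accepted or rejected with the appropriate Metropolis-Hastings-Green ratio; here only the move selection and the bound kmax are shown.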

661 | Markov Chain Monte Carlo in Practice
- Richardson, Spiegelhalter, et al.
- 1996
Citation Context: ...rom the posterior distribution and a function for computing the Kullback-Leibler distance. It can therefore be viewed as a posterior sampling postprocessing algorithm. Markov Chain Monte Carlo (MCMC) [3] methods can be used to do the sampling. 3. Polynomial Model We now describe the polynomial model and priors that we use in the experimental evaluation. The likelihood function for the n observations ...
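The likelihood function referred to above is truncated in this excerpt; for Gaussian noise it is the standard regression log-likelihood. A minimal sketch under that assumption, using a plain power-basis polynomial rather than the paper's orthonormal parameterisation:

```python
import numpy as np

def poly_loglik(coeffs, x, y, sigma):
    """Gaussian log-likelihood of (x, y) under y ~ N(poly(x), sigma^2).
    `coeffs` is ordered low power -> high power; this is a generic
    sketch, not the paper's orthonormal parameterisation."""
    resid = y - np.polyval(coeffs[::-1], x)  # polyval wants high -> low order
    n = len(y)
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.sum(resid**2) / sigma**2)
```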

379 | Bayesian inference in econometric models using Monte Carlo integration. Econometrica 57, 1317–1339. MR1035115
- Geweke
- 1989
Citation Context: ...wever, we are sampling from the posterior because we expect the optimal uncertainty region to consist of models with high posterior probability. We use the posterior as an importance sampling density [2] and we weight the sample so that it is distributed as if being generated from the prior. Converting the MMLD message length expression into its equivalent (unnormalised) probability (by taking e−Mes...
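Reweighting a posterior sample so that it behaves as a sample from the prior amounts to importance weights proportional to prior/posterior at each sampled model. A minimal sketch, assuming log densities (up to a constant) are available for each sample point:

```python
import numpy as np

def prior_weights(log_prior, log_post):
    """Importance weights w_i proportional to prior(theta_i) / posterior(theta_i),
    normalised to sum to 1, so the weighted posterior sample is
    distributed as if generated from the prior."""
    log_w = np.asarray(log_prior, float) - np.asarray(log_post, float)
    log_w -= log_w.max()          # stabilise the exponential
    w = np.exp(log_w)
    return w / w.sum()
```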

327 | An information measure for classification
- Wallace, Boulton
- 1968
Citation Context: ... message length, which gives an objective means to compare hypotheses. The message length is a quantification of the trade-off between model complexity and goodness of fit that was first described by [10]. Since their seminal paper various approximations and derivations have appeared under the veil of Minimum Message Length (MML) [12, 11] or Minimum Description Length (MDL) [6]. MML87 [12] has recentl...

110 | Minimum message length and Kolmogorov complexity
- Wallace, Dowe
- 1999
Citation Context: ...en model complexity and goodness of fit that was first described by [10]. Since their seminal paper various approximations and derivations have appeared under the veil of Minimum Message Length (MML) [12, 11] or Minimum Description Length (MDL) [6]. MML87 [12] has recently been applied to the problem of model selection in univariate polynomial regression with normally distributed noise by [8]. It was comp...

61 | Hypothesis selection and testing by the MDL principle
- Rissanen
- 1999
Citation Context: ...was first described by [10]. Since their seminal paper various approximations and derivations have appeared under the veil of Minimum Message Length (MML) [12, 11] or Minimum Description Length (MDL) [6]. MML87 [12] has recently been applied to the problem of model selection in univariate polynomial regression with normally distributed noise by [8]. It was compared against Generalised Cross-Validatio...

7 | Estimation and inference by compact encoding (with discussion)
- Wallace, Freeman
- 1987
Citation Context: ... between models in the region and the associated point estimate is small (using Wallace’s FSMML Boundary Rule). We compare the MMC algorithm’s point estimation performance with Minimum Message Length [12] and Structural Risk Minimisation on a set of ten polynomial and nonpolynomial functions with Gaussian noise. The orthonormal polynomial parameters are sampled using reversible jump Markov chain Monte...

5 | Message from Monte Carlo
- Fitzgibbon, Dowe, et al.
- 2002
Citation Context: ... behind. The other methods - GCV, FPE and SCH - had squared prediction error in the order of 100 times that of MML and SRM. The Message from Monte Carlo (MMC) algorithm has been recently described in [1]. It uses the posterior as an importance sampling density to implicitly define uncertainty regions, for which message lengths can be approximated using Monte Carlo methods. Here, we apply MMC to univa...

5 | A Note on the Comparison of Polynomial Selection Methods
- Viswanathan, Wallace
- 1999
Citation Context: ...h (MML) [12, 11] or Minimum Description Length (MDL) [6]. MML87 [12] has recently been applied to the problem of model selection in univariate polynomial regression with normally distributed noise by [8]. It was compared against Generalised Cross-Validation (GCV), Finite Prediction Error (FPE), Schwartz’s Criterion (SCH), and VC Dimension and Structural Risk Minimisation (SRM) [7] over a set of polyn...

4 | Improved approximations in MML. Honours thesis
- Lam
- 2000
Citation Context: ...elements from the parameter space (i.e., R ⊆ Θ). The parameter space can be a union of subspaces of different dimension. The MMC algorithm uses Dowe’s MMLD message length approximation - described in [5] - to calculate message lengths. For some arbitrary set of models from the parameter space, R, MMLD approximates the length of the first part of the message as the negative log integral of the prior o...
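Given a prior-weighted sample, the negative log integral of the prior over a region R can be estimated by the total weight of the sample points that fall inside R (the Monte Carlo estimate of the prior mass of R). A hypothetical sketch of that estimate, with our own names:

```python
import numpy as np

def first_part_length(in_region, weights):
    """Monte Carlo estimate of -log(integral of the prior over R):
    with weights that make the sample prior-distributed, the prior
    mass of R is approximately the total weight of points inside R."""
    mask = np.asarray(in_region, dtype=bool)
    mass = np.sum(np.asarray(weights, dtype=float)[mask])
    return -np.log(mass)
```

This corresponds only to the first part of the MMLD message; the second part (the expected negative log-likelihood over R) would be estimated from the same sample.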

2 | PAKDD-98 Tutorial: Data Mining
- Wallace
- 1998