Results 1–2 of 2
An Invariant Bayesian Model Selection Principle for Gaussian Data, 2004
Abstract

Cited by 1 (1 self)
We develop a code length principle which is invariant to the choice of parameterization of the model distributions. An invariant approximation formula for easy computation of the marginal distribution is provided for Gaussian likelihood models. We provide invariant estimators of the model parameters and formulate conditions under which these estimators are essentially a posteriori unbiased for Gaussian models. An upper bound on the coarseness of discretization of the model parameters is deduced. We introduce a discrimination measure between probability distributions and use it to construct probability distributions on model classes. The total code length is shown to be closely related to the NML code length of Rissanen when choosing the Jeffreys prior distribution on the model parameters together with a uniform prior distribution on the model classes. Our model selection principle is applied to a Gaussian estimation problem for data in a wavelet representation, and its performance is tested and compared to alternative wavelet-based estimation methods in numerical experiments.
An Invariant Bayesian Model Selection Principle for Gaussian Data in a Sparse Representation
Abstract
We develop a code length principle which is invariant to the choice of parameterization of the model distributions; that is, the code length remains the same under smooth transformations of the likelihood parameters. An invariant approximation formula for easy computation of the marginal distribution is provided for Gaussian likelihood models. We provide invariant estimators of the model parameters and formulate conditions under which these estimators are essentially a posteriori unbiased for Gaussian models. An upper bound on the coarseness of discretization of the model parameters is deduced. We introduce a discrimination measure between probability distributions, use it to construct probability distributions on model classes, and show how this may induce an additional code length term (k/4) log₂ k for a k-parameter model. The total code length is shown to be closely related to the NML code length of Rissanen when choosing the Jeffreys prior distribution on the model parameters together with a uniform prior distribution on the model classes. Our model selection principle is applied to a Gaussian estimation problem for data in a wavelet representation, and its performance is tested and compared to alternative wavelet-based estimation methods in numerical experiments.

Index Terms — compression, denoising, generalized Gaussian distribution, invariance, Laplace approximation, MDL, MML,
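For intuition, the trade-off such a code-length-based selection formalizes can be sketched with a generic two-part MDL criterion — not the paper's invariant/NML construction. The sketch below keeps the k largest-magnitude coefficients of a wavelet-like representation as signal, codes the residual under an assumed known noise variance, and charges (1/2) log n nats per retained parameter; the data, the known-variance assumption, and all function names are illustrative.

```python
import math

def code_length(y, k, sigma2):
    """Two-part code length (in nats, additive constants dropped) for a model
    that keeps the k largest-magnitude coefficients of y as "signal" and codes
    the remainder as N(0, sigma2) noise: residual cost RSS / (2 * sigma2)
    plus a (1/2) log n coding cost per retained parameter.
    A generic MDL sketch, not the paper's invariant code length."""
    n = len(y)
    kept = sorted(y, key=abs, reverse=True)[:k]
    rss = sum(v * v for v in y) - sum(v * v for v in kept)
    return rss / (2.0 * sigma2) + 0.5 * k * math.log(n)

def select_k(y, sigma2):
    """Model order k minimizing the total code length."""
    return min(range(len(y) + 1), key=lambda k: code_length(y, k, sigma2))

# Toy data: three large "signal" coefficients plus small noise (sigma = 0.1).
coeffs = [5.0, -4.2, 3.8, 0.1, -0.05, 0.08, -0.12, 0.02]
best_k = select_k(coeffs, sigma2=0.01)   # -> 3
```

Keeping a fourth coefficient would reduce the residual cost by less than the (1/2) log n parameter charge, so the criterion stops at the three genuinely large coefficients; an invariant formulation instead makes this balance independent of how the parameters are expressed.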