Results 11 - 20 of 59
MDL Convergence Speed for Bernoulli Sequences
 Statistics and Computing
, 2006
"... The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying ..."
Cited by 8 (3 self)
convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is even the case if the model class contains only Bernoulli distributions. We
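The setup these abstracts describe can be sketched with a small simulation: a discrete (here finite) class of Bernoulli models, a uniform prior, and the cumulative square loss of the two-part MDL estimator versus the Bayes mixture. The model grid, prior, and true bias below are assumptions chosen purely for illustration, not values taken from the papers.

```python
import math
import random

# Finite class of Bernoulli models with a uniform prior w(t); we track the
# cumulative square loss of two next-bit predictors: the two-part MDL
# estimator and the Bayes mixture. All concrete values are illustrative.
random.seed(0)
models = [i / 10 for i in range(1, 10)]         # candidate biases 0.1 .. 0.9
prior = {t: 1.0 / len(models) for t in models}  # uniform prior weights w(t)
true_theta = 0.7
n = 2000

loglik = {t: 0.0 for t in models}               # log P(x_<k | t)
sq_loss_mdl = 0.0
sq_loss_bayes = 0.0

for _ in range(n):
    # Two-part MDL: pick the model minimising the code length
    # -log w(t) - log P(x_<k | t), i.e. maximising the sum of the logs.
    mdl_t = max(models, key=lambda t: math.log(prior[t]) + loglik[t])

    # Bayes mixture: posterior-weighted average prediction.
    # (Log-sum-exp shift keeps exp() from underflowing for large k.)
    m = max(loglik.values())
    post = [prior[t] * math.exp(loglik[t] - m) for t in models]
    z = sum(post)
    bayes_p = sum(p * t for p, t in zip(post, models)) / z

    sq_loss_mdl += (mdl_t - true_theta) ** 2
    sq_loss_bayes += (bayes_p - true_theta) ** 2

    x = 1 if random.random() < true_theta else 0  # observe the next bit
    for t in models:
        loglik[t] += math.log(t if x else 1.0 - t)

print(f"cumulative square loss, MDL:   {sq_loss_mdl:.3f}")
print(f"cumulative square loss, Bayes: {sq_loss_bayes:.3f}")
```

Both cumulative losses stay bounded as n grows, consistent with point (a) of the abstract; how the two totals compare in general is exactly what the papers' exponential-versus-linear bounds concern.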
Convergence of Discrete MDL for Sequential Prediction
, 2004
"... We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models. This applies in particular to the important case of universal sequence prediction, where the model class corresponds to ..."
characterizing the convergence speed for MDL predictions is exponentially larger as compared to Bayes mixtures. We observe that there are at least three different ways of using MDL for prediction. One of these has worse prediction properties, for which predictions only converge if the MDL estimator stabilizes
Efficient approximations for the marginal likelihood of Bayesian networks with hidden variables
 Machine Learning
, 1997
"... We discuss Bayesian methods for learning Bayesian networks when data sets are incomplete. In particular, we examine asymptotic approximations for the marginal likelihood of incomplete data given a Bayesian network. We consider the Laplace approximation and the less accurate but more efficient BIC/MD ..."
Cited by 194 (12 self)
is the most accurate. In experiments using synthetic data generated from discrete naive-Bayes models having a hidden root node, we find that (1) the BIC/MDL measure is the least accurate, having a bias in favor of simple models, and (2) the Draper and CS measures are the most accurate.
Recent Results in Universal and Non-Universal Induction
, 2006
"... We present and relate recent results in prediction based on countable classes of either probability (semi)distributions or base predictors. Learning by Bayes, MDL, and stochastic model selection will be considered as ..."
On the Convergence Speed of MDL Predictions for Bernoulli Sequences
, 2004
"... We consider the Minimum Description Length principle for online sequence prediction. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is bounded, implying convergence with probability one, and (b) it a ..."
Cited by 9 (8 self)
) it additionally specifies a rate of convergence. Generally, for MDL only exponential loss bounds hold, as opposed to the linear bounds for a Bayes mixture. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable
Fundamental Research for Knowledge Federation
"... We present and relate recent results in prediction based on countable classes of either probability (semi)distributions or base predictors. Learning by Bayes, MDL, and stochastic model selection will be considered as instances of the first category. In particular, we will show how analog assertions ..."
Note: MDL = Minimum Detection Limit
Acute Toxicity in Animals
"... Assessment (OEHHA) is required to develop guidelines for conducting health risk assessments under the Air Toxics Hot Spots Program (Health and Safety Code Section 44360 (b) (2)). • Consideration of possible differential effects on the health of infants, children and other sensitive subpopulations is ..."
human sperm cells and increase ovarian atrophy in mice. Table 1. 1,3-Butadiene Air Sampling in the San Francisco Bay Area (BAAQMD, 2008)
Learning Optimal Augmented Bayes Networks
"... Naive Bayes is a simple Bayesian classifier with strong independence assumptions among the attributes. This classifier, despite its strong independence assumptions, often performs well in practice. It is believed that relaxing the independence assumptions of a naive Bayes classifier may improve the ..."
Cited by 3 (0 self)
the possibilities of adding augmenting arcs between attributes of a Naive Bayes classifier. Friedman, Geiger and Goldszmidt define the TAN structure in which the augmenting arcs form a tree on the attributes, and present a polynomial time algorithm that learns an optimal TAN with respect to MDL score. Keogh