Results 1–5 of 5
Inference in Hybrid Bayesian Networks with Mixtures of Truncated Exponentials, 2003
Abstract

Cited by 16 (2 self)
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for solving hybrid Bayesian networks. Any probability density function can be approximated with an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree.
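The closed-form marginalization property claimed above can be illustrated with a small sketch (purely illustrative, not from the paper): a one-dimensional MTE potential of the form f(x) = a0 + Σ ai·exp(bi·x) on an interval integrates analytically, with no numerical quadrature. The function name and example coefficients here are hypothetical.

```python
import math

# Illustrative sketch: a one-dimensional MTE potential
# f(x) = a0 + sum_i a_i * exp(b_i * x) on the interval [low, high].
def mte_integral(a0, coeffs, low, high):
    """Closed-form integral of an MTE potential over [low, high]."""
    total = a0 * (high - low)
    for a_i, b_i in coeffs:
        # antiderivative of a_i * exp(b_i * x) is (a_i / b_i) * exp(b_i * x)
        total += (a_i / b_i) * (math.exp(b_i * high) - math.exp(b_i * low))
    return total

# Example: f(x) = 1 + 2*exp(-x) integrated over [0, 1]
area = mte_integral(1.0, [(2.0, -1.0)], 0.0, 1.0)  # = 3 - 2/e
```

Because this integral is exact, marginalizing out a variable never forces a fallback to sampling or discretization, which is what makes exact Shenoy-Shafer propagation possible.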
Conditional Density Estimation with Class Probability Estimators
Abstract

Cited by 2 (2 self)
Many regression schemes deliver a point estimate only, but often it is useful or even essential to quantify the uncertainty inherent in a prediction. If a conditional density estimate is available, then prediction intervals can be derived from it. In this paper we compare three techniques for computing conditional density estimates using a class probability estimator, where this estimator is applied to the discretized target variable and used to derive instance weights for an underlying univariate density estimator; this yields a conditional density estimate. The three density estimators we compare are: a histogram estimator that has been used previously in this context, a normal density estimator, and a kernel estimator. In our experiments, the latter two deliver better performance, both in terms of cross-validated log-likelihood and in terms of quality of the resulting prediction intervals. The empirical coverage of the intervals is close to the desired confidence level in most cases. We also include results for point estimation, as well as a comparison to Gaussian process regression and nonparametric quantile estimation.
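The general recipe described here can be sketched as follows (a simplified illustration, not the authors' implementation): discretize the target y into bins, obtain class (bin) probabilities at the query point from some probability estimator, weight each training instance by the predicted probability of its bin, and fit a weighted Gaussian kernel density over y. The names `conditional_density` and `prob_estimator` are assumptions for this sketch.

```python
import math

def gaussian_kernel(u):
    """Standard normal kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def conditional_density(x_query, data, n_bins, bandwidth, prob_estimator):
    """Return p(y | x_query) as a callable, via instance-weighted KDE.

    data: list of (x, y) pairs; prob_estimator(x) returns a list of
    n_bins class probabilities for the discretized target.
    """
    ys = [y for _, y in data]
    lo, hi = min(ys), max(ys)
    width = (hi - lo) / n_bins or 1.0  # guard against a degenerate range
    def bin_of(y):
        return min(int((y - lo) / width), n_bins - 1)
    probs = prob_estimator(x_query)              # class probabilities at query
    weights = [probs[bin_of(y)] for _, y in data]  # weight = prob of own bin
    total = sum(weights)                         # normalizer (assumed > 0 here)
    def density(y):
        return sum(w * gaussian_kernel((y - yi) / bandwidth)
                   for w, (_, yi) in zip(weights, data)) / (total * bandwidth)
    return density
```

With a kernel estimator as the base density, the result is a weighted mixture of kernels centered on the training targets; swapping in a histogram or a single normal fit over the weighted sample gives the other two variants the paper compares.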
Inference in Hybrid Bayesian Networks with Mixtures of Truncated Exponentials
Abstract

Cited by 1 (0 self)
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for solving hybrid Bayesian networks. Any probability density function (PDF) can be approximated with an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree. This paper presents MTE potentials that approximate an arbitrary normal PDF with any mean and a positive variance. The properties of these MTE potentials are presented, along with examples that demonstrate their use in solving hybrid Bayesian networks. Assuming that the joint density exists, MTE potentials can be used for inference in hybrid Bayesian networks that do not fit the restrictive assumptions of the conditional linear Gaussian (CLG) model, such as networks containing discrete nodes with continuous parents.
Key words: hybrid Bayesian networks, mixtures of truncated exponentials, Shenoy-Shafer architecture, conditional linear Gaussian models.
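A crude sketch of the idea of approximating a normal PDF with an MTE potential (the coefficients below are NOT the paper's fitted values, just an illustration): each half-line piece on [-3, 3] takes the MTE form a·exp(b·z), matched to the true standard normal density at z = 0 and |z| = 3, and a location-scale transform then covers any mean and positive variance.

```python
import math

# Illustrative only -- not the paper's coefficients. One exponential term per
# piece: matching exp(-z^2/2) at z = 0 and z = 3 gives b = 4.5 / 3 = 1.5.
def std_normal_mte(z):
    """Crude two-piece MTE approximation of the standard normal PDF."""
    c = 1.0 / math.sqrt(2 * math.pi)
    if -3.0 <= z <= 3.0:
        # exp(1.5*z) on the left half, exp(-1.5*z) on the right half
        return c * math.exp(-1.5 * abs(z))
    return 0.0  # truncated to zero outside [-3, 3]

# Location-scale transform: an MTE in z remains an MTE in x, since
# exp(b*(x - mu)/sigma) = exp(-b*mu/sigma) * exp((b/sigma)*x).
def normal_mte(x, mu, sigma):
    """MTE-style approximation of N(mu, sigma^2) at x."""
    return std_normal_mte((x - mu) / sigma) / sigma
```

The paper's actual approximations use more pieces and carefully fitted coefficients; the point of the sketch is only that piecewise exponentials can track the Gaussian shape while keeping closed-form marginals.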
Online Estimation of Discrete Densities using Classifier Chains
Abstract
We propose an approach to estimate a discrete joint density online, that is, the algorithm is only provided the current example, its current estimate, and a limited amount of memory. To design an online estimator for discrete densities, we use classifier chains to model dependencies among features. Each classifier in the chain estimates the probability of one particular feature. Because a single chain may not provide a reliable estimate, we also consider ensembles of classifier chains. Our experiments on synthetic data show that the approach is feasible and the estimated densities approach the true, known distribution with increasing amounts of data.
Online Estimation of Discrete Densities
Abstract
We address the problem of estimating a discrete joint density online, that is, the algorithm is only provided the current example and its current estimate. The proposed online estimator of discrete densities, EDDO (Estimation of Discrete Densities Online), uses classifier chains to model dependencies among features. Each classifier in the chain estimates the probability of one particular feature. Because a single chain may not provide a reliable estimate, we also consider ensembles of classifier chains and ensembles of weighted classifier chains. For all density estimators, we provide consistency proofs and propose algorithms to perform certain inference tasks. The empirical evaluation of the estimators is conducted in several experiments and on data sets of up to several million instances: We compare them to density estimates computed from Bayesian structure learners, evaluate them under the influence of noise, measure their ability to deal with concept drift, and measure the runtime performance. Our experiments demonstrate that, even though designed to work online, EDDO delivers estimators of competitive accuracy compared to batch Bayesian structure learners and batch variants of EDDO.
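The chain factorization behind this family of estimators can be sketched minimally (this is not the EDDO implementation): write the joint as p(x1)·p(x2|x1)·…·p(xk|x1,…,x(k-1)) and estimate each factor online; a smoothed conditional frequency table stands in here for the chain's probabilistic classifier. The class name and parameters are assumptions for this sketch.

```python
from collections import defaultdict

# Minimal chain-factorization sketch: each position i keeps counts of its
# value conditioned on the prefix x[:i], updated one example at a time.
class ChainDensityEstimator:
    def __init__(self, n_features, n_values=2, alpha=1.0):
        self.n = n_features
        self.n_values = n_values              # values per feature (binary here)
        self.alpha = alpha                    # Laplace smoothing
        self.counts = [defaultdict(int) for _ in range(n_features)]
        self.totals = [defaultdict(int) for _ in range(n_features)]

    def update(self, x):
        """Online update from a single example (tuple of ints)."""
        for i in range(self.n):
            prefix = x[:i]
            self.counts[i][(prefix, x[i])] += 1
            self.totals[i][prefix] += 1

    def prob(self, x):
        """Estimated joint probability of x under the chain factorization."""
        p = 1.0
        for i in range(self.n):
            prefix = x[:i]
            num = self.counts[i][(prefix, x[i])] + self.alpha
            den = self.totals[i][prefix] + self.alpha * self.n_values
            p *= num / den
        return p
```

Each conditional sums to one over its feature's values, so the chain always yields a proper joint distribution; ensembles of such chains (with different feature orders, as in the paper) average several factorizations to reduce the sensitivity to any single ordering.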