Results 1–5 of 5
Divergence measures based on the Shannon entropy
IEEE Transactions on Information Theory, 1991
Abstract

Cited by 425 (0 self)
A new class of information-theoretic divergence measures based on the Shannon entropy is introduced. Unlike the well-known Kullback divergences, the new measures do not require the condition of absolute continuity to be satisfied by the probability distributions involved. More importantly, their close relationships with the variational distance and the probability of misclassification error are established in terms of bounds. These bounds are crucial in many applications of divergence measures. The new measures are also well characterized by the properties of nonnegativity, finiteness, semiboundedness, and boundedness. Index Terms: Divergence, dissimilarity measure, discrimination information, entropy, probability of error bounds.
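A minimal sketch of one member of this class of entropy-based divergences, the Jensen–Shannon divergence: it is built from Shannon entropies of the two distributions and their midpoint, stays finite without any absolute-continuity requirement, and is bounded by 1 bit. The function names here are illustrative, not from the paper.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) in bits, skipping zero-probability terms."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jensen_shannon_divergence(p, q):
    """JS divergence: entropy of the midpoint minus the mean entropy.

    Unlike the Kullback-Leibler divergence, this remains finite even
    when one distribution assigns zero probability where the other
    does not, and it is bounded above by 1 bit.
    """
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(q)) / 2
```

For identical distributions the divergence is 0; for distributions with disjoint support it attains its maximum of 1 bit.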
The Posterior Probability of Bayes Nets with Strong Dependences
Soft Computing, 1999
Abstract

Cited by 14 (1 self)
Stochastic independence is an idealized relationship located at one end of a continuum of values measuring degrees of dependence. Modeling real-world systems, we are often not interested in the distinction between exact independence and any degree of dependence, but between weak, ignorable dependence and strong, substantial dependence. Good models map significant deviance from independence and neglect approximate independence or dependence weaker than a noise threshold. This intuition is applied to learning the structure of Bayes nets from data. We determine the conditional posterior probabilities of structures given that the degree of dependence at each of their nodes exceeds a critical noise level. Deviance from independence is measured by mutual information. Arc probabilities are determined by whether the amount of mutual information the neighbors contribute to a node is greater than a critical minimum deviance from independence. A χ² approximation for the probability density function of mutual info...
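The deviance measure used in this abstract can be made concrete: mutual information of a discrete joint distribution, computed from a joint probability table. This is a minimal illustration of the quantity, not code from the paper.

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table.

    `joint[i][j]` is P(X=i, Y=j); marginals are obtained by summing
    rows and columns. I(X;Y) = 0 iff X and Y are independent.
    """
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi
```

An independent joint table yields 0 bits; a perfectly dependent binary pair yields 1 bit, so a noise threshold on this value separates ignorable from substantial dependence.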
An entropy-based learning algorithm of Bayesian conditional trees
Abstract
This article offers a modification of Chow and Liu’s learning algorithm in the context of handwritten digit recognition. The modified algorithm directs the user to group digits into several classes consisting of digits that are hard to distinguish, and then constructs an optimal conditional tree representation for each class of digits instead of for each single digit, as done by Chow and Liu (1968). Advantages and extensions of the new method are discussed. Related works of Wong and Wang (1977) and Wong and Poon (1989), which offer a different entropy-based learning algorithm, are shown to rest on inappropriate assumptions.
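The core step of Chow and Liu's algorithm, which this paper modifies, selects a maximum-weight spanning tree over pairwise mutual information. A greedy Prim-style sketch under assumed conventions (variables indexed 0..n-1, `mi` mapping `frozenset({i, j})` to I(X_i; X_j)):

```python
def chow_liu_tree(n_vars, mi):
    """Chow-Liu: maximum-weight spanning tree over pairwise mutual information.

    Grows the tree greedily from variable 0, always adding the edge of
    highest mutual information that connects a new variable. The edge set
    defines the optimal tree-structured approximation of the joint.
    """
    in_tree = {0}
    edges = []
    while len(in_tree) < n_vars:
        best = max(
            ((i, j) for i in in_tree for j in range(n_vars) if j not in in_tree),
            key=lambda e: mi[frozenset(e)],
        )
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

The paper's modification runs this construction once per class of hard-to-distinguish digits rather than once per digit.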
Estimation of Probability Density Function by Dependence Tree Methods for Pattern Recognition Systems
Abstract
Estimation of a probability density function is inevitable in some engineering systems, especially in statistical pattern recognition systems. One category of methods applied for estimation is dependence tree methods, which can be classified under nonparametric estimation approaches. In this paper, we survey important dependence tree methods. To do this, we proceed in a mathematically rigorous manner and, in this respect, many mathematical details are added to what is available in the literature. Furthermore, the connections among the studied dependence tree methods are addressed completely, which can be a source of valuable information for future work in this area.
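A dependence-tree estimate factors the joint density along the tree: p(x) = p(x_root) · ∏ p(x_i | x_parent(i)). A hedged sketch of evaluating that factorization, where `marginal` and `conditional` are hypothetical lookup tables learned elsewhere:

```python
def tree_density(x, root, parents, marginal, conditional):
    """Evaluate p(x) = p(x_root) * prod over non-root i of p(x_i | x_parent(i)).

    `x` is a tuple of variable values, `parents` maps each non-root
    variable to its parent in the dependence tree, `marginal[v]` is the
    root marginal, and `conditional[i][(xi, xp)]` is an edge conditional.
    All tables are illustrative placeholders, not a fixed API.
    """
    p = marginal[x[root]]
    for i, parent in parents.items():
        p *= conditional[i][(x[i], x[parent])]
    return p
```

With a two-variable tree (root 0, child 1), the evaluation reduces to p(x_0) · p(x_1 | x_0).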