Results 1–10 of 17
Bounds for entropy and divergence for distributions over a two-element set
 J. Ineq. Pure & Appl. Math
Clustering with model-level constraints
 SDM Conference
, 2005
Abstract

Cited by 7 (1 self)
In this paper we describe a systematic approach to uncovering multiple clusterings underlying a dataset. In contrast to previous approaches, the proposed method uses information about structures that are not desired and consequently is very useful in an exploratory data-mining setting. Specifically, the problem is formulated as constrained model-based clustering where the constraints are placed at the model level. Two variants of an EM algorithm for this constrained model are derived. The performance of both variants is compared against a state-of-the-art information bottleneck algorithm on both synthetic and real datasets.
Entropy and Equilibrium via Games of Complexity
Abstract

Cited by 5 (2 self)
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
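The entropy families named in the abstract are easy to state concretely. The sketch below illustrates only the standard textbook definitions (not the paper's game-theoretic construction) and checks numerically that both families approach Shannon entropy in the classical limit:

```python
import math

def shannon(p):
    """Shannon entropy in nats: H = -sum p_i ln p_i."""
    return -sum(x * math.log(x) for x in p if x > 0)

def tsallis(p, q):
    """Tsallis q-entropy: S_q = (1 - sum p_i^q) / (q - 1); S_q -> H as q -> 1."""
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

def kaniadakis(p, k):
    """Kaniadakis kappa-entropy: S_k = -sum (p^(1+k) - p^(1-k)) / (2k); S_k -> H as k -> 0."""
    return -sum((x ** (1 + k) - x ** (1 - k)) / (2 * k) for x in p)

p = [0.5, 0.25, 0.25]
print(shannon(p))            # about 1.0397 nats
print(tsallis(p, 1.001))     # close to shannon(p), since q is near 1
print(kaniadakis(p, 0.001))  # close to shannon(p), since kappa is near 0
```

Both parametrized entropies reduce to the Shannon value as q → 1 and κ → 0, which is the sense in which the two-parameter family mentioned above unifies them.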
Quantifying information leakage in process calculi
 Proceedings of ICALP’06. Volume 4052 of Lecture Notes in Computer Science
, 2006
Abstract

Cited by 5 (1 self)
Building on simple information-theoretic concepts, we study two quantitative models of information leakage in the pi-calculus. The first model presupposes an attacker with essentially unlimited computational power. The resulting notion of absolute leakage, measured in bits, is in agreement with secrecy as defined by Abadi and Gordon: a process has an absolute leakage of zero precisely when it satisfies secrecy. The second model assumes a restricted observation scenario, inspired by the testing equivalence framework, where the attacker can only conduct repeated success-or-failure experiments on processes. Moreover, each experiment has a cost in terms of communication effort. The resulting notion of leakage rate, measured in bits per action, is in agreement with the first model: the maximum amount of information that can be extracted by repeated experiments coincides with the absolute leakage A of the process. Moreover, the overall extraction cost is at least A/R, where R is the rate of the process. The compositionality properties of the two models are also investigated.
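For intuition about the information-theoretic core (a generic quantitative-information-flow sketch, not the paper's pi-calculus model): for a deterministic system observed under a uniformly distributed secret, absolute leakage in bits reduces to the entropy of the induced observable.

```python
import math
from collections import Counter

def leakage_bits(secrets, observe):
    """Leakage in bits of a deterministic channel under a uniform prior.
    Since the observable O is a function of the secret S, the mutual
    information I(S; O) equals the entropy H(O) of the output."""
    counts = Counter(observe(s) for s in secrets)
    n = len(secrets)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniformly random 3-bit secret; the attacker sees only its parity.
secrets = range(8)
print(leakage_bits(secrets, lambda s: s % 2))  # 1.0 bit leaked
print(leakage_bits(secrets, lambda s: s))      # 3.0 bits: the whole secret
print(leakage_bits(secrets, lambda s: 0))      # 0.0 bits: constant output
```

A leakage of zero for the constant observer matches the secrecy condition described above: nothing is revealed precisely when the observable carries no information about the secret.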
Rényi Extrapolation of Shannon Entropy
 Open Sys., Inf. Dyn. 2003
Abstract

Cited by 5 (0 self)
Relations between Shannon entropy and Rényi entropies of integer order are discussed. For any N-point discrete probability distribution for which the Rényi entropies of order two and three are known, we provide a lower and an upper bound for the Shannon entropy. The average of both bounds provides an explicit extrapolation for this quantity. These results imply relations between the von Neumann entropy of a mixed quantum state, its linear entropy, and traces.
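The quantities being related can be sketched with the standard definitions (the paper's specific bounds are not reproduced here): the Rényi entropy of order q is H_q = ln(Σ p_i^q)/(1 − q), Shannon entropy is its q → 1 limit, and the family is non-increasing in q.

```python
import math

def renyi(p, q):
    """Renyi entropy of order q: H_q = ln(sum p_i^q) / (1 - q), q != 1."""
    return math.log(sum(x ** q for x in p)) / (1.0 - q)

def shannon(p):
    """Shannon entropy, the q -> 1 limit of the Renyi family."""
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.7, 0.2, 0.1]
h1, h2, h3 = shannon(p), renyi(p, 2), renyi(p, 3)
# Monotonicity in the order: H_1 >= H_2 >= H_3 for every distribution,
# which is what makes H_2 and H_3 usable as anchors for bounding H_1.
print(h1, h2, h3)
```

Given only H_2 and H_3, the paper's extrapolation estimates H_1; the monotonicity shown here is the basic structural fact behind any such bound.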
On the BahadurEfficient Testing of Uniformity by means of the Entropy
, 2007
Abstract

Cited by 4 (1 self)
This paper compares the power divergence statistics of order > 1 with the information divergence statistic in the problem of testing the uniformity of a distribution. In this problem the information divergence statistic is equivalent to the entropy statistic. Extending some previously established results about information diagrams, it is proved that the information divergence statistic in this problem is more efficient in the Bahadur sense than any power divergence statistic of order > 1. This means that the entropy provides in this sense the most efficient way of characterizing the uniformity of a distribution.
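As a concrete illustration of the statistics being compared, assuming the standard power divergence D_a(p‖u) = (m^(a−1) Σ p_i^a − 1)/(a(a−1)) between the empirical distribution p and the uniform distribution u on m cells (this sketch names nothing from the paper beyond these textbook quantities):

```python
import math

def entropy_stat(counts):
    """Entropy-based uniformity statistic: log m - H(empirical).
    Zero exactly when the empirical distribution is uniform."""
    n, m = sum(counts), len(counts)
    h = -sum((c / n) * math.log(c / n) for c in counts if c > 0)
    return math.log(m) - h

def power_divergence_stat(counts, a):
    """Power divergence of order a (> 1) from the empirical
    distribution to the uniform distribution on m cells."""
    n, m = sum(counts), len(counts)
    return (m ** (a - 1) * sum((c / n) ** a for c in counts) - 1) / (a * (a - 1))

uniform = [25, 25, 25, 25]
skewed = [70, 10, 10, 10]
print(entropy_stat(uniform), power_divergence_stat(uniform, 2.0))  # both 0
print(entropy_stat(skewed), power_divergence_stat(skewed, 2.0))    # both > 0
```

Both statistics vanish exactly at uniformity and grow as the sample deviates from it; the papers' result concerns which of them rejects non-uniformity more efficiently in the Bahadur sense.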
On the Bahadur-efficient testing of uniformity by means of the entropy
 IEEE Trans. Inform Theory
, 2008
Abstract

Cited by 4 (4 self)
This paper compares the power divergence statistics of order > 1 with the information divergence statistic in the problem of testing the uniformity of a distribution. In this problem the information divergence statistic is equivalent to the entropy statistic. Extending some previously established results about information diagrams, it is proved that the information divergence statistic in this problem is more efficient in the Bahadur sense than any power divergence statistic of order > 1. This means that the entropy provides in this sense the most efficient way of characterizing the uniformity of a distribution.
Index Terms: Bahadur efficiency, entropy, goodness-of-fit, index of coincidence, information diagram, power divergences.
Non-redundant clustering
, 2005
Abstract

Cited by 2 (0 self)
Data mining and knowledge discovery attempt to reveal concepts, patterns, relationships, and structures of interest in data. Typically, data may have many such structures. Most existing data mining techniques allow the user little say in which structure will be returned from the search. Those techniques which do allow the user control over the search typically require supervised information in the form of knowledge about a target solution. In the spirit of exploratory data mining, we consider the setting where the user does not have information about a target solution. Instead we suppose the user can provide information about solutions which are not desired. These undesired solutions may be previously obtained from data mining algorithms, or they may be known to the user a priori. The goal is then to discover novel structure in the dataset which is not redundant with respect to the known structure. Techniques should guide the search away from this known structure and towards novel, interesting structures. We describe and formally define the task of non-redundant clustering. Three different algorithmic approaches are derived for non-redundant clustering. Their performance is experimentally evaluated on data sets containing multiple clusterings. We explore how these techniques may be extended to systematically enumerate clusterings in a data set. Finally, we also investigate whether non-redundant approaches may be incorporated to enhance state-of-the-art supervised techniques.
Efficiency of entropy testing
Abstract
Recently it was shown that Shannon entropy is more Bahadur efficient than any Rényi entropy of order > 1. In this paper we shall show that the relative Bahadur efficiency between any two Rényi entropies of orders in ]0, 1] is 1 when the relative Bahadur efficiency is defined according to [1]. Despite the fact that the relative Bahadur efficiency is 1, it is shown that in a certain sense Shannon entropy is more efficient than Rényi entropy of order in ]0, 1[. This indicates that the definition of relative efficiency given in [1] does not fully capture the notion of efficiency.