Results 1–10 of 49
Robust Feature Selection by Mutual Information Distributions
Proceedings of the 18th International Conference on Uncertainty in Artificial Intelligence (UAI 2002), 2002
Cited by 30 (6 self)
Abstract:
Mutual information is widely used in artificial intelligence, in a descriptive way, to measure the stochastic dependence of discrete random variables. In order to address questions such as the reliability of the empirical value, one must consider sample-to-population inferential approaches. This paper deals with the distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean and an analytical approximation of the variance are reported. Asymptotic approximations of the distribution are proposed. The results are applied to the problem of selecting features for incremental learning and classification of the naive Bayes classifier. A fast, newly defined method is shown to outperform the traditional approach based on empirical mutual information on a number of real data sets. Finally, a theoretical development is reported that allows one to efficiently extend the above methods to incomplete samples in an easy and effective way.
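The "traditional approach based on empirical mutual information" that this paper benchmarks against can be sketched as follows; function names and the data layout are mine, not the paper's:

```python
import math
from collections import Counter

def empirical_mi(xs, ys):
    """Plug-in (empirical) mutual information, in nats, of two equal-length
    discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def select_features(columns, labels, k):
    """Keep the k feature columns with the highest empirical MI with the
    class labels -- the baseline the paper's robust method is compared to."""
    ranked = sorted(columns, key=lambda f: empirical_mi(columns[f], labels),
                    reverse=True)
    return ranked[:k]
```

A feature that merely copies the class labels scores the maximal MI (the label entropy), while a constant feature scores exactly zero, so the ranking behaves as expected on edge cases.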
Distribution of Mutual Information from Complete And Incomplete Data
Computational Statistics and Data Analysis, 2004
Cited by 22 (2 self)
Abstract:
Mutual information is widely used, in a descriptive way, to measure the stochastic dependence of categorical random variables. In order to address questions such as the reliability of the descriptive value, one must consider sample-to-population inferential approaches. This paper deals with the posterior distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean, and analytical approximations for the variance, skewness and kurtosis, are derived. These approximations have a guaranteed accuracy level of the order O(n^-3), where n is the sample size. Leading-order approximations for the mean and the variance are derived in the case of incomplete samples. The derived analytical expressions allow the distribution of mutual information to be approximated reliably and quickly. In fact, the derived expressions can be computed with the same order of complexity needed for descriptive mutual information. This makes the distribution of mutual information a concrete alternative to descriptive mutual information in many applications which would benefit from moving to the inductive side. Some of these prospective applications are discussed, and one of them, namely feature selection, is shown to perform significantly better when inductive mutual information is used.
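The exact posterior mean referred to here has a compact closed form in terms of the digamma function ψ (this form is assumed here, not quoted from the paper; the sketch restricts to integer Dirichlet parameters, where ψ(m) = −γ + H_{m−1} and the Euler constant cancels):

```python
import math

def psi_int(m):
    """Digamma at a positive integer m: psi(m) = -gamma + H_{m-1}."""
    return -0.5772156649015329 + sum(1.0 / k for k in range(1, m))

def mean_mi_dirichlet(counts):
    """Exact mean of mutual information under a Dirichlet distribution with
    integer parameters `counts` (a 2-D list of cell counts), in nats."""
    n = sum(sum(row) for row in counts)
    row_sums = [sum(row) for row in counts]
    col_sums = [sum(col) for col in zip(*counts)]
    total = n * psi_int(n + 1)
    total -= sum(r * psi_int(r + 1) for r in row_sums)
    total -= sum(c * psi_int(c + 1) for c in col_sums)
    total += sum(x * psi_int(x + 1) for row in counts for x in row)
    return total / n
```

For a uniform 2x2 table of all-ones parameters, the plug-in MI is 0 but the posterior mean works out to exactly 1/12 nats, illustrating the positive small-sample bias that motivates the inferential treatment.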
Statistical Measurement of Information Leakage
Cited by 16 (4 self)
Abstract:
Information theory provides a range of useful methods to analyse probability distributions, and these techniques have been successfully applied to measure information flow and the loss of anonymity in secure systems. However, previous work has tended to assume that the exact probabilities of every action are known, or that the system is nondeterministic. In this paper, we show that measures of information leakage based on mutual information and capacity can be calculated, automatically, from trial runs of a system alone. We find a confidence interval for this estimate based on the number of possible inputs, observations and samples. We have developed a tool to automatically perform this analysis, and we demonstrate our method by analysing a Mixminion anonymous remailer node.
A Comprehensive Evaluation of Mutual Information Analysis Using a Fair Evaluation Framework
Advances in Cryptology – CRYPTO 2011, LNCS, Springer, Berlin, 2011
Cited by 15 (5 self)
Abstract:
The resistance of cryptographic implementations to side-channel analysis is a matter of considerable interest to those concerned with information security. It is particularly desirable to identify the attack methodology (e.g. differential power analysis using correlation or distance-of-means as the distinguisher) able to produce the best results. Attempts to answer this question are complicated by the many and varied factors contributing to attack success: the device power consumption characteristics, an attacker's power model, the distinguisher by which measurements and model predictions are compared, the quality of the estimations, and so on. Previous work has delivered partial answers for certain restricted scenarios. In this paper we assess the effectiveness of mutual information analysis within a generic and comprehensive evaluation framework. Complementary to existing work, we present several notions/characterisations of attack success, as well as a means of indicating the amount of data required by an attack. We are thus able to identify scenarios in which mutual information offers performance advantages over other distinguishers. Furthermore, we observe an interesting feature unique to the mutual-information-based distinguisher, resembling a type of stochastic resonance, which could potentially enhance the effectiveness of such attacks over other methods in certain noisy scenarios.
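A minimal, noise-free illustration of the mutual-information distinguisher evaluated in this line of work: score every key guess by the MI between its predicted leakage and the observed traces. The cubing S-box, the Hamming-weight leakage model, the key value, and the full-codebook setting are illustrative assumptions, not details from the paper; real attacks face noisy power measurements.

```python
import math
from collections import Counter

# Toy 8-bit S-box: cubing mod 257 gives a nonlinear permutation of 0..255
# (x=0 maps to 0; no nonzero cube equals 256). This is NOT the AES S-box.
SBOX = [pow(x, 3, 257) % 256 for x in range(256)]

def hamming_weight(x):
    return bin(x).count("1")

def mi_bits(xs, ys):
    """Plug-in mutual information in bits between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def mia_scores(plaintexts, traces):
    """Score each key guess by MI(predicted leakage; observed traces)."""
    return {k: mi_bits([hamming_weight(SBOX[p ^ k]) for p in plaintexts], traces)
            for k in range(256)}

# Noise-free simulation under a hypothetical secret key.
key = 0x3A
plaintexts = list(range(256))
traces = [hamming_weight(SBOX[p ^ key]) for p in plaintexts]
scores = mia_scores(plaintexts, traces)
```

Since MI(X; L) is bounded above by the trace entropy H(L) and the correct key's predictions reproduce the traces exactly, the correct key is guaranteed to attain the maximal score (here H(L) ≈ 2.54 bits, the entropy of the Hamming weight of a uniform byte).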
An approximation to the distribution of finite sample size mutual information estimates
ICC, 2004
Cited by 15 (1 self)
Abstract:
In this paper, the distribution of mutual information between two discrete random variables is approximated by means of a second-order Taylor series expansion. Approximate expressions for the distribution of mutual information (MI) between independent random variables, conditional MI between conditionally independent variables, and MI between (weakly) dependent random variables are derived. These distributions are functions only of the available sample size and the number of realisations of the random variables; knowledge of the variables' PMFs is not required. The results are verified numerically for various cases. Exemplary application ideas in statistics and communications engineering are proposed.
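For the independent-variables case, the key fact behind results of this kind is the identity 2n·I = G, where I is the plug-in MI (in nats) of an r×c contingency table and G is the likelihood-ratio statistic, which is asymptotically chi-squared with (r−1)(c−1) degrees of freedom under independence. A quick check of the identity (a sketch, not the paper's Taylor-expansion derivation):

```python
import math

def plugin_mi(table):
    """Plug-in mutual information (nats) of a contingency table."""
    n = sum(sum(r) for r in table)
    ri = [sum(r) for r in table]
    cj = [sum(c) for c in zip(*table)]
    return sum((x / n) * math.log(x * n / (ri[i] * cj[j]))
               for i, r in enumerate(table) for j, x in enumerate(r) if x)

def g_statistic(table):
    """Likelihood-ratio (G) statistic: 2 * sum O * ln(O / E)."""
    n = sum(sum(r) for r in table)
    ri = [sum(r) for r in table]
    cj = [sum(c) for c in zip(*table)]
    return 2 * sum(x * math.log(x / (ri[i] * cj[j] / n))
                   for i, r in enumerate(table) for j, x in enumerate(r) if x)
```

Both functions only need the observed counts, consistent with the abstract's point that the variables' PMFs are not required.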
A Fair Evaluation Framework for Comparing Side-Channel Distinguishers
Journal of Cryptographic Engineering, 2011
Cited by 12 (6 self)
Abstract:
The ability to make meaningful comparisons between side-channel distinguishers is important both to attackers seeking an optimal strategy and to designers wishing to secure a device against the strongest possible threat. The usual experimental approach requires the distinguishing vectors to be estimated: outcomes do not fully represent the inherent theoretic capabilities of distinguishers and do not provide a basis for conclusive, like-for-like comparisons. This is particularly problematic in the case of mutual-information-based side-channel analysis (MIA), which is notoriously sensitive to the choice of estimator. We propose an evaluation framework which captures those theoretic characteristics of attack distinguishers having the strongest bearing on an attacker's general ability to estimate with practical success, thus enabling like-for-like comparisons between different distinguishers in various leakage scenarios. We apply our framework to an evaluation of MIA relative to its rather more well-established correlation-based predecessor and a proposed variant inspired by the Kolmogorov-Smirnov distance. Our analysis makes sense of the rift between the a priori reasoning in favour of MIA and the disappointing empirical findings of previous comparative studies, and moreover reveals several unprecedented features of the attack distinguishers in terms of their sensitivity to noise. It also explores, to our knowledge for the first time, theoretic properties of near-generic power models previously proposed (and experimentally verified) for use in attacks targeting injective functions.
Robust estimators under the Imprecise Dirichlet Model (extended version)
2003
Cited by 10 (1 self)
Abstract:
Walley’s Imprecise Dirichlet Model (IDM) for categorical data overcomes several fundamental problems from which other approaches to uncertainty suffer. Yet, to be useful in practice, one needs efficient ways of computing the imprecise/robust sets or intervals. The main objective of this work is to derive exact, conservative, and approximate robust and credible interval estimates under the IDM for a large class of statistical estimators, including the entropy and the mutual information.
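For the entropy, the robust range under the IDM can be bracketed directly: the credal set {(n_i + s·t_i)/(N + s) : t in the probability simplex} is itself a simplex, so concavity of entropy puts the minimum at a vertex, while a water-filling step finds the maximum. A sketch of that idea only (function names are mine; the paper derives exact, conservative, and approximate intervals far more generally):

```python
import math

def entropy(p):
    """Shannon entropy in nats of a probability vector."""
    return -sum(q * math.log(q) for q in p if q > 0)

def idm_entropy_range(counts, s=1.0):
    """Range of the plug-in entropy over the IDM credal set for counts n_i
    and hyperparameter s. A sketch, not the paper's credible intervals."""
    N = sum(counts)
    # Minimum: entropy is concave and the credal set is a simplex, so the
    # minimum sits at a vertex, i.e. all extra mass s on one category.
    low = min(entropy([(c + s * (i == j)) / (N + s) for i, c in enumerate(counts)])
              for j in range(len(counts)))
    # Maximum: water-filling -- raise the smallest counts to a common level
    # lam with sum(max(c, lam)) = N + s, found by bisection, then normalize.
    lo_l, hi_l = 0.0, float(N + s)
    for _ in range(200):
        lam = (lo_l + hi_l) / 2.0
        if sum(max(c, lam) for c in counts) < N + s:
            lo_l = lam
        else:
            hi_l = lam
    q = [max(c, lam) for c in counts]
    high = entropy([x / sum(q) for x in q])
    return low, high
```

The ordinary plug-in distribution n_i/N lies inside the credal set (take t_i = n_i/N), so its entropy always falls between the two bounds.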
Fast nonparametric Bayesian inference on infinite trees
In Proc. 15th International Conference on Artificial Intelligence and Statistics (AISTATS 2005), 2005
Cited by 10 (5 self)
Abstract:
Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. A Bayesian would assign a data-independent prior probability to “subdivide”, which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive distribution, the effective model dimension, and other quantities.
Bayesian and Information-Theoretic Tools for Neuroscience
School of Psychology, University of, 2006
Cited by 9 (7 self)
Abstract:
The overarching purpose of the studies presented in this report is the exploration of the uses of information theory and Bayesian inference applied to neural codes. Two approaches were taken: first, starting from first principles, a coding mechanism is proposed and the results are compared to a biological neural code; second, tools from information theory are used to measure the information contained in a biological neural code.
Chapter 3: The REC model proposed by Harpur and Prager [33] codes inputs into a sparse, factorial representation, maintaining reconstruction accuracy. Here I propose a modification of the REC model to determine the optimal network dimensionality. The resulting code for unfiltered natural images is accurate, highly sparse, and a large fraction of the code elements show localized features. Furthermore, I propose an activation algorithm for the network that is faster and more accurate than a gradient-descent-based activation method. Moreover, it is demonstrated that asymmetric noise promotes sparseness.
Chapter 4: A fast, exact alternative to Bayesian classification is introduced. Computational time is quadratic in both the number of observed data points and the number of degrees of freedom of the underlying model. As an example application, responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analyzed.
Chapter 5: I present an exact Bayesian treatment of a simple, yet sufficiently general, probability distribution model. Given the data, the model complexity and the exact values of the expectations of entropies and their variances can be computed with polynomial effort. The expectation of the mutual information thus becomes available too, along with a strict upper bound on its variance. The resulting algorithm is first tested on artificial data; to that end, an information-theoretic similarity measure is derived. Second, the algorithm is demonstrated to be useful in neuroscience by studying the information content of the neural responses analyzed in the previous chapter. It is shown that the information throughput of STS neurons is maximized for stimulus durations of ≈ 60 ms.
The Use of a Bayesian Confidence Propagation Neural Network in Pharmacovigilance
2003
Cited by 8 (0 self)
Abstract:
- “... entered into the main database due to varying degrees of erroneous data present can be searched” should read: “including the ability to search reports that, due to varying degrees of erroneous data present, have not been entered into the database.”
- Page 27, second-last line: “recalling” should read “rechallenge”.
- Page 36, section 2.2.2, first paragraph should read: “Different methods of quantitative analysis using spontaneous reporting: a) make use of periodic or continuous testing; b) are based on reporting rates (i.e. are linked to denominator data) or reporting numbers; and c) examine the total number, or rate of increase (in single or multiple time periods), of reporting. Eight combinations of the above choices are therefore possible.”
- Page 37, line 3: “incidence rates” should read “reporting rates”.
- Page 38, line 29: “interaction.” should read “interaction detection.”