Results 1–10 of 47
On the Foundations of Quantitative Information Flow
Cited by 118 (10 self)
There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate “small” leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information. But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees; the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk), which measures uncertainty using Rényi’s min-entropy rather than Shannon entropy.
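The abstract's central claim, that Shannon entropy can be arbitrarily large while the secret remains easy to guess, is easy to reproduce numerically. Below is a minimal sketch (not from the paper, all names my own) contrasting Shannon entropy with min-entropy, the negative log of the vulnerability:

```python
import math

def shannon_entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

def min_entropy(dist):
    # -log2 of the vulnerability: the probability of the best single guess
    return -math.log2(max(dist))

# one secret value of probability 1/2, plus 2^20 values sharing the rest
n = 2 ** 20
dist = [0.5] + [0.5 / n] * n

print(shannon_entropy(dist))  # high: looks like a strong secret
print(min_entropy(dist))      # 1.0 bit: guessable in one try half the time
```

Growing n inflates the Shannon entropy without bound, yet the min-entropy stays pinned at one bit, which is the point of the paper's vulnerability-based foundation.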
Belief in information flow
In Proc. 18th IEEE Computer Security Foundations Workshop, 2005
Cited by 74 (11 self)
Information leakage traditionally has been defined to occur when uncertainty about secret data is reduced. This uncertainty-based approach is inadequate for measuring information flow when an attacker is making assumptions about secret inputs and these assumptions might be incorrect; such attacker beliefs are an unavoidable aspect of any satisfactory definition of leakage. To reason about information flow based on beliefs, a model is developed that describes how attacker beliefs change due to the attacker’s observation of the execution of a probabilistic (or deterministic) program. The model leads to a new metric for quantitative information flow that measures accuracy rather than uncertainty of beliefs.
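One common formulation of this accuracy metric measures the relative entropy from the point-mass "reality" distribution to the attacker's belief, which collapses to -log2 of the probability the belief assigns to the actual secret; leakage is then the improvement in accuracy after observing a run. The sketch below assumes that formulation and a deterministic program (the password checker and all names are hypothetical illustrations, not the paper's code):

```python
import math

def bayes_update(belief, program, observation):
    # keep only the secrets consistent with the observed output, renormalize
    post = {s: p for s, p in belief.items() if program(s) == observation}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}

def belief_error(belief, true_secret):
    # relative entropy from the point mass on the actual secret to the
    # belief collapses to -log2 b(true_secret)
    return -math.log2(belief[true_secret])

# hypothetical password checker that reports whether a guess matched
prior = {s: 0.25 for s in "abcd"}
check_a = lambda s: s == "a"        # the attacker tries password "a"
true_secret = "b"
post = bayes_update(prior, check_a, check_a(true_secret))

leakage = belief_error(prior, true_secret) - belief_error(post, true_secret)
# improvement in accuracy: 2 - log2(3), about 0.415 bits
```

Note that the update also works when the prior belief is wrong (e.g., skewed toward "a"), which is exactly the case the uncertainty-based metrics mishandle.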
Quantifying information flow with beliefs
Cornell University, 2006
Cited by 32 (5 self)
To reason about information flow, a new model is developed that describes how attacker beliefs change due to the attacker’s observation of the execution of a probabilistic (or deterministic) program. The model enables compositional reasoning about information flow from attacks involving sequences of interactions. The model also supports a new metric for quantitative information flow that measures accuracy of an attacker’s beliefs. Applying this new metric reveals inadequacies of traditional information flow metrics, which are based on reduction of uncertainty. However, the new metric is sufficiently general that it can be instantiated to measure either accuracy or uncertainty. The new metric can also be used to reason about misinformation; deterministic programs are shown to be incapable of producing misinformation. Additionally, programs in which nondeterministic choices are made by insiders, who collude with attackers, can be analyzed.
A Provably Secure And Efficient Countermeasure Against Timing Attacks
Cited by 22 (4 self)
We show that the amount of information about the key that an unknown-message attacker can extract from a deterministic side-channel is bounded from above by |O| log₂(n + 1) bits, where n is the number of side-channel measurements and O is the set of possible observations. We use this bound to derive a novel countermeasure against timing attacks, where the strength of the security guarantee can be freely traded for the resulting performance penalty. We give algorithms that efficiently and optimally adjust this trade-off for given constraints on the side-channel leakage or on the efficiency of the cryptosystem. Finally, we perform a case study that shows that applying our countermeasure leads to implementations with minor performance overhead and formal security guarantees.
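The stated bound depends only on the size of the observation set and the number of measurements, which is what makes the countermeasure work: coarsening the observable timing behavior (e.g., padding execution times into a few fixed buckets) shrinks |O| and hence the bound. A minimal sketch of the bound, assuming the |O| log₂(n + 1) form quoted in the abstract (bucket counts and measurement count are illustrative):

```python
import math

def leakage_bound(num_observations, num_measurements):
    # upper bound on key bits extractable by an unknown-message attacker:
    # |O| * log2(n + 1), with n measurements and |O| distinct observations
    return num_observations * math.log2(num_measurements + 1)

# bucketing execution times into fewer distinguishable durations shrinks |O|,
# trading performance (coarser padding) for a tighter leakage guarantee
for buckets in (64, 8, 2):
    print(f"|O| = {buckets:2d}: at most {leakage_bound(buckets, 1000):.1f} bits")
```

The monotone dependence on |O| is the knob the paper's algorithms tune against a performance constraint.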
Quantification of Integrity
Cited by 20 (0 self)
Two information-flow integrity measures are introduced: contamination and suppression. The former is dual to information-flow confidentiality, and the latter is analogous to the standard model of channel reliability from information theory. The relationship between quantitative integrity, confidentiality, and database privacy is examined.
Computing the Leakage of Information-Hiding Systems
Cited by 17 (10 self)
We address the problem of computing the information leakage of a system in an efficient way. We propose two methods: one based on reducing the problem to reachability, and the other based on techniques from quantitative counterexample generation. The second approach can be used either for exact or approximate computation, and provides feedback for debugging. These methods can also be applied in the case in which the input distribution is unknown. We then consider the interactive case and point out that the definition of associated channel proposed in the literature is not sound. We show, however, that the leakage can still be defined consistently, and that our methods extend smoothly.
Quantitative Information Flow – Verification Hardness and Possibilities
Cited by 15 (0 self)
Researchers have proposed formal definitions of quantitative information flow based on information-theoretic notions such as the Shannon entropy, the min-entropy, the guessing entropy, and channel capacity. This paper investigates the hardness and possibilities of precisely checking and inferring quantitative information flow according to such definitions. We prove that, even for just comparing two programs to determine which has the larger flow, none of the definitions is a k-safety property for any k, and therefore is not amenable to the self-composition technique that has been successfully applied to precisely checking noninterference. We also show a complexity-theoretic gap with noninterference by proving that, for loop-free boolean programs whose noninterference is coNP-complete, the comparison problem is #P-hard for all of the definitions. For positive results, we show that universally quantifying the distribution in the comparison problem (that is, comparing two programs according to the entropy-based definitions to determine which has the larger flow for all distributions) is a 2-safety problem in general and is coNP-complete when restricted to loop-free boolean programs. We prove this by showing that the problem is equivalent to a simple relation naturally expressing the fact that one program is more secure than the other. We prove that the relation also refines the channel-capacity-based definition, and that it can be precisely checked via self-composition as well as the “interleaved” self-composition technique.
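To make the comparison problem concrete: for a deterministic program, channel capacity reduces to log₂ of the number of distinct outputs the program can produce, so the capacity-based comparison amounts to counting output equivalence classes. A minimal sketch under that standard reduction (the example programs are illustrative, not from the paper):

```python
import math

def capacity(program, secrets):
    # for a deterministic program, channel capacity is log2 of the number
    # of distinct outputs it can produce over the secret space
    return math.log2(len({program(s) for s in secrets}))

secrets = range(16)              # a 4-bit secret
p1 = lambda s: s % 2             # reveals the low bit
p2 = lambda s: s % 4             # reveals the low two bits
print(capacity(p1, secrets), capacity(p2, secrets))  # 1.0 2.0
```

The paper's hardness results say that deciding such comparisons in general (and under the entropy-based definitions, over all distributions) is far harder than this brute-force enumeration suggests once programs are given symbolically.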
Quantifying Information Flow Using Min-Entropy
Cited by 13 (1 self)
Quantitative theories of information flow are of growing interest, due to the fundamental importance of protecting confidential information from improper disclosure, together with the unavoidability of “small” leaks in practical systems. But while it is tempting to measure leakage using classic information-theoretic concepts like Shannon entropy and mutual information, these turn out not to provide very satisfactory security guarantees. As a result, several researchers have developed an alternative theory based on Rényi’s min-entropy. In this theory, uncertainty is measured in terms of a random variable’s vulnerability to being guessed in one try by an adversary; note that this is the complement of the Bayes risk. In this paper, we survey the main theory of min-entropy leakage in deterministic and probabilistic systems, including comparisons with mutual-information leakage, results on min-capacity, results on channels in cascade, and techniques for calculating min-entropy leakage in systems.
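In this theory, leakage is log₂ of the ratio between the posterior and prior vulnerability. For a deterministic program under a uniform prior, the expected posterior vulnerability is (number of distinct outputs) / (number of secrets), since the attacker makes one best guess per output class, so the leakage is simply log₂ of the output count. A minimal sketch of that special case (example program is illustrative):

```python
import math

def min_entropy_leakage(secrets, program):
    # uniform prior: prior vulnerability is 1/|S|; for a deterministic
    # program the expected posterior vulnerability is |outputs|/|S|
    prior_v = 1 / len(secrets)
    post_v = len({program(s) for s in secrets}) / len(secrets)
    return math.log2(post_v / prior_v)

# a 3-bit secret whose low two bits are revealed by the output
print(min_entropy_leakage(range(8), lambda s: s % 4))  # 2.0 bits
```

Probabilistic systems need the full channel matrix rather than output counting, which is where the survey's min-capacity and cascade results come in.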
Quantifying Timing Leaks and Cost Optimisation
Cited by 12 (1 self)
We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage and use this estimate for cost optimisation.