Results 1–10 of 85
Verifying privacy-type properties of electronic voting protocols
"... Electronic voting promises the possibility of a convenient, efficient and secure facility for recording and tallying votes in an election. Recently highlighted inadequacies of implemented systems have demonstrated the importance of formally verifying the underlying voting protocols. We study three p ..."
Abstract

Cited by 118 (37 self)
 Add to MetaCart
Electronic voting promises the possibility of a convenient, efficient and secure facility for recording and tallying votes in an election. Recently highlighted inadequacies of implemented systems have demonstrated the importance of formally verifying the underlying voting protocols. We study three privacy-type properties of electronic voting protocols: in increasing order of strength, they are vote-privacy, receipt-freeness, and coercion-resistance. We use the applied pi calculus, a formalism well adapted to modelling such protocols, which has the advantage of being based on well-understood concepts. The privacy-type properties are expressed using observational equivalence, and we show, in accordance with intuition, that coercion-resistance implies receipt-freeness, which implies vote-privacy. We illustrate our definitions on three electronic voting protocols from the literature. Ideally, these three properties should hold even if the election officials are corrupt. However, protocols that were designed to satisfy receipt-freeness or coercion-resistance may not do so in the presence of corrupt officials. Our model and definitions allow us to specify and easily change which authorities are supposed to be trustworthy.
On the Foundations of Quantitative Information Flow
"... Abstract. There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and sidechannel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate “sma ..."
Abstract

Cited by 112 (10 self)
 Add to MetaCart
(Show Context)
Abstract. There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard non-interference properties, letting us tolerate "small" leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information. But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees: the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk) which measures uncertainty using Rényi's min-entropy rather than Shannon entropy.
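The abstract's central claim is easy to check numerically: a distribution can carry many bits of Shannon entropy while still being guessable in one try half the time. Below is a minimal sketch (distribution chosen for illustration, not from the paper) of the three quantities involved: Shannon entropy, vulnerability, and Rényi min-entropy.

```python
import math

def shannon_entropy(dist):
    """H(X) = -sum p * log2(p) over the distribution's probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def vulnerability(dist):
    """V(X): probability the secret is guessed correctly in one try
    by an adversary who picks the most likely value."""
    return max(dist)

def min_entropy(dist):
    """Renyi min-entropy: H_inf(X) = -log2 V(X)."""
    return -math.log2(vulnerability(dist))

# A "spiky" distribution: one value has probability 1/2, the remaining
# mass is spread uniformly over 2**20 other values.
n = 2 ** 20
dist = [0.5] + [0.5 / n] * n

print(shannon_entropy(dist))  # large: 11 bits
print(vulnerability(dist))    # 0.5: guessable in one try half the time
print(min_entropy(dist))      # exactly 1 bit
```

Growing `n` pushes the Shannon entropy arbitrarily high while the vulnerability stays pinned at 0.5, which is the failure of the consensus definitions the paper argues.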
Quantifying location privacy
In: IEEE Symposium on Security and Privacy, 2011
"... Abstract. Mobile users expose their location to potentially untrusted entities by using locationbased services. Based on the frequency of location exposure in these applications, we divide them into two main types: Continuous and Sporadic. These two location exposure types lead to different threats ..."
Abstract

Cited by 69 (18 self)
 Add to MetaCart
(Show Context)
Abstract. Mobile users expose their location to potentially untrusted entities by using location-based services. Based on the frequency of location exposure in these applications, we divide them into two main types: continuous and sporadic. These two location exposure types lead to different threats. For example, in the continuous case, the adversary can track users over time and space, whereas in the sporadic case, his focus is more on localizing users at certain points in time. We propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the location-privacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model. This framework enables us to customize the LPPMs to the employed location-based application, in order to provide higher location privacy for the users. In this paper, we formalize localization attacks for the case of sporadic location exposure, using Bayesian inference for Hidden Markov Processes. We also quantify user location privacy with respect to adversaries with two different forms of background knowledge: those who only know the geographical distribution of users over the considered regions, and those who also know how users move between the regions (i.e., their mobility pattern). Using the Location-Privacy Meter tool, we examine the effectiveness of the following techniques in increasing the expected error of the adversary in the localization attack: location obfuscation and fake location injection mechanisms for anonymous traces.
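The sporadic localization attack described above can be sketched in a few lines: the adversary combines background knowledge (a prior over regions) with the obfuscation mechanism's channel via Bayes' rule, and the user's privacy is the adversary's expected estimation error. All names, probabilities, and the 0/1 distance below are illustrative assumptions, not the paper's model or the Location-Privacy Meter implementation.

```python
# Hypothetical sketch of a Bayesian localization attack. The adversary
# observes an obfuscated region and infers the user's true region.
regions = ["A", "B", "C"]
prior = {"A": 0.5, "B": 0.3, "C": 0.2}   # adversary's background knowledge
# Obfuscation mechanism (LPPM): p(observed | true region)
lppm = {
    "A": {"A": 0.6, "B": 0.2, "C": 0.2},
    "B": {"A": 0.2, "B": 0.6, "C": 0.2},
    "C": {"A": 0.2, "B": 0.2, "C": 0.6},
}
# Toy distance between regions: 0 if equal, else 1 (a real model would
# use geographic distance).
dist = lambda r, s: 0.0 if r == s else 1.0

def posterior(obs):
    """Bayes: p(region | obs) proportional to prior(region) * p(obs | region)."""
    joint = {r: prior[r] * lppm[r][obs] for r in regions}
    z = sum(joint.values())
    return {r: p / z for r, p in joint.items()}

def expected_error(obs):
    """The adversary guesses the region minimizing expected distance under
    the posterior; the achieved minimum is the user's location privacy."""
    post = posterior(obs)
    return min(sum(post[r] * dist(r, guess) for r in regions)
               for guess in regions)

print(posterior("A"))
print(expected_error("A"))   # → 0.25
```

A stronger adversary who also knows the mobility pattern would replace the static prior with forward inference over a hidden Markov model, as the abstract notes.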
On the Bayes Risk in Information-Hiding Protocols
"... Randomized protocols for hiding private information can be regarded as noisy channels in the informationtheoretic sense, and the inference of the concealed information can be regarded as a hypothesistesting problem. We consider the Bayesian approach to the problem, and investigate the probability ..."
Abstract

Cited by 31 (17 self)
 Add to MetaCart
(Show Context)
Randomized protocols for hiding private information can be regarded as noisy channels in the information-theoretic sense, and the inference of the concealed information can be regarded as a hypothesis-testing problem. We consider the Bayesian approach to the problem, and investigate the probability of error associated with the MAP (Maximum A Posteriori Probability) inference rule. Our main result is a constructive characterization of a convex base of the probability of error, which allows us to compute its maximum value (over all possible input distributions), and to identify upper bounds for it in terms of simple functions. As a side result, we are able to improve the Hellman-Raviv and the Santhi-Vardy bounds expressed in terms of conditional entropy. We then discuss an application of our methodology to the Crowds protocol, and in particular we show how to compute the bounds on the probability that an adversary breaks anonymity.
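The quantity this abstract studies is concrete enough to compute directly: for a channel matrix p(y|x) and prior p(x), the MAP adversary guesses, for each output, the input with the largest joint probability, and the probability of error is what remains. The two-input channel below is illustrative, not from the paper.

```python
prior = [0.5, 0.5]                 # p(x) over two secret inputs
channel = [                        # p(y | x): rows = inputs, cols = outputs
    [0.8, 0.2],
    [0.3, 0.7],
]

def map_error(prior, channel):
    """P_e = 1 - sum_y max_x p(x) * p(y|x): the MAP rule's probability
    of error, i.e. the Bayes risk for this prior."""
    n_outputs = len(channel[0])
    correct = sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(n_outputs)
    )
    return 1.0 - correct

print(map_error(prior, channel))   # → 0.25
```

The paper's contribution concerns the shape of this function over all priors (its convex base and maximum value); the sketch only evaluates it at one prior.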
Vulnerability bounds and leakage resilience of blinded cryptography under timing attacks
In 2010 IEEE Computer Security Foundations, 2010
"... Abstract—We establish formal bounds for the number of minentropy bits that can be extracted in a timing attack against a cryptosystem that is protected by blinding, the stateofthe art countermeasure against timing attacks. Compared with existing bounds, our bounds are both tighter and of greater ..."
Abstract

Cited by 29 (7 self)
 Add to MetaCart
(Show Context)
Abstract—We establish formal bounds for the number of min-entropy bits that can be extracted in a timing attack against a cryptosystem that is protected by blinding, the state-of-the-art countermeasure against timing attacks. Compared with existing bounds, our bounds are both tighter and of greater operational significance, in that they directly address the key's one-guess vulnerability. Moreover, we show that any semantically secure public-key cryptosystem remains semantically secure in the presence of timing attacks if the implementation is protected by blinding and bucketing. This result shows that, by considering (and justifying) more optimistic models of leakage than recent proposals for leakage-resilient cryptosystems, one can achieve provable resistance against side-channel attacks for standard cryptographic primitives.
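The "bucketing" countermeasure named in the abstract is simple to sketch: the implementation only ever responds at one of a small, fixed set of durations, padding its actual running time up to the next bucket boundary, so a timing observation reveals at most log2 of the number of buckets per measurement. The bucket boundaries below are made-up values for illustration; this is not the paper's construction in detail.

```python
import bisect

BUCKETS_MS = [10.0, 20.0, 40.0, 80.0]   # allowed observable durations

def bucketed_duration(actual_ms):
    """Return the smallest bucket boundary >= the actual running time;
    the implementation would delay its response until that deadline,
    so the attacker only ever observes a bucket boundary."""
    i = bisect.bisect_left(BUCKETS_MS, actual_ms)
    if i == len(BUCKETS_MS):
        raise ValueError("running time exceeds the largest bucket")
    return BUCKETS_MS[i]

print(bucketed_duration(13.7))  # → 20.0
print(bucketed_duration(10.0))  # → 10.0
```

With 4 buckets the adversary learns at most 2 bits per observation, which is the kind of cap the paper's min-entropy bounds make precise (in combination with blinding).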
Information-theoretic bounds for differentially private mechanisms
In 24th IEEE Computer Security Foundations Symposium, CSF 2011. IEEE Computer Society, Los Alamitos
"... Abstract—There are two active and independent lines of research that aim at quantifying the amount of information that is disclosed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensu ..."
Abstract

Cited by 23 (2 self)
 Add to MetaCart
(Show Context)
Abstract—There are two active and independent lines of research that aim at quantifying the amount of information that is disclosed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensus guarantee used for privacy-preserving data analysis. On the other hand, information-theoretic notions of leakage are used for characterizing the confidentiality properties of programs in language-based settings. The purpose of this article is to establish formal connections between both notions of confidentiality, and to compare them in terms of the security guarantees they deliver. We obtain the following results. First, we establish upper bounds for the leakage of every ε-differentially private mechanism in terms of ε and the size of the mechanism's input domain. We achieve this by identifying and leveraging a connection to coding theory. Second, we construct a class of ε-differentially private channels whose leakage grows with the size of their input domains. Using these channels, we show that there cannot be domain-size-independent bounds for the leakage of all ε-differentially private mechanisms. Moreover, we perform an empirical evaluation that shows that the leakage of these channels almost matches our theoretical upper bounds, demonstrating the accuracy of these bounds. Finally, we show that the question of providing optimal upper bounds for the leakage of ε-differentially private mechanisms in terms of rational functions of ε is in fact decidable.
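The two objects the article connects can be exhibited on the smallest possible example: binary randomized response is an ε-differentially private channel, and its information-theoretic leakage is a mutual information. The sketch below checks the ε-DP ratio condition and computes the leakage for a uniform input; it illustrates the two notions side by side, not the paper's bounds themselves.

```python
import math

eps = math.log(3)                          # privacy parameter: e^eps = 3
p = math.exp(eps) / (1 + math.exp(eps))    # P(report the true bit) = 0.75

# Channel p(y|x) of binary randomized response: rows = true bit.
channel = [[p, 1 - p], [1 - p, p]]

# ε-DP check: for every output, the probabilities under the two
# neighbouring inputs differ by a factor of at most e^eps.
for y in (0, 1):
    ratio = channel[0][y] / channel[1][y]
    assert max(ratio, 1 / ratio) <= math.exp(eps) + 1e-12

def h2(q):
    """Binary Shannon entropy in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

# Mutual-information leakage for a uniform input bit through this
# symmetric channel: I(X;Y) = H(Y) - H(Y|X) = 1 - h2(p).
leakage_bits = 1 - h2(p)
print(leakage_bits)
```

Here the input domain has size 2; the paper's negative result says that as the domain grows, no bound in ε alone can cap the leakage of all ε-DP mechanisms.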
Measuring anonymity with relative entropy
In Proceedings of the 4th International Workshop on Formal Aspects in Security and Trust, volume 4691 of LNCS, 2007
"... Abstract. Anonymity is the property of maintaining secret the identity of users performing a certain action. Anonymity protocols often use random mechanisms which can be described probabilistically. In this paper, we propose a probabilistic process calculus to describe protocols for ensuring anonymi ..."
Abstract

Cited by 20 (2 self)
 Add to MetaCart
(Show Context)
Abstract. Anonymity is the property of keeping secret the identity of users performing a certain action. Anonymity protocols often use random mechanisms which can be described probabilistically. In this paper, we propose a probabilistic process calculus to describe protocols for ensuring anonymity, and we use the notion of relative entropy from information theory to measure the degree of anonymity these protocols can guarantee. Furthermore, we prove that the operators in the probabilistic process calculus are non-expansive with respect to this measuring method. We illustrate our approach using the example of the Dining Cryptographers Problem.
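The relative-entropy measure named in the abstract is just the KL divergence between the distribution over users that the protocol actually reveals and the ideal (uniform) one: 0 bits means perfect anonymity, larger values mean more identifying leakage. The distributions below are illustrative, not from the paper.

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) = sum p_i * log2(p_i / q_i), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]     # ideal: all 4 users equally likely
observed = [0.40, 0.30, 0.20, 0.10]    # what the protocol actually reveals

print(kl_divergence(uniform, uniform))   # → 0.0 (perfect anonymity)
print(kl_divergence(observed, uniform))  # positive: some leakage
```

Non-expansiveness of the calculus operators, which the paper proves, means composing processes cannot increase this distance, so anonymity guarantees are preserved under composition.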
Probability of Error in Information-Hiding Protocols
 in "Proceedings of the 20th IEEE Computer Security Foundations Symposium (CSF20)", IEEE Computer Society
"... There are many bounds known in literature for the Bayes ’ risk. One of these is the equivocation bound, due to Rényi [22], which states that the probability of error is bound by the conditional entropy of the channel’s input given the output. Later, Hellman and Raviv improved this bound by half [13] ..."
Abstract

Cited by 17 (5 self)
 Add to MetaCart
(Show Context)
There are many bounds known in the literature for the Bayes risk. One of these is the equivocation bound, due to Rényi [22], which states that the probability of error is bounded by the conditional entropy of the channel's input given the output. Later, Hellman and Raviv improved this bound by half [13]. Recently, Santhi and Vardy have proposed a new bound that depends exponentially on the (opposite of the) conditional entropy, and which considerably improves the Hellman-Raviv bound in the case of multi- ...
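The two bounds mentioned have simple closed forms in terms of the conditional entropy H(X|Y): the Hellman-Raviv bound is P_e ≤ H(X|Y)/2 and the Santhi-Vardy bound is P_e ≤ 1 − 2^(−H(X|Y)). The sketch below checks both on a toy binary symmetric channel with a uniform input, where H(X|Y) and the exact MAP error are easy to write down; the channel parameters are illustrative.

```python
import math

def h2(q):
    """Binary Shannon entropy in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

flip = 0.1   # binary symmetric channel flips the bit with probability 0.1

# Uniform input + symmetric channel: H(X|Y) = h2(flip), and the MAP rule
# errs exactly when the channel flips, so P_e = flip.
cond_entropy = h2(flip)
p_error = flip

hellman_raviv = cond_entropy / 2          # P_e <= H(X|Y) / 2
santhi_vardy = 1 - 2 ** (-cond_entropy)   # P_e <= 1 - 2**(-H(X|Y))

print(p_error, hellman_raviv, santhi_vardy)
assert p_error <= hellman_raviv
assert p_error <= santhi_vardy
```

On this binary example Hellman-Raviv happens to be the tighter of the two; the advantage of Santhi-Vardy that the text refers to shows up with more than two hypotheses.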
Computing the Leakage of Information-Hiding Systems
"... Abstract. We address the problem of computing the information leakage of a system in an efficient way. We propose two methods: one based on reducing the problem to reachability, and the other based on techniques from quantitative counterexample generation. The second approach can be used either for ..."
Abstract

Cited by 17 (10 self)
 Add to MetaCart
(Show Context)
Abstract. We address the problem of computing the information leakage of a system in an efficient way. We propose two methods: one based on reducing the problem to reachability, and the other based on techniques from quantitative counterexample generation. The second approach can be used either for exact or approximate computation, and provides feedback for debugging. These methods can also be applied in the case in which the input distribution is unknown. We then consider the interactive case and point out that the definition of associated channel proposed in the literature is not sound. We show, however, that the leakage can still be defined consistently, and that our methods extend smoothly.
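The leakage being computed is the mutual information of the system viewed as a channel matrix. For small systems it can be evaluated by brute force directly from the definition, as sketched below on an illustrative matrix; the paper's contribution is doing this efficiently for large systems via reachability and counterexample-generation techniques, which this sketch does not attempt.

```python
import math

prior = [0.5, 0.5]
channel = [[0.9, 0.1],    # p(y|x) for secret x = 0
           [0.4, 0.6]]    # p(y|x) for secret x = 1

def mutual_information(prior, channel):
    """I(X;Y) = sum_{x,y} p(x) p(y|x) log2( p(y|x) / p(y) ), in bits."""
    n_out = len(channel[0])
    p_y = [sum(prior[x] * channel[x][y] for x in range(len(prior)))
           for y in range(n_out)]
    total = 0.0
    for x in range(len(prior)):
        for y in range(n_out):
            if channel[x][y] > 0:
                total += prior[x] * channel[x][y] \
                         * math.log2(channel[x][y] / p_y[y])
    return total

print(mutual_information(prior, channel))
```

When the input distribution is unknown, one maximizes this quantity over priors (the channel capacity), which is the variant the abstract also covers.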