Results 1–10 of 272
Towards an Information Theoretic Metric for Anonymity
2002
Cited by 311 (18 self)
In this paper we look closely at the popular metric of anonymity, the anonymity set, and point out a number of problems associated with it. We then propose an alternative information theoretic measure of anonymity which takes into account the probabilities of users sending and receiving the messages, and show how to calculate it for a message in a standard mix-based anonymity system. We also use our metric to compare a pool mix to a traditional threshold mix, a comparison that was impossible using anonymity sets, and show how the maximum route length restriction that exists in some fielded anonymity systems can enable the attacker to perform more powerful traffic analysis. Finally, we discuss open problems and future work on anonymity measurements.
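As an editor's illustration of the entropy idea in this abstract (a minimal sketch, with all names my own, not code from the paper): the attacker assigns each candidate sender a probability, and anonymity is measured as the Shannon entropy of that distribution, so a skewed distribution scores lower than a uniform one over the same anonymity set.

```python
import math

def anonymity_entropy(probabilities):
    """Shannon entropy (in bits) of the attacker's distribution over
    candidate senders; higher entropy means better anonymity."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four suspects, either equally likely or with one prime suspect.
# The anonymity set has size 4 in both cases, but the metric differs:
uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.70, 0.10, 0.10, 0.10]

print(anonymity_entropy(uniform))      # 2.00 bits
print(anonymity_entropy(skewed))       # ~1.36 bits
print(2 ** anonymity_entropy(skewed))  # ~2.56 "effective" senders
```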
Robust De-anonymization of Large Sparse Datasets
IEEE Symposium on Security and Privacy, 2008
Privacy in Electronic Commerce and the Economics of Immediate Gratification
2004
Cited by 128 (16 self)
Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision-making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur even when individuals perceive the risks from not protecting their privacy as significant.
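The self-control and immediate-gratification models this abstract refers to are commonly formalized as quasi-hyperbolic (beta-delta) discounting; the sketch below illustrates that standard model under assumed numbers, and is not the paper's own formulation.

```python
def exponential_value(payoffs, delta=0.95):
    """Time-consistent, exponentially discounted utility of a payoff stream."""
    return sum((delta ** t) * u for t, u in enumerate(payoffs))

def beta_delta_value(payoffs, beta=0.6, delta=0.95):
    """Quasi-hyperbolic utility: every future period carries an extra
    factor beta < 1, so immediate payoffs loom disproportionately large."""
    return payoffs[0] + beta * sum(
        (delta ** t) * u for t, u in enumerate(payoffs[1:], start=1))

# Hypothetical choice: disclose data now for a small reward (+5 today)
# at the risk of a larger privacy loss later (-8 in period 4).
disclose = [5, 0, 0, 0, -8]

print(exponential_value(disclose))  # about -1.52: the patient agent declines
print(beta_delta_value(disclose))   # about +1.09: the present-biased agent discloses
```

Under identical preferences over outcomes, only the present-biased agent trades future privacy for an immediate reward, matching the attitude/behavior gap described above.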
Protecting location privacy through path confusion
In SECURECOMM ’05: Proceedings of the First International Conference on Security and Privacy for Emerging Areas in Communications Networks, 2005
Cited by 108 (3 self)
We present a path perturbation algorithm which can maximize users’ location privacy given a quality of service constraint. This work concentrates on a class of applications that continuously collect location samples from a large group of users, where just removing user identifiers from all samples is insufficient because an adversary could use trajectory information to track paths and follow users’ footsteps home. The key idea underlying the perturbation algorithm is to cross paths in areas where at least two users meet. This increases the chances that an adversary would confuse the paths of different users. We first formulate this privacy problem as a constrained optimization problem and then develop heuristics for an efficient privacy algorithm. Using simulations with randomized movement models, we verify that the algorithm improves privacy while minimizing the perturbation of location samples.
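A minimal sketch of the path-crossing idea (the threshold, swap rule, and all names are illustrative assumptions, not the authors' algorithm): where two pseudonymous tracks come close, swap their tails, so an adversary who links samples by proximity may follow the wrong user afterwards.

```python
import math

def cross_paths(path_a, path_b, eps=10.0):
    """Swap the tails of two equally sampled (x, y) tracks at the first
    time step where the users meet within eps meters."""
    for t, (pa, pb) in enumerate(zip(path_a, path_b)):
        if math.dist(pa, pb) <= eps:
            return path_a[:t] + path_b[t:], path_b[:t] + path_a[t:]
    return path_a, path_b  # no meeting point: tracks are left unchanged

a = [(0, 0), (5, 5), (10, 10), (15, 15)]
b = [(20, 0), (16, 5), (10, 12), (5, 20)]
a2, b2 = cross_paths(a, b)  # tails swap at t = 2, where the paths meet
```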
Anonymity and Information Hiding in Multiagent Systems
2003
Cited by 94 (3 self)
We provide a framework for reasoning about information-hiding requirements in multiagent systems and for reasoning about anonymity in particular. Our framework employs the modal logic of knowledge within the context of the runs and systems framework, much in the spirit of our earlier work on secrecy [9]. We give several definitions of anonymity with respect to agents, actions, and observers in multiagent systems, and we relate our definitions of anonymity to other definitions of information hiding, such as secrecy. We also give probabilistic definitions of anonymity that are able to quantify an observer's uncertainty about the state of the system. Finally, we relate our definitions of anonymity to other formalizations of anonymity and information hiding, including definitions of anonymity in the process algebra CSP and definitions of information hiding using function views.
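To convey the flavor of such definitions, here is a sketch in standard epistemic notation (my paraphrase, not a quotation from the paper): write θ(i, a) for "agent i performed action a", K_j for "observer j knows", and P_j for "observer j considers possible".

```latex
% Minimal anonymity: the observer j never knows that i performed a.
\mathcal{I} \models \neg K_j\,\theta(i,a)

% Anonymity up to a set I_A: whenever i performs a, the observer must
% consider every agent i' in I_A a possible performer of a.
\mathcal{I} \models \theta(i,a) \rightarrow \bigwedge_{i' \in I_A} P_j\,\theta(i',a)
```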
On the Economics of Anonymity
Financial Cryptography, Springer-Verlag, LNCS 2742, 2003
Cited by 94 (24 self)
Decentralized anonymity infrastructures are still not in wide use today. While there are technical barriers to a secure, robust design, our lack of understanding of the incentives to participate in such systems remains a major roadblock. Here we explore some reasons why anonymity systems are particularly hard to deploy, enumerate the incentives to participate as senders or as nodes, and build a general model to describe the effects of these incentives. We then describe and justify some simplifying assumptions to make the model manageable, and compare optimal strategies for participants based on a variety of scenarios.
Anonymity protocols as noisy channels
Information and Computation, 2006
Cited by 86 (27 self)
We propose a framework in which anonymity protocols are interpreted as particular kinds of channels, and the degree of anonymity provided by the protocol as the converse of the channel’s capacity. We also investigate how the adversary can test the system to try to infer the user’s identity, and we study how his probability of success depends on the characteristics of the channel. We then illustrate how various notions of anonymity can be expressed in this framework, and show the relation with some definitions of probabilistic anonymity in the literature.
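In sketch form, using standard information-theoretic notation (assumed here rather than quoted from the paper): the protocol is a channel whose secret input A ranges over user identities and whose output O ranges over the adversary's observables, with conditional probabilities p(o | a); the leakage per use of the protocol is then tied to the channel capacity.

```latex
C \;=\; \max_{p(a)} I(A;O)
  \;=\; \max_{p(a)} \sum_{a,\,o} p(a)\,p(o \mid a)\,
        \log_2 \frac{p(o \mid a)}{\sum_{a'} p(a')\,p(o \mid a')}
```

Capacity C = 0 holds exactly when every identity induces the same distribution on observables, i.e. perfect anonymity; larger C means the adversary can extract more information per observation.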
Anonymity Loves Company: Usability and the Network Effect
In Proceedings of the Fifth Workshop on the Economics of Information Security (WEIS 2006), 2006
Cited by 65 (9 self)
Other chapters in this book have talked about how usability impacts security. One class of security software is anonymizing networks—overlay networks on the Internet that provide privacy by letting users transact (for example, fetch a web page or send an email) without revealing their communication partners. In this chapter, we’ll focus on the network effects of usability on privacy and security: usability is a factor as before, but the size of the user base also becomes a factor. As we will see, in anonymizing networks, even if you were smart enough and had enough time to use every system perfectly, you would nevertheless be right to choose your system based in part on its usability for other users.
On the Anonymity of Periodic Location Samples
In Proceedings of the Second International Conference on Security in Pervasive Computing, 2005
Cited by 64 (5 self)
As Global Positioning System (GPS) receivers become a common feature in cell phones, personal digital assistants, and automobiles, there is a growing interest in tracking larger user populations, rather than individual users. Unfortunately, anonymous location samples do not fully solve the privacy problem. An adversary could link multiple samples (i.e., follow the footsteps) to accumulate path information and eventually identify a user. This paper reports on our ongoing work to analyze privacy risks in such applications. We observe that linking anonymous location samples is related to the data association problem in tracking systems. We then propose to use such tracking algorithms to characterize the level of privacy and to derive disclosure control algorithms.
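As a sketch of the data-association step alluded to here (greedy nearest-neighbor linking with an assumed distance threshold, not the authors' tracking algorithm): anonymous samples from the next reporting period are attached to whichever existing track ends closest to them.

```python
import math

def link_samples(tracks, new_samples, max_jump=50.0):
    """Greedily extend each track with the nearest unclaimed new sample.

    tracks:      list of lists of (x, y) points, one per hypothesized user
    new_samples: anonymous (x, y) samples from the next reporting period
    """
    unclaimed = list(new_samples)
    for track in tracks:
        if not unclaimed:
            break
        best = min(unclaimed, key=lambda s: math.dist(track[-1], s))
        if math.dist(track[-1], best) <= max_jump:  # plausible continuation
            track.append(best)
            unclaimed.remove(best)
    return tracks

# Two pseudonymous tracks; the next samples carry no identifiers, yet
# proximity links each one back to the user who produced it:
tracks = [[(0, 0), (10, 0)], [(100, 100), (100, 110)]]
link_samples(tracks, [(101, 121), (21, 1)])
print(tracks)  # [[(0, 0), (10, 0), (21, 1)],
               #  [(100, 100), (100, 110), (101, 121)]]
```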