Results 1 - 10 of 296
Evaluating 2-DNF formulas on ciphertexts
In Proceedings of TCC '05, LNCS series, 2005
"... Abstract. Let ψ be a 2-DNF formula on boolean variables x1,..., xn ∈ {0, 1}. We present a homomorphic public key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x1,..., xn. In other words, given the encryption of the bits x1,..., xn, anyone can create th ..."
Cited by 231 (7 self)
Let ψ be a 2-DNF formula on boolean variables x1, ..., xn ∈ {0, 1}. We present a homomorphic public key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x1, ..., xn. In other words, given the encryption of the bits x1, ..., xn, anyone can create the encryption of ψ(x1, ..., xn). More generally, we can evaluate quadratic multi-variate polynomials on ciphertexts provided the resulting value falls within a small set. We present a number of applications of the system: 1. In a database of size n, the total communication in the basic step of the Kushilevitz-Ostrovsky PIR protocol is reduced from √n to ∛n. 2. An efficient election system based on homomorphic encryption where voters do not need to include non-interactive zero knowledge proofs that their ballots are valid. The election system is proved secure without random oracles but still efficient. 3. A protocol for universally verifiable computation.
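To make the reduction in the abstract concrete: any 2-DNF can be arithmetized as a quadratic polynomial whose value falls in the small range {0, ..., m}, with m the number of clauses. The sketch below is a plaintext Python illustration of that arithmetization only (hypothetical helper names, no cryptography): each two-literal clause becomes a product, the products are summed, and ψ is true exactly when the sum is nonzero, which is the kind of quadratic, small-range computation the scheme can evaluate on ciphertexts.

```python
# Plaintext illustration of the 2-DNF -> quadratic polynomial reduction.
# Sketch only: no encryption here, and the helper names are illustrative.

from typing import List, Tuple

# A literal is (variable index, negated?); a clause is a pair of literals.
Literal = Tuple[int, bool]
Clause = Tuple[Literal, Literal]

def literal_value(x: List[int], lit: Literal) -> int:
    idx, negated = lit
    return 1 - x[idx] if negated else x[idx]

def quadratic_value(x: List[int], clauses: List[Clause]) -> int:
    # Sum over clauses of the product of its two literals. Each term is a
    # degree-2 monomial, so the whole expression is a quadratic polynomial
    # whose value lies in {0, ..., number of clauses}.
    return sum(literal_value(x, a) * literal_value(x, b) for a, b in clauses)

def eval_2dnf(x: List[int], clauses: List[Clause]) -> int:
    # psi(x) = 1 iff at least one clause is satisfied iff the sum is nonzero.
    return 1 if quadratic_value(x, clauses) > 0 else 0

if __name__ == "__main__":
    # psi = (x0 AND x1) OR (NOT x2 AND x3)
    clauses = [((0, False), (1, False)), ((2, True), (3, False))]
    print(eval_2dnf([1, 1, 0, 0], clauses))  # 1: first clause satisfied
    print(eval_2dnf([0, 1, 1, 1], clauses))  # 0: no clause satisfied
```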
Privacy-preserving set operations
In Advances in Cryptology - CRYPTO 2005, LNCS, 2005
"... In many important applications, a collection of mutually distrustful parties must perform private computation over multisets. Each party’s input to the function is his private input multiset. In order to protect these private sets, the players perform privacy-preserving computation; that is, no part ..."
Cited by 161 (0 self)
In many important applications, a collection of mutually distrustful parties must perform private computation over multisets. Each party’s input to the function is his private input multiset. In order to protect these private sets, the players perform privacy-preserving computation; that is, no party learns more information about other parties’ private input sets than what can be deduced from the result. In this paper, we propose efficient techniques for privacy-preserving operations on multisets. By employing the mathematical properties of polynomials, we build a framework of efficient, secure, and composable multiset operations: the union, intersection, and element reduction operations. We apply these techniques to a wide range of practical problems, achieving more efficient results than those of previous work.
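The core encoding is simple to state: represent a multiset by the polynomial whose roots, with multiplicity, are its elements; the union of two multisets then corresponds to multiplying their polynomials. The plaintext sketch below (illustrative helper names) shows only that correspondence; in the actual protocols the coefficients are manipulated under homomorphic encryption, and intersection and element reduction require additional randomization not shown here.

```python
# Multisets as polynomials: S = {s1, ..., sn} (with multiplicity) is encoded
# as f(x) = (x - s1)(x - s2)...(x - sn); union corresponds to polynomial
# multiplication. Plaintext sketch only.

from collections import Counter
from typing import List

def multiset_to_poly(elements: List[int]) -> List[int]:
    # Coefficients in increasing degree order; the empty multiset is the
    # constant polynomial 1.
    poly = [1]
    for s in elements:
        new = [0] * (len(poly) + 1)        # multiply poly by (x - s)
        for i, c in enumerate(poly):
            new[i] -= c * s                # constant-term contribution of (x - s)
            new[i + 1] += c                # x contribution of (x - s)
        poly = new
    return poly

def poly_mul(f: List[int], g: List[int]) -> List[int]:
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

if __name__ == "__main__":
    a, b = [2, 3, 3], [3, 5]
    union_poly = poly_mul(multiset_to_poly(a), multiset_to_poly(b))
    # The union multiset is exactly the roots (with multiplicity) of the product.
    assert union_poly == multiset_to_poly(sorted(Counter(a + b).elements()))
    print(union_poly)
```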
Secure multiparty computation of approximations
2001
"... Approximation algorithms can sometimes provide efficient solutions when no efficient exact computation is known. In particular, approximations are often useful in a distributed setting where the inputs are held by different parties and may be extremely large. Furthermore, for some applications, the ..."
Cited by 108 (25 self)
Approximation algorithms can sometimes provide efficient solutions when no efficient exact computation is known. In particular, approximations are often useful in a distributed setting where the inputs are held by different parties and may be extremely large. Furthermore, for some applications, the parties want to compute a function of their inputs securely, without revealing more information than necessary. In this work we study the question of simultaneously addressing the above efficiency and security concerns via what we call secure approximations. We start by extending standard definitions of secure (exact) computation to the setting of secure approximations. Our definitions guarantee that no additional information is revealed by the approximation beyond what follows from the output of the function being approximated. We then study the complexity of specific secure approximation problems. In particular, we obtain a sublinear-communication protocol for securely approximating the Hamming distance and a polynomial-time protocol for securely approximating the permanent and related #P-hard problems.
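As a point of reference for the functionality being secured, the sketch below shows the standard non-private sampling estimator for Hamming distance: compare the strings on a few random coordinates and rescale. It illustrates only the approximation target; the paper's contribution is computing such estimates securely, with sublinear communication and without revealing anything beyond the output.

```python
# Non-private sketch of a sampling-based Hamming distance estimator. The
# secure protocols compute this kind of estimate without revealing the
# parties' inputs; nothing cryptographic is shown here.

import random
from typing import Sequence

def approx_hamming(x: Sequence[int], y: Sequence[int], samples: int = 256) -> float:
    assert len(x) == len(y)
    n = len(x)
    idx = [random.randrange(n) for _ in range(samples)]    # sample with replacement
    mismatches = sum(1 for i in idx if x[i] != y[i])
    return mismatches * n / samples                        # rescale to [0, n]

if __name__ == "__main__":
    n = 10_000
    x = [random.randint(0, 1) for _ in range(n)]
    y = [b if random.random() < 0.9 else 1 - b for b in x]  # ~10% of bits flipped
    exact = sum(a != b for a, b in zip(x, y))
    print(exact, approx_hamming(x, y))
```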
Secure Multiparty Computation for Privacy-Preserving Data Mining
2008
"... In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demon ..."
Cited by 92 (0 self)
In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.
TASTY: Tool for Automating Secure Two-partY computations
In ACM Conference on Computer and Communications Security (ACM CCS '10), 2010
"... Secure two-party computation allows two untrusting parties to jointly compute an arbitrary function on their respective private inputs while revealing no information beyond the outcome. Existing cryptographic compilers can automatically generate secure computation protocols from high-level specifica ..."
Cited by 89 (7 self)
Secure two-party computation allows two untrusting parties to jointly compute an arbitrary function on their respective private inputs while revealing no information beyond the outcome. Existing cryptographic compilers can automatically generate secure computation protocols from high-level specifications, but are often limited in their use and in the efficiency of the generated protocols, as they are based on either garbled circuits or (additively) homomorphic encryption only. In this paper we present TASTY, a novel tool for automating, i.e., describing, generating, executing, benchmarking, and comparing, efficient secure two-party computation protocols. TASTY is a new compiler that can generate protocols based on homomorphic encryption and efficient garbled circuits, as well as combinations of both, which often yields the most efficient protocols available today. The user provides a high-level description of the computations to be performed on encrypted data in a domain-specific language. This is automatically transformed into a protocol. TASTY provides the most recent techniques and optimizations for practical secure two-party computation with low online latency. Moreover, it allows circuits generated by the well-known Fairplay compiler to be evaluated efficiently. We use TASTY to compare protocols for secure multiplication based on homomorphic encryption with those based on garbled circuits and highly efficient Karatsuba multiplication. Further, we show how TASTY improves the online latency for securely evaluating the AES functionality by an order of magnitude compared to previous software implementations. TASTY can automatically generate efficient secure protocols for many privacy-preserving applications; as use cases, we consider private set intersection and face recognition protocols.
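Since the abstract's benchmark involves Karatsuba multiplication, here is a minimal plaintext recursion showing the trick that makes it attractive inside circuit-based protocols: one of the four sub-multiplications of the high and low halves is traded for cheap additions. This is an ordinary integer Karatsuba sketch, not TASTY's circuit generator or its domain-specific language.

```python
# Plaintext Karatsuba multiplication (sketch). In a circuit setting the same
# recursion reduces the number of costly sub-multiplications from four to
# three per level, which is why compilers like TASTY favor it.

def karatsuba(x: int, y: int) -> int:
    if x < 16 or y < 16:                 # small base case
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)  # split into high/low halves
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                # high * high
    b = karatsuba(xl, yl)                # low * low
    c = karatsuba(xh + xl, yh + yl)      # (high + low) * (high + low)
    # x*y = a*2^(2m) + (c - a - b)*2^m + b
    return (a << (2 * m)) + ((c - a - b) << m) + b

if __name__ == "__main__":
    import random
    for _ in range(100):
        x, y = random.getrandbits(64), random.getrandbits(64)
        assert karatsuba(x, y) == x * y
```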
Re: Reliable email
In Proc. NSDI, 2006
"... The explosive growth in unwanted email has prompted the development of techniques for the rejection of email, intended to shield recipients from the onerous task of identifying the legitimate email in their inboxes amid a sea of spam. Unfortunately, widely used contentbased filtering systems have co ..."
Cited by 84 (3 self)
The explosive growth in unwanted email has prompted the development of techniques for the rejection of email, intended to shield recipients from the onerous task of identifying the legitimate email in their inboxes amid a sea of spam. Unfortunately, widely used content-based filtering systems have converted the spam problem into a false positive one: email has become unreliable. Email acceptance techniques complement rejection ones; they can help prevent false positives by filing email into a user’s inbox before it is considered for rejection. Whitelisting, whereby recipients accept email from some set of authorized senders, is one such acceptance technique. We present Reliable Email (RE:), a new whitelisting system that incurs zero false positives among socially connected users. Unlike previous whitelisting systems, which require that whitelists be populated manually, RE: exploits friend-of-friend relationships among email correspondents to populate whitelists automatically. To do so, RE: permits an email’s recipient to discover whether other email users have whitelisted the email’s sender, while preserving the privacy of users’ email contacts with cryptographic private matching techniques. Using real email traces from two sites, we demonstrate that RE: renders a significant fraction of received email reliable. Our evaluation also shows that RE: can prevent up to 88% of the false positives incurred by a widely deployed email rejection system, at modest computational cost.
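Setting the cryptography aside, the acceptance decision RE: automates can be sketched in a few lines: accept a message if the sender is whitelisted by the recipient or by one of the recipient's whitelisted friends. The non-private sketch below (illustrative names) shows that friend-of-friend rule only; RE: computes the same decision while hiding each user's contact list behind private matching.

```python
# Non-private sketch of friend-of-friend whitelisting. RE: reaches the same
# decision while keeping each user's contacts private via cryptographic
# private matching; none of that machinery appears here.

from typing import Dict, Set

def accepts(whitelists: Dict[str, Set[str]], recipient: str, sender: str) -> bool:
    direct = whitelists.get(recipient, set())
    if sender in direct:
        return True                      # directly whitelisted
    # Friend-of-friend: some friend of the recipient has whitelisted the sender.
    return any(sender in whitelists.get(friend, set()) for friend in direct)

if __name__ == "__main__":
    wl = {"alice": {"bob"}, "bob": {"carol"}}
    print(accepts(wl, "alice", "carol"))  # True, via bob
    print(accepts(wl, "alice", "dave"))   # False
```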
On private scalar product computation for privacy-preserving data mining
In Proceedings of the 7th Annual International Conference on Information Security and Cryptology, 2004
"... Abstract. In mining and integrating data from multiple sources, there are many privacy and security issues. In several different contexts, the security of the full privacy-preserving data mining protocol depends on the security of the underlying private scalar product protocol. We show that two of t ..."
Cited by 77 (4 self)
In mining and integrating data from multiple sources, there are many privacy and security issues. In several different contexts, the security of the full privacy-preserving data mining protocol depends on the security of the underlying private scalar product protocol. We show that two of the private scalar product protocols, one of which was proposed in a leading data mining conference, are insecure. We then describe a provably private scalar product protocol that is based on homomorphic encryption and improve its efficiency so that it can also be used on massive datasets.
Keywords: privacy-preserving data mining, private scalar product protocol, vertically partitioned frequent pattern mining
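The provably private protocol mentioned in the abstract follows the standard additively homomorphic pattern: Alice sends encryptions of her vector, Bob raises each ciphertext to his corresponding entry and multiplies the results, and Alice decrypts an encryption of the scalar product. The sketch below instantiates that generic pattern with a toy Paillier scheme (tiny hard-coded primes, no efficiency improvements); it is an illustration of the approach, not the paper's optimized protocol.

```python
# Toy Paillier-based private scalar product (illustration only: tiny
# hard-coded primes, no padding or validation, and a generic additively
# homomorphic pattern rather than the paper's optimized protocol).

import math
import random

p, q = 1789, 1861                      # toy primes; real keys use large random primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                   # valid decryption constant because g = n + 1

def enc(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:        # r must be a unit mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

# Protocol sketch: Alice learns <x, y>; Bob never sees x in the clear.
def alice_send(x):
    return [enc(xi) for xi in x]          # Alice encrypts her vector

def bob_reply(ciphertexts, y):
    acc = enc(0)                          # fresh encryption re-randomizes the result
    for c, yi in zip(ciphertexts, y):
        acc = (acc * pow(c, yi, n2)) % n2 # homomorphically adds x_i * y_i
    return acc

if __name__ == "__main__":
    x, y = [3, 1, 4, 1, 5], [2, 7, 1, 8, 2]
    result = dec(bob_reply(alice_send(x), y))
    assert result == sum(a * b for a, b in zip(x, y))
    print(result)                         # 41
```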
Keyword search and oblivious pseudorandom functions
2005
"... We study the problem of privacy-preserving access to a database. Particularly, we consider the problem of privacy-preserving keyword search (KS), where records in the database are accessed according to their associated keywords and where we care for the privacy of both the client and the server. W ..."
Cited by 64 (5 self)
We study the problem of privacy-preserving access to a database. Particularly, we consider the problem of privacy-preserving keyword search (KS), where records in the database are accessed according to their associated keywords and where we care for the privacy of both the client and the server. We provide efficient solutions for various settings of KS, based either on specific assumptions or on general primitives (mainly oblivious transfer). Our general solutions rely on a new connection between KS and the oblivious evaluation of pseudorandom functions (OPRFs). We therefore study both the definition and construction of OPRFs and, as a corollary, give improved constructions of OPRFs that may be of independent interest.
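One well-known way to realize an OPRF, useful for seeing the keyword-search connection, is blinded Diffie-Hellman exponentiation: the client hashes its keyword into a group, blinds it with a random exponent, the server applies its secret key, and the client unblinds to obtain F_k(w) while the server never sees w. The toy sketch below uses an insecure 11-bit group and is offered only as an illustration of that construction, not necessarily the constructions given in the paper.

```python
# Blinded-DH OPRF sketch (a standard construction used here to illustrate the
# keyword-search/OPRF connection; toy 11-bit group, insecure, illustration only).

import hashlib
import random

# Safe prime p = 2q + 1; we work in the order-q subgroup of quadratic residues.
p, q = 2039, 1019

def hash_to_group(w: str) -> int:
    h = int.from_bytes(hashlib.sha256(w.encode()).digest(), "big") % p
    return pow(h, 2, p)                    # squaring maps into the QR subgroup

# Client: blind the hashed keyword with a random exponent r.
def blind(w: str):
    r = random.randrange(1, q)
    return pow(hash_to_group(w), r, p), r

# Server: apply its secret PRF key k to the blinded element.
def evaluate(blinded: int, k: int) -> int:
    return pow(blinded, k, p)

# Client: unblind to obtain F_k(w) = H(w)^k.
def unblind(evaluated: int, r: int) -> int:
    return pow(evaluated, pow(r, -1, q), p)

if __name__ == "__main__":
    k = random.randrange(1, q)             # server's secret key
    blinded, r = blind("keyword")
    fkw = unblind(evaluate(blinded, k), r)
    assert fkw == pow(hash_to_group("keyword"), k, p)  # matches direct evaluation
    print(fkw)
```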
Approximation Algorithms for k-Anonymity
Journal of Privacy Technology, 2005
"... We consider the problem of releasing a table containing personal records, while ensuring individual privacy and maintaining data integrity to the extent possible. One of the techniques proposed in the literature is k-anonymization. A release is considered k-anonymous if the information corresponding ..."
Cited by 62 (5 self)
We consider the problem of releasing a table containing personal records, while ensuring individual privacy and maintaining data integrity to the extent possible. One of the techniques proposed in the literature is k-anonymization. A release is considered k-anonymous if the information corresponding to any individual in the release cannot be distinguished from that of at least k − 1 other individuals whose information also appears in the release. In order to achieve k-anonymization, some of the entries of the table are either suppressed or generalized (e.g. an Age value of 23 could be changed to the Age range 20-25). The goal is to lose as little information as possible while ensuring that the release is k-anonymous. This optimization problem is referred to as the k-Anonymity problem. We show that the k-Anonymity problem is NP-hard even when the attribute values are ternary and we are allowed only to suppress entries. On the positive side, we provide an O(k)-approximation algorithm for the problem. We also give improved positive results for the interesting cases with specific values of k — in particular, we give a 1.5-approximation algorithm for the special case of 2-Anonymity, and a 2-approximation algorithm for 3-Anonymity.
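The property being optimized is mechanical to verify: after suppression and generalization, every combination of quasi-identifier values must appear at least k times in the release. A small illustrative checker follows (the paper's subject is the hardness and approximability of producing such a release with minimal information loss, not checking one).

```python
# Check whether a released table is k-anonymous with respect to a set of
# quasi-identifier columns (illustrative helper only).

from collections import Counter
from typing import Dict, List, Sequence

def is_k_anonymous(rows: List[Dict[str, str]], quasi_ids: Sequence[str], k: int) -> bool:
    # Group rows by their quasi-identifier tuple; every group must have size >= k.
    groups = Counter(tuple(row[c] for c in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

if __name__ == "__main__":
    release = [
        {"Age": "20-25", "Zip": "940**", "Disease": "flu"},
        {"Age": "20-25", "Zip": "940**", "Disease": "cold"},
        {"Age": "30-35", "Zip": "941**", "Disease": "flu"},
        {"Age": "30-35", "Zip": "941**", "Disease": "asthma"},
    ]
    print(is_k_anonymous(release, ["Age", "Zip"], k=2))  # True
    print(is_k_anonymous(release, ["Age", "Zip"], k=3))  # False
```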
Two can keep a secret: A distributed architecture for secure database services
In Proc. CIDR, 2005
"... Recent trends towards database outsourcing, as well as concerns and laws governing data privacy, have led to great interest in enabling secure database services. Previous approaches to enabling such a service have been based on data encryption, causing a large overhead in query processing. We propos ..."
Cited by 57 (7 self)
Recent trends towards database outsourcing, as well as concerns and laws governing data privacy, have led to great interest in enabling secure database services. Previous approaches to enabling such a service have been based on data encryption, causing a large overhead in query processing. We propose a new, distributed architecture that allows an organization to outsource its data management to two untrusted servers while preserving data privacy. We show how the presence of two servers enables efficient partitioning of data so that the contents at any one server are guaranteed not to breach data privacy. We show how to optimize and execute queries in this architecture, and discuss new challenges that emerge in designing the database schema.
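To see why two non-colluding servers help, consider the simplest possible split: store random-looking shares of a sensitive attribute on each server so that neither server alone learns anything, while a client holding both shares can reconstruct the value. The XOR-sharing sketch below is a generic illustration under that assumption, not the paper's partitioning scheme, which is designed so that queries can still be optimized and executed over the partitioned data.

```python
# Illustrative two-server split of a sensitive attribute via XOR secret
# sharing: each server stores one share, and neither share alone reveals
# the value. Generic illustration only, not the paper's partitioning scheme.

import secrets

def split(value: bytes):
    share1 = secrets.token_bytes(len(value))              # uniformly random share
    share2 = bytes(a ^ b for a, b in zip(value, share1))  # value XOR share1
    return share1, share2

def reconstruct(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

if __name__ == "__main__":
    ssn = b"123-45-6789"
    s1, s2 = split(ssn)            # s1 goes to server A, s2 to server B
    assert reconstruct(s1, s2) == ssn
    print(s1.hex(), s2.hex())
```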