Results 1 - 10 of 96
Fully homomorphic encryption using ideal lattices
 In Proc. STOC
, 2009
Abstract

Cited by 267 (11 self)
We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result – that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable – i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
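The additive and multiplicative homomorphisms and the noise-limited evaluation depth this abstract describes can be illustrated with a toy scheme. This is not Gentry's ideal-lattice construction; it is an integer-based stand-in (in the spirit of later over-the-integers work), with all parameters chosen purely for illustration:

```python
import random

# Toy "somewhat homomorphic" bit encryption. A bit is hidden in the parity of
# small noise; ciphertext addition XORs plaintexts and multiplication ANDs
# them, but only while the accumulated noise stays below P / 2 -- the same
# kind of depth limit the abstract mentions. NOT the paper's scheme.

P = 10000019  # secret odd key (toy size, assumed for illustration)

def encrypt(bit, noise_bound=50):
    r = random.randint(1, noise_bound)   # small noise, even part masks nothing
    q = random.randint(1, 1000)
    return bit + 2 * r + P * q           # bit lives in the parity of (c mod P)

def decrypt(c):
    return (c % P) % 2                   # strip multiples of P, then the noise

a, b = encrypt(1), encrypt(0)
assert decrypt(a + b) == 1 ^ 0           # addition acts as XOR
assert decrypt(a * b) == 1 & 0           # multiplication acts as AND
```

Each homomorphic operation grows the noise (multiplication roughly squares it), which is why such a scheme can only evaluate circuits of bounded depth before bootstrapping-style tricks are needed.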
Using Secure Coprocessors
, 1994
Abstract

Cited by 150 (8 self)
The views and conclusions in this document are those of the authors and do not necessarily represent the official policies or endorsements of any of the research sponsors. How do we build distributed systems that are secure? Cryptographic techniques can be used to secure the communications between physically separated systems, but this is not enough: we must be able to guarantee the privacy of the cryptographic keys and the integrity of the cryptographic functions, in addition to the integrity of the security kernel and access control databases we have on the machines. Physical security is a central assumption upon which secure distributed systems are built; without this foundation even the best cryptosystem or the most secure kernel will crumble. In this thesis, I address the distributed security problem by proposing the addition of a small, physically secure hardware module, a secure coprocessor, to standard workstations and PCs. My central axiom is that secure coprocessors are able to maintain the privacy of the data they process. This thesis attacks the distributed security problem from multiple sides. First, I analyze the security properties of existing system components, both at the hardware and
Designated Verifier Proofs and Their Applications
, 1996
Abstract

Cited by 134 (5 self)
For many proofs of knowledge it is important that only the verifier designated by the confirmer can obtain any conviction of the correctness of the proof. A good example of such a situation is for undeniable signatures, where the confirmer of a signature wants to make sure that only the intended verifier(s) in fact can be convinced about the validity or invalidity of the signature. Generally, authentication of messages and off-the-record messages are in conflict with each other. We show how, using designation of verifiers, these notions can be combined, allowing authenticated but private conversations to take place. Our solution guarantees that only the specified verifier can be convinced by the proof, even if he shares all his secret information with entities that want to get convinced. Our solution is based on trapdoor commitments [4], allowing the designated verifier to open up commitments in any way he wants. We demonstrate how a trapdoor commitment scheme can be used to constr...
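The trapdoor-commitment idea the abstract relies on can be sketched with a Pedersen-style chameleon commitment: the designated verifier, who knows the trapdoor, can open a commitment to any message, which is exactly why a proof convinces him alone. Parameters below are toy-sized and assumed for illustration; this is one possible instantiation, not necessarily the one in reference [4]:

```python
from random import randrange

# Chameleon (trapdoor) commitment sketch over the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup
x = randrange(1, q)              # trapdoor, known only to the designated verifier
h = pow(g, x, p)                 # public commitment key

def commit(m, r):
    # Standard Pedersen-style commitment: g^m * h^r mod p.
    return (pow(g, m, p) * pow(h, r, p)) % p

def equivocate(m, r, m_new):
    # With the trapdoor x, solve m + x*r = m_new + x*r_new (mod q) for r_new,
    # so the same commitment opens to a different message.
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 42, randrange(1, q)
c = commit(m, r)
r_new = equivocate(m, r, 7)
assert commit(7, r_new) == c     # designated verifier opens c to another message
```

Because the verifier could have forged any opening himself, a proof built on such commitments carries no conviction for third parties he shows it to.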
Privacy-preserving datamining on vertically partitioned databases
 In CRYPTO
, 2004
Abstract

Cited by 86 (25 self)
Abstract. In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sublinear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless. As databases grow increasingly large, the possibility of being able to query only a sublinear number of times becomes realistic. We further investigate this situation, generalizing the previous work in two important directions: multi-attribute databases (previous work dealt only with single-attribute databases) and vertically partitioned databases, in which different subsets of attributes are stored in different databases. In addition, we show how to use our techniques for datamining on published noisy statistics.
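The query-perturbation model the abstract builds on can be sketched in a few lines: a curator answers SUM queries over a private binary attribute with bounded additive noise. The noise magnitude below is an assumption chosen for illustration, not the bound proved in the paper:

```python
import random

# Toy noisy-sum curator for a single binary attribute. Each answer is the
# true subset sum perturbed by noise of bounded magnitude, so any single
# answer is accurate only up to that bound.
random.seed(0)
data = [random.randint(0, 1) for _ in range(10000)]   # private bits

def noisy_sum(subset_indices, noise_magnitude):
    true_answer = sum(data[i] for i in subset_indices)
    return true_answer + random.uniform(-noise_magnitude, noise_magnitude)

query = range(0, 5000)
answer = noisy_sum(query, noise_magnitude=50)
exact = sum(data[i] for i in query)
assert abs(answer - exact) <= 50   # accuracy degrades gracefully with the noise
```

The Dinur-Nissim result says that if an attacker may pose close to linearly many such queries, noise this small no longer protects the individual bits; restricting to sublinearly many queries is what makes small noise viable.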
Dyad: A System for Using Physically Secure Coprocessors
 Proceedings of the Joint Harvard-MIT Workshop on Technological Strategies for the Protection of Intellectual Property in the Network Multimedia Environment
, 1991
Abstract

Cited by 82 (1 self)
The Dyad project at Carnegie Mellon University is using physically secure coprocessors to achieve new protocols and systems addressing a number of perplexing security problems. These coprocessors can be produced as boards or integrated circuit chips and can be directly inserted in standard workstations or PC-style computers. This paper presents a set of security problems and easily implementable solutions that exploit the power of physically secure coprocessors: (1) protecting the integrity of publicly accessible workstations, (2) tamper-proof accounting/audit trails, (3) copy protection, and (4) electronic currency without centralized servers. We outline the architectural requirements for the use of secure coprocessors.
A Model for Secure Protocols and Their Compositions (Extended Abstract)
 IEEE Transactions on Software Engineering
, 1996
Abstract

Cited by 69 (2 self)
We give a formal model of protocol security. Our model allows us to reason about the security of protocols, and considers issues of beliefs of agents, time, and secrecy. We prove a composition theorem which allows us to state sufficient conditions on two secure protocols A and B such that they may be combined to form a new secure protocol C. Moreover, we give counterexamples to show that when the conditions are not met, the protocol C may not be secure. I. Introduction What does it mean for a protocol to be secure? How can we reason about secure protocols? If we combine two existing protocols into a common protocol, what can we say about the security of the new protocol? This paper develops a family of tools for reasoning about protocol security. We adopt a model-based approach for defining protocol security properties. This allows us to describe security properties in much greater detail and precision than previous frameworks for reasoning about protocol security. Some of the most a...
Another Look at “Provable Security”
, 2004
Abstract

Cited by 59 (12 self)
We give an informal analysis and critique of several typical “provable security” results. In some cases there are intuitive but convincing arguments for rejecting the conclusions suggested by the formal terminology and “proofs,” whereas in other cases the formalism seems to be consistent with common sense. We discuss the reasons why the search for mathematically convincing theoretical evidence to support the security of public-key systems has been an important theme of researchers. But we argue that the theorem-proof paradigm of theoretical mathematics is often of limited relevance here and frequently leads to papers that are confusing and misleading. Because our paper is aimed at the general mathematical public, it is self-contained and as jargon-free as possible.
How to make replicated data secure
 Advances in Cryptology  CRYPTO
, 1988
Abstract

Cited by 43 (1 self)
Many distributed systems manage some form of long-lived data, such as files or data bases. The performance and fault-tolerance of such systems may be enhanced if the repositories for the data are physically distributed. Nevertheless, distribution makes security more difficult, since it may be difficult to ensure that each repository is physically secure, particularly if the number of repositories is large. This paper proposes new techniques for ensuring the security of long-lived, physically distributed data. These techniques adapt replication protocols for fault-tolerance to the more demanding requirements of security. For a given threshold value, one set of protocols ensures that an adversary cannot ascertain the state of a data object by observing the contents of fewer than a threshold of repositories. These protocols are cheap; the message traffic needed to tolerate a given number of compromised repositories is only slightly more than the message traffic needed to tolerate the same number of failures. A second set of protocols ensures that an object’s state cannot be altered by an adversary who can modify the contents of fewer than a threshold of repositories. These protocols are more expensive; to tolerate t1 compromised repositories, clients executing certain operations must communicate with t1 additional sites.
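The threshold secrecy property described above (an adversary observing fewer than a threshold of repositories learns nothing) is the guarantee provided by threshold secret sharing. A minimal Shamir-style sketch, with a toy field size assumed for illustration and no claim that this is the paper's exact protocol:

```python
import random

# Shamir secret sharing over a small prime field: any t shares reveal
# nothing about the secret, while any t + 1 shares reconstruct it.
PRIME = 2087  # toy field size, assumed for illustration

def share(secret, t, n):
    # Random degree-t polynomial with constant term = secret; share i is
    # the polynomial evaluated at x = i.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(secret=1234, t=2, n=5)   # 5 repositories, any 3 suffice
assert reconstruct(shares[:3]) == 1234
assert reconstruct(shares[2:]) == 1234
```

Any two shares alone are consistent with every possible secret, which is the information-theoretic version of the "fewer than a threshold of repositories" guarantee.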
Advances in Cryptographic Voting Systems
, 2006
Abstract

Cited by 41 (1 self)
Democracy depends on the proper administration of popular elections. Voters should receive assurance that their intent was correctly captured and that all eligible votes were correctly tallied. The election system as a whole should ensure that voter coercion is unlikely, even when voters are willing to be influenced. These conflicting requirements present a significant challenge: how can voters receive enough assurance to trust the election result, but not so much that they can prove to a potential coercer how they voted? This dissertation explores cryptographic techniques for implementing verifiable, secret-ballot elections. We present the power of cryptographic voting, in particular its ability to successfully achieve both verifiability and ballot secrecy, a combination that cannot be achieved by other means. We review a large portion of the literature on cryptographic voting. We propose three novel technical ideas: 1. a simple and inexpensive paper-based cryptographic voting system with some interesting advantages over existing techniques, 2. a theoretical model of incoercibility for human voters with their inherent limited computational ability, and a new ballot casting system that fits the new definition, and
Simulatable auditing
 In PODS
, 2005
Abstract

Cited by 39 (5 self)
Given a data set consisting of private information about individuals, we consider the online query auditing problem: given a sequence of queries that have already been posed about the data, their corresponding answers – where each answer is either the true answer or “denied” (in the event that revealing the answer compromises privacy) – and given a new query, deny the answer if privacy may be breached or give the true answer otherwise. A related problem is the offline auditing problem where one is given a sequence of queries and all of their true answers and the goal is to determine if a privacy breach has already occurred. We uncover the fundamental issue that solutions to the offline auditing problem cannot be directly used to solve the online auditing problem since query denials may leak information. Consequently, we introduce a new model called simulatable auditing where query denials provably do not leak information. We demonstrate that max queries may be audited in this simulatable paradigm under the classical definition of privacy where a breach occurs if a sensitive value is fully compromised. We also introduce a probabilistic notion of (partial) compromise. Our privacy definition requires that the a priori probability that a sensitive value lies within some small interval is not that different from the posterior probability (given the query answers). We demonstrate that sum queries can be audited in a simulatable fashion under this privacy definition.
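The core idea of simulatable auditing (the deny/allow decision is computed from the query sets alone, never from the private values, so a denial cannot leak) can be sketched for SUM queries under the classical full-compromise definition. This is a simplified illustration, not the paper's algorithm for max or sum queries:

```python
from fractions import Fraction

# Toy simulatable auditor for SUM queries over N individuals: deny a new
# query exactly when answering it would fully determine some individual's
# value, i.e. when a unit vector lies in the span of the queries' indicator
# vectors. The decision never reads the private data, so it is simulatable.

N = 5  # number of individuals (toy size)

def rank(rows):
    # Row rank via Gaussian elimination over the rationals.
    rows = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for col in range(N):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def audit(answered, new_query):
    """Deny/allow using only the query sets, never the answers."""
    vecs = [[1 if i in q else 0 for i in range(N)]
            for q in answered + [new_query]]
    for i in range(N):
        unit = [1 if j == i else 0 for j in range(N)]
        if rank(vecs + [unit]) == rank(vecs):   # unit vector in the span
            return "deny"
    return "allow"

assert audit([], {0, 1, 2}) == "allow"
assert audit([{0, 1, 2}], {1, 2}) == "deny"   # would expose individual 0
```

Because an observer could run the same decision procedure himself from the public query sequence, a denial here reveals nothing beyond what the queries already did, which is exactly the property an offline auditor used online would fail to provide.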