Results 1–10 of 13
Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers
, 2009
"... Verifiable Computation enables a computationally weak client to “outsource ” the computation of a function F on various inputs x1,...,xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out co ..."
Abstract

Cited by 97 (10 self)
Verifiable Computation enables a computationally weak client to “outsource” the computation of a function F on various inputs x1, ..., xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The verification of the proof should require substantially less computational effort than computing F(xi) from scratch. We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(m) time, where m is the bit-length of the output of F. The protocol requires a one-time preprocessing stage by the client which takes O(|C|) time, where C is the smallest Boolean circuit computing F. Our scheme also provides input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values.
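The defining property above, that verification costs substantially less than recomputation, can be illustrated with a classical standalone example (not this paper's protocol): Freivalds' probabilistic check of an outsourced matrix product, which verifies in O(n^2) per repetition what would cost O(n^3) to recompute.

```python
import random

def freivalds_verify(A, B, C, reps=20):
    """Probabilistically check that A @ B == C using O(n^2) work per
    repetition, versus O(n^3) to recompute the product from scratch."""
    n = len(A)
    for _ in range(reps):
        r = [random.randint(0, 1) for _ in range(n)]
        # Evaluate A(Br) and Cr via cheap matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught an incorrect result
    return True  # correct with probability at least 1 - 2**-reps
```

A wrong answer slips past each repetition with probability at most 1/2, so a handful of repetitions makes the check overwhelmingly reliable.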
Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages
 in Advances in Cryptology—CRYPTO 2011, Lect. Notes in Comp. Sci. 6841 (2011)
"... Abstract. We present a somewhat homomorphic encryption scheme that is both very simple to describe and analyze, and whose security (quantumly) reduces to the worstcase hardness of problems on ideal lattices. We then transform it into a fully homomorphic encryption scheme using standard “squashing ” ..."
Abstract

Cited by 27 (1 self)
Abstract. We present a somewhat homomorphic encryption scheme that is both very simple to describe and analyze, and whose security (quantumly) reduces to the worst-case hardness of problems on ideal lattices. We then transform it into a fully homomorphic encryption scheme using standard “squashing” and “bootstrapping” techniques introduced by Gentry (STOC 2009). One of the obstacles in going from “somewhat” to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, that the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step forward towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key. Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a, b = a·s + 2e), where s, a, e are all degree-(n − 1) integer polynomials whose coefficients are independently drawn from easy-to-sample distributions.
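A minimal sketch of this style of scheme, with toy parameters of my own choosing: arithmetic in Z_q[x]/(x^n + 1), a small secret and small noise, and a bit hidden in the parity of the noise term (b = a·s + 2e + m). This is a symmetric-key illustration of the structure described in the abstract, not the paper's exact scheme.

```python
import random

n, q = 16, 12289  # toy ring parameters: work in R_q = Z_q[x]/(x^n + 1)

def poly_mul(a, b):
    """Multiply two polynomials modulo x^n + 1 and q (negacyclic wrap)."""
    res = [0] * n
    for i in range(n):
        for j in range(n):
            if i + j < n:
                res[i + j] = (res[i + j] + a[i] * b[j]) % q
            else:
                res[i + j - n] = (res[i + j - n] - a[i] * b[j]) % q  # x^n = -1
    return res

def small_poly():
    """Polynomial with coefficients in {-1, 0, 1}, represented mod q."""
    return [random.choice([q - 1, 0, 1]) for _ in range(n)]

def keygen():
    return small_poly()  # small secret key s

def encrypt(s, m):
    """Encrypt a bit m as (a, a*s + 2e + m) with fresh uniform a, small e."""
    a = [random.randrange(q) for _ in range(n)]
    e = small_poly()
    as_ = poly_mul(a, s)
    b = [(as_[i] + 2 * e[i] + (m if i == 0 else 0)) % q for i in range(n)]
    return (a, b)

def decrypt(s, ct):
    """b - a*s = 2e + m has small coefficients; parity of the constant
    term recovers m, as long as the noise stays below q/2."""
    a, b = ct
    d0 = (b[0] - poly_mul(a, s)[0]) % q
    if d0 > q // 2:
        d0 -= q  # centred representative, making the small noise visible
    return d0 % 2

def add_ct(c1, c2):
    """Coefficient-wise sum of ciphertexts encrypts the XOR of the bits."""
    return ([(x + y) % q for x, y in zip(c1[0], c2[0])],
            [(x + y) % q for x, y in zip(c1[1], c2[1])])
```

The additive homomorphism works because the noise terms simply add; full homomorphism additionally needs multiplication and the squashing/bootstrapping machinery the abstract mentions.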
Semantic security under related-key attacks and applications
, 2011
"... In a relatedkey attack (RKA) an adversary attempts to break a cryptographic primitive by invoking the primitive with several secret keys which satisfy some known, or even chosen, relation. We initiate a formal study of RKA security for randomized encryption schemes. We begin by providing general de ..."
Abstract

Cited by 13 (2 self)
In a related-key attack (RKA) an adversary attempts to break a cryptographic primitive by invoking the primitive with several secret keys which satisfy some known, or even chosen, relation. We initiate a formal study of RKA security for randomized encryption schemes. We begin by providing general definitions for semantic security under passive and active RKAs. We then focus on RKAs in which the keys satisfy known linear relations over some Abelian group. We construct simple and efficient schemes which resist such RKAs even when the adversary can choose the linear relation adaptively during the attack. More concretely, we present two approaches for constructing RKA-secure encryption schemes. The first is based on standard randomized encryption schemes which additionally satisfy a natural “key-homomorphism” property. We instantiate this approach under number-theoretic or lattice-based assumptions such as the Decisional Diffie-Hellman (DDH) assumption and the Learning Noisy Linear Equations assumption. Our second approach is based on RKA-secure pseudorandom generators. This approach can yield either deterministic, one-time-use schemes with optimal ciphertext size or randomized unlimited-use schemes. We instantiate this approach by constructing a simple RKA-secure pseudorandom generator.
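The “key-homomorphism” property can be seen in a deliberately insecure toy scheme (my own illustration, not one of the paper's constructions): keys live in Z_q, and anyone can shift a ciphertext from key k to key k + Δ without knowing k, which is exactly the linear structure such constructions exploit.

```python
import random

q = 2**61 - 1  # a prime modulus (toy choice)

def enc(k, m):
    """Toy linear 'encryption' (NOT secure): c = m + k*r mod q."""
    r = random.randrange(1, q)
    return (r, (m + k * r) % q)

def dec(k, ct):
    r, c = ct
    return (c - k * r) % q

def shift_key(ct, delta):
    """Map a ciphertext under key k to one under key k + delta,
    without knowing k: the scheme is key-homomorphic."""
    r, c = ct
    return (r, (c + delta * r) % q)
```

The point is only the homomorphism: (c + Δr) − (k + Δ)r = c − kr, so decryption under the shifted key still recovers m.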
Careful with composition: Limitations of the indifferentiability framework
 EUROCRYPT 2011, volume 6632 of LNCS
, 2011
"... We exhibit a hashbased storage auditing scheme which is provably secure in the randomoracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any ..."
Abstract

Cited by 11 (1 self)
We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any cryptosystem. We characterize the uncovered limitation of the indifferentiability framework by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function non-malleability, key-dependent message security, and more. We formalize a stronger notion, reset indifferentiability, that enables an indifferentiability-style composition theorem covering such multi-stage security notions, but then show that practical hash constructions cannot be reset indifferentiable. We discuss how these limitations also affect the universal composability framework. We finish by showing the chosen-distribution-attack security (which requires a multi-stage game) of some important public-key encryption schemes built using a hash construction paradigm introduced by Dodis, Ristenpart, and Shrimpton.
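This class of counterexample can be reproduced in miniature. Below, a toy Merkle-Damgård hash (no length padding, a simplifying assumption; all names and parameters are mine) backs an audit of the form "return H(M ‖ challenge)": a cheating server that keeps only the 32-byte chaining state after absorbing M answers every challenge correctly without storing M at all, which a random oracle would not permit.

```python
import hashlib

BLOCK = 32

def compress(state, block):
    """Toy compression function built from SHA-256."""
    return hashlib.sha256(state + block).digest()

def md_hash(msg, state=b'\x00' * 32):
    """Toy Merkle-Damgard: chain the compression function over
    fixed-size blocks (no length strengthening, for simplicity)."""
    if len(msg) % BLOCK:
        msg += b'\x00' * (BLOCK - len(msg) % BLOCK)
    for i in range(0, len(msg), BLOCK):
        state = compress(state, msg[i:i + BLOCK])
    return state

file_M = b'A' * (BLOCK * 100)  # a 3200-byte file the server should retain

# Cheating server: keep only the 32-byte chaining state after M.
state_after_M = md_hash(file_M)

def honest_response(challenge):
    return md_hash(file_M + challenge)

def cheating_response(challenge):
    # Continue the chain from the stored state; M itself is not needed.
    return md_hash(challenge, state=state_after_M)
```

Because the audit has two stages (commit to storing M, then answer challenges), the single-stage indifferentiability composition theorem does not apply, which is precisely the gap the paper identifies.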
How to garble arithmetic circuits
 In Symposium on Foundations of Computer Science (FOCS ’11)
, 2011
"... Yao’s garbled circuit construction transforms a boolean circuit C: {0, 1} n → {0, 1} m into a “garbled circuit ” Ĉ along with n pairs of kbit keys, one for each input bit, such that Ĉ together with the n keys corresponding to an input x reveal C(x) and no additional information about x. The garbled ..."
Abstract

Cited by 6 (2 self)
Yao’s garbled circuit construction transforms a boolean circuit C: {0,1}^n → {0,1}^m into a “garbled circuit” Ĉ along with n pairs of k-bit keys, one for each input bit, such that Ĉ together with the n keys corresponding to an input x reveal C(x) and no additional information about x. The garbled circuit construction is a central tool for constant-round secure computation and has several other applications. Motivated by these applications, we suggest an efficient arithmetic variant of Yao’s original construction. Our construction transforms an arithmetic circuit C: Z^n → Z^m over integers from a bounded (but possibly exponential) range into a garbled circuit Ĉ along with n affine functions L_i: Z → Z^k such that Ĉ together with the n integer vectors L_i(x_i) reveal C(x) and no additional information about x. The security of our construction relies on the intractability of the learning with errors (LWE) problem.
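A minimal sketch of the boolean construction for a single AND gate. The hash-based row encryption and the zero-tag marking the valid row are simplifying choices of mine (real constructions use point-and-permute and proper encryption): the evaluator, holding one label per input wire, recovers exactly the output label for the gate's output bit and nothing else.

```python
import hashlib
import os
import random

KLEN = 16  # label length in bytes

def H(ka, kb):
    """Derive a 32-byte pad from a pair of wire labels."""
    return hashlib.sha256(ka + kb).digest()

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def garble_and_gate():
    """Garble one AND gate: two random labels per wire, four
    encrypted rows, shuffled so row order reveals nothing."""
    wa = [os.urandom(KLEN) for _ in range(2)]  # labels for input a = 0, 1
    wb = [os.urandom(KLEN) for _ in range(2)]  # labels for input b = 0, 1
    wo = [os.urandom(KLEN) for _ in range(2)]  # labels for the output
    table = []
    for a in (0, 1):
        for b in (0, 1):
            plaintext = wo[a & b] + b'\x00' * KLEN  # zero tag marks validity
            table.append(xor(H(wa[a], wb[b]), plaintext))
    random.shuffle(table)
    return wa, wb, wo, table

def eval_gate(ka, kb, table):
    """Given one label per input wire, exactly one row decrypts to a
    plaintext ending in the zero tag; return its output label."""
    for row in table:
        pt = xor(H(ka, kb), row)
        if pt[KLEN:] == b'\x00' * KLEN:
            return pt[:KLEN]
    raise ValueError('no row decrypted')
```

Chaining such gates, with output labels feeding later gates as input labels, yields the full circuit construction the abstract describes.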
Circular and KDM security for identity-based encryption
 In Public Key Cryptography
, 2012
"... We initiate the study of security for keydependent messages (KDM), sometimes also known as “circular ” or “clique ” security, in the setting of identitybased encryption (IBE). Circular/KDM security requires that ciphertexts preserve secrecy even when they encrypt messages that may depend on the se ..."
Abstract

Cited by 3 (1 self)
We initiate the study of security for key-dependent messages (KDM), sometimes also known as “circular” or “clique” security, in the setting of identity-based encryption (IBE). Circular/KDM security requires that ciphertexts preserve secrecy even when they encrypt messages that may depend on the secret keys, and arises in natural usage scenarios for IBE. We construct an IBE system that is circular secure for affine functions of users’ secret keys, based on the learning with errors (LWE) problem (and hence on worst-case lattice problems). The scheme is secure in the standard model, under a natural extension of a selective-identity attack. Our three main technical contributions are (1) showing the circular/KDM security of a “dual”-style LWE public-key cryptosystem, (2) proving the hardness of a version of the “extended LWE” problem due to O’Neill, Peikert and Waters (CRYPTO ’11), and (3) building an IBE scheme around the dual-style system using a novel lattice-based “all-but-d” trapdoor function.
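Why circular security must be proven (or assumed) separately from ordinary security is easiest to see with the one-time pad, a standard folklore example and not from this paper: perfectly secret for key-independent messages, yet encrypting the key under itself outputs all zeros and destroys all secrecy.

```python
import os

def otp_enc(key, msg):
    """One-time pad: perfectly secret when msg is independent of key."""
    return bytes(k ^ m for k, m in zip(key, msg))

key = os.urandom(16)
ct = otp_enc(key, key)  # a key-dependent message: m = key itself
# ct is all zeros for every key, so the ciphertext alone tells the
# adversary the plaintext was the key: no secrecy remains.
```

KDM security for a useful function class (here, affine functions of the secret keys) therefore needs constructions whose proofs explicitly handle such messages.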
Encoding Functions with Constant Online Rate or How to Compress Garbled Circuits Keys
, 2013
"... Randomized encodings of functions can be used to replace a “complex ” function f(x) by a “simpler ” randomized mapping ˆ f(x; r) whose output distribution on an input x encodes the value of f(x) and hides any other information. One desirable feature of randomized encodings is low online complexity. ..."
Abstract

Cited by 2 (0 self)
Randomized encodings of functions can be used to replace a “complex” function f(x) by a “simpler” randomized mapping f̂(x; r) whose output distribution on an input x encodes the value of f(x) and hides any other information. One desirable feature of randomized encodings is low online complexity. That is, the goal is to obtain a randomized encoding f̂ of f in which most of the output can be precomputed and published before seeing the input x. When the input x is available, it remains to publish only a short string x̂, where the online complexity of computing x̂ is independent of (and is typically much smaller than) the complexity of computing f. Yao’s garbled circuit construction gives rise to such randomized encodings in which the online part x̂ consists of n encryption keys of length κ each, where n = |x| and κ is a security parameter. Thus, the online rate |x̂|/|x| of this encoding is proportional to the security parameter κ. In this paper, we show that the online rate can be dramatically improved. Specifically, we show how to encode any polynomial-time computable function f: {0,1}^n → {0,1}^{m(n)} with online rate of 1 + o(1) and with nearly linear online computation. More concretely, the online part x̂ consists of an n-bit string and a single encryption key. These constructions can be based on
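The rate claim is simple arithmetic; the sketch below just compares the two online sizes from the abstract (n·κ for Yao's encoding, n + κ for the new encodings) at illustrative parameters of my own choosing.

```python
def yao_online_bits(n, kappa):
    """Yao's encoding: n wire keys of kappa bits each."""
    return n * kappa

def const_rate_online_bits(n, kappa):
    """The paper's encodings: an n-bit string plus a single key."""
    return n + kappa

n, kappa = 1_000_000, 128
yao_rate = yao_online_bits(n, kappa) / n        # = kappa: rate grows with security
new_rate = const_rate_online_bits(n, kappa) / n  # = 1 + kappa/n: rate -> 1
```

As n grows with κ fixed, the new rate tends to 1, matching the stated 1 + o(1).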
Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers
, 2010
"... As society rushes to digitize sensitive information and services, it is imperative to adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because ..."
Abstract

Cited by 1 (0 self)
As society rushes to digitize sensitive information and services, it is imperative to adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [8, 72, 104].
In this dissertation, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. At a high level, we support this premise by developing techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. Rather than entrusting her data to the mountain of buggy code likely running on her computer, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
Having established an environment for secure code execution on an individual computer, we then show how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to re-examine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network. Lastly, we extend the user’s trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.
Thus, extending a user’s trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.
Ciphers that Securely Encipher their own Keys
 ACM CCS (COMPUTER AND COMMUNICATIONS SECURITY)
, 2011
"... In response to needs of disk encryption standardization bodies, we provide the first tweakable ciphers that are proven to securely encipher their own keys. We provide both a narrowblock design StE and a wideblock design EtE. Our proofs assume only standard PRPCCA security of the underlying tweaka ..."
Abstract

Cited by 1 (0 self)
In response to needs of disk encryption standardization bodies, we provide the first tweakable ciphers that are proven to securely encipher their own keys. We provide both a narrow-block design, StE, and a wide-block design, EtE. Our proofs assume only standard PRP-CCA security of the underlying tweakable ciphers.
Instantiating Random Oracles via UCEs
, 2013
"... This paper provides a (standardmodel) notion of security for (keyed) hash functions, called UCE, that we show enables instantiation of random oracles (ROs) in a fairly broad and systematic way. Goals and schemes we consider include deterministic PKE; messagelocked encryption; hardcore functions; p ..."
Abstract
This paper provides a (standard-model) notion of security for (keyed) hash functions, called UCE, that we show enables instantiation of random oracles (ROs) in a fairly broad and systematic way. Goals and schemes we consider include deterministic PKE; message-locked encryption; hard-core functions; point-function obfuscation; OAEP; encryption secure for key-dependent messages; encryption secure under related-key attack; proofs of storage; and adaptively secure garbled circuits with short tokens. We can take existing, natural and efficient ROM schemes and show that the instantiated scheme resulting from replacing the RO with a UCE function is secure in the standard model. In several cases this results in the first standard-model schemes for these goals. The definition of UCE security itself is quite simple, asking that outputs of the function look random given some “leakage,” even if the adversary knows the key, as long as the leakage does not permit the adversary to compute the inputs.
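The shape of the definition, a two-stage game in which a source gets the (real or ideal) oracle but not the key, and a distinguisher gets the key plus leakage but no oracle, can be sketched as follows. HMAC-SHA-256 stands in for the keyed hash and all names are my own; the "leaky" source illustrates the abstract's final caveat: leakage that lets the distinguisher recompute an input trivially breaks the game.

```python
import hashlib
import hmac
import os
import random

def keyed_hash(key, x):
    return hmac.new(key, x, hashlib.sha256).digest()

def uce_game(source, distinguisher):
    """One run of the two-stage UCE game.  b = 1: real keyed hash;
    b = 0: ideal random function.  Returns True iff D guesses b."""
    b = random.randrange(2)
    key = os.urandom(16)
    rand_table = {}

    def oracle(x):
        if b == 1:
            return keyed_hash(key, x)
        return rand_table.setdefault(x, os.urandom(32))

    leakage = source(oracle)             # stage 1: oracle access, no key
    guess = distinguisher(key, leakage)  # stage 2: key and leakage, no oracle
    return guess == b

# A "bad" source whose leakage reveals a query point: with the key in
# hand, the distinguisher recomputes the hash and compares.
def leaky_source(oracle):
    x = b'query'
    return (x, oracle(x))

def recompute_distinguisher(key, leakage):
    x, y = leakage
    return 1 if keyed_hash(key, x) == y else 0
```

This pair wins the game essentially always, which is why UCE restricts attention to sources whose leakage does not permit computing the oracle's inputs.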