PORs: proofs of retrievability for large files
In CCS '07: Proceedings of the 14th ACM Conference on Computer and Communications Security, 2007
Cited by 254 (8 self)

Abstract
In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or backup service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound. Key words: storage systems, storage security, proofs of retrievability, proofs of knowledge
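The spot-checking idea behind PORs can be illustrated with a toy protocol: the verifier precomputes MACs over a few randomly chosen file blocks, so its storage is independent of the file length, and later challenges the archive for exactly those blocks. This is a simplified sketch of the general approach, not the paper's sentinel-based construction; all names and parameters are illustrative.

```python
import hashlib
import hmac
import os
import random

BLOCK = 64  # bytes per file block (illustrative)

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

class Verifier:
    """Keeps only a key and a few per-block MACs: storage independent of |F|."""
    def __init__(self, data: bytes, samples: int = 8):
        self.key = os.urandom(16)
        bs = blocks(data)
        self.checks = {}  # block index -> MAC
        for i in random.sample(range(len(bs)), min(samples, len(bs))):
            self.checks[i] = hmac.new(self.key, bs[i], hashlib.sha256).digest()

    def challenge(self):
        return list(self.checks)

    def verify(self, responses: dict) -> bool:
        return all(
            hmac.compare_digest(
                hmac.new(self.key, responses[i], hashlib.sha256).digest(), mac)
            for i, mac in self.checks.items())

class Archive:
    """The untrusted prover: stores the file and answers challenges."""
    def __init__(self, data: bytes):
        self.blocks = blocks(data)

    def respond(self, indices):
        return {i: self.blocks[i] for i in indices}

f = os.urandom(4096)
v, a = Verifier(f), Archive(f)
assert v.verify(a.respond(v.challenge()))  # honest archive passes
```

An archive that has discarded or corrupted a challenged block cannot produce a block matching the stored MAC, so random sampling catches large-scale deletion with high probability.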
Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers
2009
Cited by 221 (13 self)

Abstract
Verifiable Computation enables a computationally weak client to "outsource" the computation of a function F on various inputs x1,...,xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The verification of the proof should require substantially less computational effort than computing F(xi) from scratch. We present a protocol that allows the worker to return a computationally sound, non-interactive proof that can be verified in O(m) time, where m is the bit-length of the output of F. The protocol requires a one-time preprocessing stage by the client which takes O(|C|) time, where C is the smallest Boolean circuit computing F. Our scheme also provides input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values.
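The core requirement (checking a result with substantially less effort than recomputing it) can be illustrated without any cryptography for a function whose output is easy to check, such as sorting: the worker spends O(n log n), while the client verifies in O(n). This toy example only conveys the asymmetry; it is not the paper's FHE-based protocol.

```python
from collections import Counter

def worker_sort(xs):
    # untrusted worker does the expensive O(n log n) work
    return sorted(xs)

def verify_sorted(xs, ys):
    # client check in O(n): adjacent order plus multiset equality,
    # cheaper than re-sorting from scratch
    return (all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
            and Counter(xs) == Counter(ys))

data = [5, 3, 8, 1, 3]
result = worker_sort(data)
assert verify_sorted(data, result)
assert not verify_sorted(data, [1, 3, 3, 5, 9])  # tampered output rejected
```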
BIND: A fine-grained attestation service for secure distributed systems
In Proceedings of the 2005 IEEE Symposium on Security and Privacy, 2005
Cited by 98 (3 self)

Abstract
In this paper, we propose BIND (Binding Instructions aNd Data), a fine-grained attestation service for securing distributed systems. Code attestation has recently received considerable attention in trusted computing. However, current code attestation technology is relatively immature. First, due to the great variability in software versions and configurations, verification of the hash is difficult. Second, the time-of-use and time-of-attestation discrepancy remains to be addressed, since the code may be correct at the time of the attestation, but it may be compromised by the time of use. The goal of BIND is to address these issues and make code attestation more usable in securing distributed systems. BIND offers the following properties: 1) BIND performs fine-grained attestation. Instead of attesting to the entire memory content, BIND attests only to the piece of code we are concerned about. This greatly simplifies verification. 2) BIND narrows the gap between time-of-attestation and time-of-use. BIND measures a piece of code immediately before it is executed and uses a sandboxing mechanism to protect the execution of the attested code. 3) BIND ties the code attestation with the data that the code produces, such that we can pinpoint what code has been run to generate that data. In addition, by incorporating the verification of input data integrity into the attestation, BIND offers transitive integrity verification, i.e., through one signature, we can vouch for the entire chain of processes that have performed transformations over a piece of data. BIND offers a general solution toward establishing a trusted environment for distributed system designers.
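The transitive-integrity idea (one attestation vouching for an entire chain of transformations) can be sketched as follows, with an HMAC standing in for the hardware-backed signature the paper relies on. This is a simplified illustration of chained attestation, not BIND's actual mechanism; the key and stage names are hypothetical.

```python
import hashlib
import hmac

# stands in for a platform-protected signing key (assumption for illustration)
KEY = b'tpm-sealed-key'

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def attest(code: bytes, input_data: bytes, prev_att: bytes, output: bytes) -> bytes:
    """One stage vouches for (its code, its input's provenance, its output)."""
    msg = h(code) + h(prev_att) + h(input_data) + h(output)
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def check(code, input_data, prev_att, output, att) -> bool:
    return hmac.compare_digest(attest(code, input_data, prev_att, output), att)

# stage 1 transforms raw data; stage 2 consumes stage 1's attested output
raw, out1 = b'sensor reading', b'normalized'
att1 = attest(b'stage1-code', raw, b'', out1)
out2 = b'aggregated'
att2 = attest(b'stage2-code', out1, att1, out2)

# checking the final attestation, then the one it embeds, vouches for the chain
assert check(b'stage2-code', out1, att1, out2, att2)
assert check(b'stage1-code', raw, b'', out1, att1)
```

Because each attestation binds the previous one, verifying the last link (and walking backwards) transitively covers every process that touched the data.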
OurGrid: An Approach to Easily Assemble Grids with Equitable Resource Sharing
2003
Cited by 88 (14 self)

Abstract
Available grid technologies like the Globus Toolkit make it possible to run a parallel application on resources distributed across several administrative domains. Most grid computing users, however, don't have access to more than a handful of resources on which they can use these technologies. This happens mainly because gaining access to resources still depends on personal negotiations between the user and each resource owner. To address this problem, we are developing the OurGrid resource-sharing system, a peer-to-peer network of sites that share resources equitably in order to form a grid to which they all have access. The resources are shared according to a network-of-favors model, in which each peer prioritizes those who have credit in their past history of bilateral interactions. The emergent behavior in the system is that peers that contribute more to the community are prioritized when they request resources. We expect, with OurGrid, to solve the access-gaining problem for users of bag-of-tasks applications (those parallel applications whose tasks are independent).
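The network-of-favors accounting can be sketched as follows: each peer locally tracks how much each partner has donated to it, and grants contended resources to the requester with the most accumulated credit. This is a minimal illustration of the prioritization rule, not OurGrid's implementation.

```python
class Peer:
    """Tracks the net balance of favors each partner holds with this peer."""
    def __init__(self, name: str):
        self.name = name
        self.credit = {}  # partner name -> favors owed to them

    def record_donation_from(self, partner: str, amount: int):
        # partner donated resources to us: they earn credit here
        self.credit[partner] = self.credit.get(partner, 0) + amount

    def record_donation_to(self, partner: str, amount: int):
        # we donated to partner: their credit is consumed, never below zero
        self.credit[partner] = max(0, self.credit.get(partner, 0) - amount)

    def choose_requester(self, requesters):
        # prioritize whoever has the most accumulated credit with us
        return max(requesters, key=lambda p: self.credit.get(p, 0))

a = Peer('A')
a.record_donation_from('B', 10)  # B helped A in the past
a.record_donation_from('C', 3)
assert a.choose_requester(['B', 'C']) == 'B'
a.record_donation_to('B', 8)     # A repays most of the favor
assert a.credit['B'] == 2
```

Clamping the balance at zero means a free-rider cannot run up debt and still be prioritized; peers that contribute nothing simply sit at the bottom of every queue.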
Pinocchio: Nearly practical verifiable computation
In Proceedings of the 34th IEEE Symposium on Security and Privacy, Oakland '13, 2013
Cited by 69 (6 self)

Abstract
To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10 ms: 5-7 orders of magnitude less than previous work; indeed, Pinocchio is the first general-purpose system to demonstrate verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60×. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
Uncheatable grid computing
In 24th IEEE International Conference on Distributed Computing Systems, 2004
Cited by 47 (1 self)

Abstract
Grid computing is a type of distributed computing that has shown promising applications in many fields. A great concern in grid computing is the following cheating problem: a participant is given D = {x1,...,xn}; it needs to compute f(x) for all x ∈ D and return the results of interest to the supervisor. How does the supervisor efficiently ensure that the participant has computed f(x) for all the inputs in D, rather than a subset of them? If participants get paid for conducting the task, there are incentives for cheating. In this paper, we propose a novel scheme to achieve uncheatable grid computing. Our scheme uses a sampling technique and a Merkle-tree-based commitment technique to achieve efficient and viable uncheatable grid computing.
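The sampling-plus-commitment scheme can be sketched as follows: the participant commits to all claimed results via a Merkle root, and the supervisor recomputes f on a few sampled inputs and checks their authentication paths against that root. This is a simplified illustration of the general technique, not the paper's exact protocol; f(x) = x² is a placeholder workload.

```python
import hashlib
import random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels by duplicating the last node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[idx ^ 1], idx % 2))  # (sibling hash, our side: 1 = right)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify_path(leaf, path, root):
    node = h(leaf)
    for sib, side in path:
        node = h(sib + node) if side else h(node + sib)
    return node == root

# participant computes all f(x) and commits to the results
inputs = list(range(8))
results = [str(x * x).encode() for x in inputs]  # f(x) = x^2 here
root = merkle_root(results)                      # only the root is sent up front

# supervisor samples a few indices, recomputes f, and checks the paths
for i in random.sample(inputs, 3):
    assert verify_path(str(i * i).encode(), merkle_path(results, i), root)
```

Because the root binds the participant to one fixed set of results before sampling, skipping even a fraction of the inputs is caught with probability that grows quickly in the number of samples.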
Verifiable delegation of computation over large datasets
In Proceedings of the 31st Annual Conference on Advances in Cryptology, CRYPTO '11, 2011
Cited by 46 (4 self)

Abstract
We study the problem of computing on large datasets that are stored on an untrusted server. We follow the approach of amortized verifiable computation introduced by Gennaro, Gentry, and Parno in CRYPTO 2010. We present the first practical verifiable computation scheme for high-degree polynomial functions. Such functions can be used, for example, to make predictions based on polynomials fitted to a large number of sample points in an experiment. In addition to the many non-cryptographic applications of delegating high-degree polynomials, we use our verifiable computation scheme to obtain new solutions for verifiable keyword search and proofs of retrievability. Our constructions are based on the DDH assumption and its variants, and achieve adaptive security, which was left as an open problem by Gennaro et al. (albeit for general functionalities). Our second result is a primitive which we call a verifiable database (VDB). Here, a weak client outsources a large table to an untrusted server, and makes retrieval and update queries. For each query, the server provides a response and a proof that the response was computed correctly. The goal is to minimize the resources required by the client. This is made particularly challenging if the number of update queries is unbounded. We present a VDB scheme based on the hardness of the subgroup membership problem in composite order bilinear groups.
Making argument systems for outsourced computation practical (sometimes)
In NDSS, 2012
Cited by 35 (6 self)

Abstract
This paper describes the design, implementation, and evaluation of a system for performing verifiable outsourced computation. It has long been known that (1) this problem can be solved in theory using probabilistically checkable proofs (PCPs) coupled with modern cryptographic tools, and (2) these solutions have wholly impractical performance, according to the conventional (and well-founded) wisdom. Our goal is to challenge (2), with a built system that implements an argument system based on PCPs. We describe a general-purpose system that builds on work of Ishai et al. (CCC '07) and incorporates new theoretical work to improve performance by 20 orders of magnitude. The system is (arguably) practical in some cases, suggesting that, as a tool for building secure systems, PCPs are not a lost cause.
Adaptive Reputation-Based Scheduling on Unreliable Distributed Infrastructures
2007
Cited by 22 (2 self)

Abstract
This paper addresses the inherent unreliability and instability of worker nodes in large-scale donation-based distributed infrastructures such as P2P and Grid systems. We present adaptive scheduling techniques that can mitigate this uncertainty and significantly outperform current approaches. In this work, we consider nodes that execute tasks via donated computational resources and may behave erratically or maliciously. We present a model in which reliability is not a binary property but a statistical one based on a node's prior performance and behavior. We use this model to construct several reputation-based scheduling algorithms that employ estimated reliability ratings of worker nodes for efficient task allocation. Our scheduling algorithms are designed to adapt to changing system conditions as well as non-stationary node reliability. Through simulation we demonstrate that our algorithms can significantly improve throughput, while maintaining a very high success rate of task completion. Our results suggest that reputation-based scheduling can handle a wide variety of worker populations, including non-stationary behavior, with overhead that scales well with system size. We also show that our adaptation mechanism allows the application designer fine-grained control over desired performance metrics.
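The statistical notion of reliability can be sketched with a smoothed success-rate estimate per worker, with the scheduler assigning the next task to the highest-rated node. This is a hypothetical illustration of the rating idea, not the paper's algorithms; real schemes would also decay old observations to track non-stationary behavior.

```python
class WorkerStats:
    """Reliability as a statistical rating, not a binary label."""
    def __init__(self):
        self.successes = 0
        self.failures = 0

    def record(self, ok: bool):
        if ok:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def reliability(self) -> float:
        # Laplace-smoothed success rate; unknown workers start at 0.5
        return (self.successes + 1) / (self.successes + self.failures + 2)

def pick_worker(stats: dict) -> str:
    # assign the next task to the worker with the highest estimated reliability
    return max(stats, key=lambda w: stats[w].reliability)

stats = {'w1': WorkerStats(), 'w2': WorkerStats()}
for ok in [True, True, True, False]:
    stats['w1'].record(ok)
for ok in [True, False, False]:
    stats['w2'].record(ok)
assert pick_worker(stats) == 'w1'
```

The smoothing term keeps a single failure from permanently blacklisting a node and gives newcomers a neutral prior, which is one simple way to balance throughput against exploration.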
Verifying computations with state
Cited by 20 (2 self)

Abstract
When outsourcing computations to the cloud or other third parties, a key issue for clients is the ability to verify the results. Recent work in proof-based verifiable computation, building on deep results in complexity theory and cryptography, has made significant progress on this problem. However, all existing systems require computational models that do not incorporate state. This limits these systems to simplistic programming idioms and rules out computations where the client cannot materialize all of the input (e.g., very large MapReduce instances or database queries). This paper describes Pantry, the first built system that incorporates state. Pantry composes the machinery of proof-based verifiable computation with ideas from untrusted storage: the client expresses its computation in terms of digests that attest to state, and verifiably outsources that computation. Besides the boon to expressiveness, the client can gain from outsourcing even when the computation is sublinear in the input size. We describe a verifiable MapReduce application and a queryable database, among other simple applications. Although these applications incur server overhead that is higher than we would like, Pantry is the first system to provide verifiability for realistic applications in a realistic programming model.
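The digest idea (the client holds only a short digest of state it never materializes, and checks every retrieved block against it) can be sketched with a content-addressed store. This is a minimal illustration of digest-attested storage, not Pantry's proof machinery; the class and function names are hypothetical.

```python
import hashlib

def digest(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

class UntrustedStore:
    """Server-side map from digest to block; the server may misbehave."""
    def __init__(self):
        self.blocks = {}

    def put(self, block: bytes) -> str:
        d = digest(block)
        self.blocks[d] = block
        return d

    def get(self, d: str) -> bytes:
        return self.blocks[d]

def verified_get(store: UntrustedStore, d: str) -> bytes:
    block = store.get(d)
    if digest(block) != d:  # a wrong block cannot match its own digest
        raise ValueError('storage proof failed')
    return block

store = UntrustedStore()
d = store.put(b'large input the client never materializes')
# the client keeps only `d`; any honest retrieval verifies against it
assert verified_get(store, d) == b'large input the client never materializes'
```

Because the reference to state is its hash, a computation expressed over digests can be outsourced and spot-checked even when it touches only a sublinear portion of the input.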