Results 1 - 10 of 15
Optimal verification of operations on dynamic sets
CRYPTO 2011, LNCS, 2011
"... We study the verification of set operations in the model of authenticated data structures, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source ..."
Abstract

Cited by 25 (13 self)
We study the verification of set operations in the model of authenticated data structures, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present a new authenticated data structure scheme that allows any entity to publicly verify the correctness of primitive set operations such as intersection, union, subset and set difference. Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called accumulation tree, our authenticated data structure is the first to achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer), as well as optimal update complexity (i.e., constant), and without bearing any extra asymptotic space overhead. Queries (i.e., constructing the proof) are also efficient, adding a logarithmic overhead to the complexity needed to compute the actual answer. In contrast, existing schemes entail high communication and verification costs or high storage costs, as they recompute the query over authentic data or precompute answers to all possible queries. Applications of interest include efficient verification of keyword search and database queries. We base the security of our constructions on the bilinear q-strong Diffie-Hellman assumption.
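For contrast with the scheme above, the naive baseline the abstract mentions (recomputing the query over authentic data) can be sketched as follows. This is an illustrative sketch, not the paper's construction; the function and parameter names are hypothetical:

```python
# Naive verification baseline: the verifier holds authentic copies of the
# sets and simply recomputes the operation, then compares with the server's
# claimed answer. The cost is proportional to the input set sizes, not to
# the answer size -- exactly the overhead the paper's scheme avoids.
def naive_verify(op, sets, claimed_answer):
    if op == "intersection":
        result = set.intersection(*sets)
    elif op == "union":
        result = set.union(*sets)
    elif op == "difference":
        result = sets[0].difference(*sets[1:])
    else:
        raise ValueError(f"unsupported operation: {op}")
    return result == claimed_answer

A = {1, 2, 3, 4}
B = {3, 4, 5}
print(naive_verify("intersection", [A, B], {3, 4}))        # True
print(naive_verify("union", [A, B], {1, 2, 3, 4, 5, 6}))   # False
```

The point of the accumulator-based scheme is to replace this linear-time recomputation with a proof whose size and verification cost depend only on the query and the answer.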
Authenticated Index Structures for Aggregation Queries in Outsourced Databases
, 2006
"... In an outsourced database system the data owner publishes information through a number of remote, untrusted servers with the goal of enabling clients to access and query the data more efficiently. As clients cannot trust servers, query authentication is an essential component in any outsourced datab ..."
Abstract

Cited by 14 (0 self)
In an outsourced database system the data owner publishes information through a number of remote, untrusted servers with the goal of enabling clients to access and query the data more efficiently. As clients cannot trust servers, query authentication is an essential component in any outsourced database system. Clients should be given the capability to verify that the answers provided by the servers are correct with respect to the actual data published by the owner. While existing work provides authentication techniques for selection and projection queries, there is a lack of techniques for authenticating aggregation queries. This article introduces the first known authenticated index structures for aggregation queries. First, we design an index that features good performance characteristics for static environments, where few or no updates occur to the data. Then, we extend these ideas and propose more involved structures for the dynamic case, where the database owner is allowed to update the data arbitrarily. Our structures feature excellent average case performance for authenticating queries with multiple aggregate attributes and multiple selection predicates. We also implement working prototypes of the proposed techniques and experimentally validate the correctness of our ideas.
Streaming Authenticated Data Structures
"... We consider the problem of streaming verifiable computation, where both a verifier and a prover observe a stream of n elements x1, x2,..., xn and the verifier can later delegate some computation over the stream to the prover. The prover must return the output of the computation, along with a crypt ..."
Abstract

Cited by 9 (5 self)
We consider the problem of streaming verifiable computation, where both a verifier and a prover observe a stream of n elements x1, x2,..., xn and the verifier can later delegate some computation over the stream to the prover. The prover must return the output of the computation, along with a cryptographic proof to be used for verifying the correctness of the output. Due to the nature of the streaming setting, the verifier can only keep small local state (e.g., logarithmic), which must be updatable in a streaming manner and with no interaction with the prover. Such constraints make the problem particularly challenging and rule out applying existing verifiable computation schemes. We propose streaming authenticated data structures, a model that enables efficient verification of data structure queries on a stream. Compared to previous work, we achieve an exponential improvement in the prover's running time: While previous solutions have linear prover complexity (in the size of the stream), even for queries executing in sublinear time (e.g., set membership), we propose a scheme with O(log M log n) prover complexity, where n is the size of the stream and M is the size of the universe of elements. Our schemes support a series of expressive queries, such as (non)membership, successor, range search and frequency queries, over an ordered universe and even in higher dimensions. The central idea of our construction is a new authentication tree, called generalized hash tree. We instantiate our generalized hash tree with a hash function based on lattice assumptions, showing that it enjoys suitable algebraic properties that traditional Merkle trees lack. We exploit such properties to achieve our results.
Verifiable set operations over outsourced databases
, 2013
"... We study the problem of verifiable delegation of computation over outsourced data, whereby a powerful worker maintains a large data structure for a weak client in a verifiable way. Compared to the wellstudied problem of verifiable computation, this setting imposes additional difficulties since the ..."
Abstract

Cited by 8 (5 self)
We study the problem of verifiable delegation of computation over outsourced data, whereby a powerful worker maintains a large data structure for a weak client in a verifiable way. Compared to the well-studied problem of verifiable computation, this setting imposes additional difficulties since the verifier needs to verify consistency of updates succinctly and without maintaining large state. In particular, existing general solutions are far from practical in this setting. We present a scheme for verifiable evaluation of hierarchical set operations (unions, intersections and set-differences) applied to a collection of dynamically changing sets of elements from a given domain. That is, we consider two types of queries issued by the client: updates (insertions and deletions) and data queries, which consist of "circuits" of unions, intersections, and set-differences on the current collection of sets. This type of query comes up in database queries, keyword search and numerous other applications, and indeed our scheme can be effectively used in such scenarios. The computational cost incurred is proportional only to the size of the final outcome set and to the size of the query, and is independent of the cardinalities of the involved sets. The cost of updates is optimal (O(1) modular operations per update). Our construction extends that of [Papamanthou et al., Crypto 2011] and relies on a modified version of the extractable collision-resistant hash function (ECRH) construction, introduced in [Bitansky et al., ITCS 2012], that can be used to succinctly hash univariate polynomials.
Authenticated Data Structures, Generically
"... An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each query result ..."
Abstract

Cited by 5 (3 self)
An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each query result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations. This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure constructed from standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual; it can then be compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented our approach as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
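The collision-resistant-hashing paradigm that λ• builds on can be illustrated with a plain Merkle tree, where the verifier holds only the root digest and checks each query result against O(log n) sibling hashes. This is a generic textbook construction offered as background, not the paper's compiler output:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root digest of a Merkle tree over the given leaf values."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling digests the prover sends along with leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])   # sibling of the current node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def merkle_verify(root, leaf, index, proof):
    """Verifier recomputes the root from the leaf and O(log n) siblings."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d", b"e"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
print(merkle_verify(root, b"c", 2, proof))   # True
```

Security rests on collision resistance of the hash: forging a proof for a wrong leaf would require a hash collision somewhere along the root path.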
ALITHEIA: Towards practical verifiable graph processing
In CCS, 2014
"... We consider a scenario in which a data owner outsources storage of a large graph to an untrusted server; the server performs computations on this graph in response to queries from a client (whether the data owner or others), and the goal is to ensure verifiability of the returned results. Existing ..."
Abstract

Cited by 4 (2 self)
We consider a scenario in which a data owner outsources storage of a large graph to an untrusted server; the server performs computations on this graph in response to queries from a client (whether the data owner or others), and the goal is to ensure verifiability of the returned results. Existing work on verifiable computation (VC) would compile each graph computation to a circuit or a RAM program and then use generic techniques to produce a cryptographic proof of correctness for the result. Such an approach will incur large overhead, especially in the proof-computation time. In this work we address the above by designing, building, and evaluating ALITHEIA, a nearly practical VC system tailored for graph queries such as computing shortest paths, longest paths, and maximum flow. The underlying principle of ALITHEIA is to minimize the use of generic VC systems by leveraging various algorithmic techniques specific for graphs. This leads to both theoretical and practical improvements. Asymptotically, it improves the complexity of proof computation by at least a logarithmic factor. On the practical side, we show that ALITHEIA achieves significant performance improvements over the current state-of-the-art (up to a 108× improvement in proof-computation time, and a 99.9% reduction in server storage), while scaling to 200,000-node graphs.
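One family of graph-specific algorithmic techniques of the kind ALITHEIA exploits is certificate checking: a claimed shortest-path answer can be validated in time linear in the graph from a distance labeling, without rerunning the search. The sketch below shows the standard Bellman-Ford optimality certificate as an illustration; it is not ALITHEIA's actual cryptographic protocol, and all names are hypothetical:

```python
def verify_shortest_path(graph, source, target, path, dist):
    """Check a claimed shortest path using a distance labeling `dist`.

    graph: {u: {v: weight}} with non-negative weights.
    If dist[source] == 0, no edge can improve the labeling, and every edge
    on the claimed path is tight, then the path is provably shortest.
    """
    if dist.get(source) != 0:
        return False
    # 1. No edge is "tense": dist is a lower bound on true distances.
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            if dist[u] + w < dist[v]:
                return False
    # 2. The claimed path really runs from source to target...
    if path[0] != source or path[-1] != target:
        return False
    # ...and every edge on it is tight, so its length equals dist[target].
    for u, v in zip(path, path[1:]):
        if v not in graph.get(u, {}) or dist[u] + graph[u][v] != dist[v]:
            return False
    return True

g = {"s": {"a": 1, "b": 4}, "a": {"b": 2, "t": 6}, "b": {"t": 1}, "t": {}}
d = {"s": 0, "a": 1, "b": 3, "t": 4}
print(verify_shortest_path(g, "s", "t", ["s", "a", "b", "t"], d))   # True
```

Checking such a certificate is much cheaper than proving a full Dijkstra execution inside a generic VC circuit, which is the kind of saving the abstract describes.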
Taking Authenticated Range Queries to Arbitrary Dimensions
"... We study the problem of authenticated multidimensional range queries over outsourced databases, where an owner outsources its database to an untrusted server, which maintains it and answers queries to clients. Previous schemes either scale exponentially in the number of query dimensions, or rely on ..."
Abstract

Cited by 3 (0 self)
We study the problem of authenticated multidimensional range queries over outsourced databases, where an owner outsources its database to an untrusted server, which maintains it and answers queries to clients. Previous schemes either scale exponentially in the number of query dimensions, or rely on heuristic data structures without provable bounds. Most importantly, existing work requires a number of structures exponential in the database attributes to support queries on every possible combination of dimensions in the database. In this paper, we propose the first schemes that (i) scale linearly with the number of dimensions, and (ii) support queries on any set of dimensions with setup cost and storage linear in the number of attributes. We achieve this through an elaborate fusion of novel and existing set-operation subprotocols. We prove the security of our solutions relying on the q-Strong Bilinear Diffie-Hellman assumption, and experimentally confirm their feasibility.
Authenticated Hash Tables Based on Cryptographic Accumulators
, 2015
"... Suppose a client stores n elements in a hash table that is outsourced to an untrusted server. We address the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring ..."
Abstract

Cited by 2 (2 self)
Suppose a client stores n elements in a hash table that is outsourced to an untrusted server. We address the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring the integrity of the remotely stored data across its entire update history. Solutions to this authentication problem allow the client to gain trust in the operations performed by a faulty or even malicious server that lies outside the administrative control of the client. We present two novel schemes that implement an authenticated hash table. An authenticated hash table exports the basic hash-table functionality for maintaining a dynamic set of elements,
CERIAS Tech Report 2010-07. Structural Signatures: How to Authenticate Graphs Without Leaking
"... ABSTRACT Secure data sharing in multiparty environments such as cloud computing requires that both authenticity and confidentiality of the data be assured. Digital signature schemes are commonly employed for authentication of data. However, no such technique exists for directed graphs, even though ..."
Abstract
Secure data sharing in multi-party environments such as cloud computing requires that both authenticity and confidentiality of the data be assured. Digital signature schemes are commonly employed for authentication of data. However, no such technique exists for directed graphs, even though such graphs are one of the most widely used data organization structures. Existing schemes for DAGs are authenticity-preserving but not confidentiality-preserving, and lead to leakage of sensitive information during authentication. In this paper, we propose two schemes for authenticating DAGs and directed cyclic graphs without leaking, which are the first such schemes in the literature. They are based on the structure of the graph as defined by depth-first graph traversals, together with aggregate signatures. Graphs are structurally different from trees in that they have four types of edges in a depth-first traversal: tree, forward, cross, and back-edges. The fact that an edge is a forward, cross, or back-edge conveys information that is sensitive in several contexts. Moreover, back-edges pose a more difficult problem than forward and cross-edges, primarily because back-edges add bidirectional properties to graphs. We prove that the proposed technique is both authenticity-preserving and non-leaking. While providing such strong security properties, our scheme is also efficient, as supported by the performance results.
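The four depth-first edge types the abstract refers to can be computed with standard discovery/finish timestamps. This is the textbook classification, shown as background for the abstract's terminology, not the paper's signing scheme:

```python
def classify_edges(graph):
    """Classify directed edges as tree/back/forward/cross via DFS timestamps."""
    disc, fin, kinds = {}, {}, {}
    time = 0

    def dfs(u):
        nonlocal time
        disc[u] = time; time += 1
        for v in graph.get(u, []):
            if v not in disc:
                kinds[(u, v)] = "tree"       # first discovery of v
                dfs(v)
            elif v not in fin:
                kinds[(u, v)] = "back"       # v is still on the DFS stack
            elif disc[u] < disc[v]:
                kinds[(u, v)] = "forward"    # v is a finished descendant of u
            else:
                kinds[(u, v)] = "cross"      # v finished in another subtree
        fin[u] = time; time += 1

    for u in graph:
        if u not in disc:
            dfs(u)
    return kinds

g = {"a": ["b", "c"], "b": ["c", "a"], "c": []}
print(classify_edges(g))
```

In this small example the edge (b, a) is a back-edge and (a, c) a forward-edge; it is exactly this distinction that the abstract identifies as sensitive information a leakage-free scheme must hide.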
vVote: a Verifiable Voting System Previously titled:
, 2014
"... ar X iv:su bm it/ ..."
(Show Context)