Results 1 - 10 of 64
DEPSKY: Dependable and Secure Storage in a Cloud-of-Clouds
2013
"... The increasing popularity of cloud storage services has lead companies that handle critical data to think about using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems and financial data are some examples of critic ..."
Cited by 85 (15 self)
Abstract: The increasing popularity of cloud storage services has led companies that handle critical data to consider using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems, and financial data are some examples of critical data that could be moved to the cloud. However, the reliability and security of data stored in the cloud remain major concerns. In this work we present DEPSKY, a system that improves the availability, integrity, and confidentiality of information stored in the cloud through the encryption, encoding, and replication of the data on diverse clouds that form a cloud-of-clouds. We deployed our system using four commercial clouds and used PlanetLab to run clients accessing the service from different countries. We observed that our protocols improved the perceived availability and, in most cases, the access latency when compared with individual cloud providers. Moreover, the monetary cost of using DEPSKY in this scenario is at most twice the cost of using a single cloud, which is optimal and seems reasonable given the benefits.
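To make the write pattern concrete, here is a minimal Python sketch of a quorum write across a cloud-of-clouds, assuming n = 4 providers of which at most f = 1 may be unavailable. The in-memory backends and the n - f quorum rule are illustrative stand-ins, not DEPSKY's actual protocol, which additionally encrypts, secret-shares, and erasure-codes the data.

    # Illustrative quorum write: succeed once n - f clouds acknowledge.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def quorum_write(clouds, key, shares, f=1):
        """Write one share per cloud; return True once n - f stores succeed."""
        n = len(clouds)
        acks = 0
        with ThreadPoolExecutor(max_workers=n) as pool:
            futures = [pool.submit(store, key, share)
                       for store, share in zip(clouds, shares)]
            for fut in as_completed(futures):
                if fut.exception() is None:
                    acks += 1
                    if acks >= n - f:        # enough diverse clouds hold the data
                        return True
        return False

    # Four toy in-memory "clouds" standing in for real provider SDKs.
    backends = [dict() for _ in range(4)]
    clouds = [lambda k, v, b=b: b.__setitem__(k, v) for b in backends]
    print(quorum_write(clouds, "blob-1", [b"s0", b"s1", b"s2", b"s3"]))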
Depot: Cloud storage with minimal trust
"... Abstract: We describe the design, implementation, and evaluation of Depot, a cloud storage system that minimizes trust assumptions. Depot assumes less than any prior system about the correct operation of participating hosts—Depot tolerates Byzantine failures, including malicious or buggy behavior, b ..."
Cited by 75 (8 self)
Abstract: We describe the design, implementation, and evaluation of Depot, a cloud storage system that minimizes trust assumptions. Depot assumes less than any prior system about the correct operation of participating hosts: it tolerates Byzantine failures, including malicious or buggy behavior, by any number of clients or servers, yet provides useful safety and availability guarantees (on consistency, staleness, durability, and recovery). The key to safeguarding safety without sacrificing availability (and vice versa) in this environment is to join forks: participants (clients and servers) that observe inconsistent behavior by other participants can join their forked views into a single view that is consistent with what each individually observed. Our experimental evaluation suggests that the costs of protecting the system are modest: Depot adds a few hundred bytes of metadata to each update and each stored object, and requires hashing and signing each update.
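The "join forks" idea can be sketched in a few lines of Python: when two distinct updates claim the same (node, sequence) slot, the observer keeps both branches by splitting the equivocating node into virtual nodes rather than rejecting one view. The dict-based history and the virtual-node naming below are hypothetical simplifications, not Depot's actual log or signature machinery.

    # Toy fork-join sketch: equivocation becomes a pair of virtual nodes.
    def record_update(history, node, seq, payload):
        """history maps (node, seq) -> payload; returns the node id actually used."""
        prior = history.get((node, seq))
        if prior is not None and prior != payload:
            # Fork detected: file the conflicting branch under a virtual node so
            # both observed histories merge into one consistent view.
            virtual = node + "'"
            history[(virtual, seq)] = payload
            return virtual
        history[(node, seq)] = payload
        return node

    h = {}
    record_update(h, "A", 1, b"x")
    print(record_update(h, "A", 1, b"y"))   # "A'" -- the forked branch survives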
NCCloud: Applying Network Coding for the Storage Repair in a Cloud-of-Clouds
"... To provide fault tolerance for cloud storage, recent studies propose to stripe data across multiple cloud vendors. However, if a cloud suffers from a permanent failure and loses all its data, then we need to repair the lost data from other surviving clouds to preserve data redundancy. We present a p ..."
Cited by 29 (6 self)
Abstract: To provide fault tolerance for cloud storage, recent studies propose to stripe data across multiple cloud vendors. However, if a cloud suffers a permanent failure and loses all its data, we need to repair the lost data from the other surviving clouds to preserve data redundancy. We present a proxy-based system for multiple-cloud storage called NCCloud, which aims to achieve cost-effective repair after a permanent single-cloud failure. NCCloud is built on top of network-coding-based storage schemes called regenerating codes. Specifically, we propose an implementable design for the functional minimum-storage regenerating code (F-MSR), which maintains the same data redundancy level and the same storage requirement as traditional erasure codes (e.g., RAID-6), but uses less repair traffic. We implement a proof-of-concept prototype of NCCloud and deploy it atop local and commercial clouds. We validate the cost effectiveness of F-MSR over RAID-6 in storage repair, and show that both schemes have comparable response-time performance in normal cloud storage operations.
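The claimed repair-traffic saving is easy to check with the n = 4, k = 2 setting the paper targets. The arithmetic below follows the standard regenerating-code analysis and is offered as a worked illustration, not NCCloud's exact accounting.

    # Repair traffic for one failed node, file size normalized to M = 1.
    M, n, k = 1.0, 4, 2

    # RAID-6: each node stores M/k = 0.5M; repair re-reads k chunks = 1.0M.
    raid6_repair = k * (M / k)

    # F-MSR: each node stores two chunks of size M/(2k) = 0.25M (same 0.5M per
    # node); repair fetches one chunk from each of the n - 1 surviving nodes.
    fmsr_repair = (n - 1) * (M / (2 * k))

    print(raid6_repair, fmsr_repair)   # 1.0 vs 0.75 -> 25% less repair traffic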
Secure Overlay Cloud Storage with Access Control and Assured Deletion
"... Abstract—We can now outsource data backups off-site to third-party cloud storage services so as to reduce data management costs. However, we must provide security guarantees for the outsourced data, which is now maintained by third parties. We design and implement FADE, a secure overlay cloud storag ..."
Cited by 17 (0 self)
Abstract: We can now outsource data backups off-site to third-party cloud storage services so as to reduce data management costs. However, we must provide security guarantees for the outsourced data, which is now maintained by third parties. We design and implement FADE, a secure overlay cloud storage system that achieves fine-grained, policy-based access control and file assured deletion. It associates outsourced files with file access policies, and assuredly deletes files to make them unrecoverable to anyone upon revocation of their file access policies. To achieve these security goals, FADE is built upon a set of cryptographic key operations that are self-maintained by a quorum of key managers independent of third-party clouds. In particular, FADE acts as an overlay system that works seamlessly atop today's cloud storage services. We implement a proof-of-concept prototype of FADE atop Amazon S3, one of today's cloud storage services. We conduct extensive empirical studies, and demonstrate that FADE provides security protection for outsourced data while introducing only minimal performance and monetary overhead. Our work provides insights into how to incorporate value-added security features into today's cloud storage services.
Keywords: access control, assured deletion, backup/recovery, cloud storage
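The assured-deletion idea can be illustrated with the common key-wrapping pattern: each file is encrypted under its own data key, the data key is wrapped under a policy-level key, and revoking the policy deletes that key, leaving every ciphertext under the policy unrecoverable. The sketch below uses Python's cryptography package as a stand-in; FADE's real construction uses blinded key operations with a quorum of key managers, which this does not reproduce.

    # Minimal key-wrapping sketch of policy-based assured deletion.
    from cryptography.fernet import Fernet

    policy_keys = {"policy-42": Fernet.generate_key()}   # held by key managers

    def upload(plaintext, policy):
        data_key = Fernet.generate_key()                 # fresh per-file key
        blob = Fernet(data_key).encrypt(plaintext)       # stored in the cloud
        wrapped = Fernet(policy_keys[policy]).encrypt(data_key)
        return blob, wrapped                             # both may live remotely

    def revoke(policy):
        del policy_keys[policy]   # all data keys under the policy become unrecoverable

    blob, wrapped = upload(b"patient record", "policy-42")
    revoke("policy-42")          # `wrapped`, and hence `blob`, can no longer be decrypted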
Cloud Computing security: From single to multi-clouds
In Proceedings of HICSS, 2011
"... Abstract General Terms Security ..."
(Show Context)
NCFS: On the Practicality and Extensibility of a Network-Coding-Based Distributed File System
In Proc. of NetCod, 2011
"... Abstract—An emerging application of network coding is to improve the robustness of distributed storage. Recent theoretical work has shown that a class of regenerating codes, which are based on the concept of network coding, can improve the data repair performance over traditional storage schemes suc ..."
Cited by 14 (4 self)
Abstract: An emerging application of network coding is to improve the robustness of distributed storage. Recent theoretical work has shown that a class of regenerating codes, which are based on the concept of network coding, can improve data repair performance over traditional storage schemes such as erasure coding. However, open issues remain regarding the feasibility of deploying regenerating codes in practical storage systems. We present NCFS, a distributed file system that realizes regenerating codes under real network settings. NCFS transparently stripes data across multiple storage nodes, without requiring the storage nodes to coordinate among themselves. It adopts a layered design that allows extensibility, such that different storage schemes can be readily included in NCFS. We deploy and evaluate our NCFS prototype in different real network settings. In particular, we use NCFS to conduct an empirical study of different storage schemes, including the traditional erasure codes RAID-5 and RAID-6, and a special family of regenerating codes based on E-MBR [16]. Our work provides a practical and extensible platform for realizing theories of regenerating codes in distributed file systems.
Keywords: network coding, distributed file system, implementation and experimentation
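The layered, pluggable design the abstract mentions might look roughly like the following Python sketch: the striping layer depends only on an abstract coding-scheme interface, so a new scheme (say, an E-MBR variant) can be added without touching the rest. The class and method names are hypothetical, not NCFS's actual code.

    # Hypothetical pluggable coding layer, with trivial replication as the
    # stand-in scheme; real schemes (RAID-5, RAID-6, E-MBR) would slot in here.
    from abc import ABC, abstractmethod

    class CodingScheme(ABC):
        @abstractmethod
        def encode(self, data, n): ...
        @abstractmethod
        def decode(self, chunks): ...

    class Replication(CodingScheme):
        def encode(self, data, n):
            return [data] * n                    # every node gets a full copy
        def decode(self, chunks):
            return next(c for c in chunks if c is not None)

    def stripe(scheme, data, nodes, key):
        for node, chunk in zip(nodes, scheme.encode(data, len(nodes))):
            node[key] = chunk                    # nodes never coordinate directly

    nodes = [dict() for _ in range(3)]
    stripe(Replication(), b"block-0", nodes, "f1")
    print(Replication().decode([n.get("f1") for n in nodes]))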
Robust Data Sharing with Key-Value Stores
"... A key-value store (KVS) offers functions for storing and retrieving values associated with unique keys. KVSs have become the most popular way to access Internet-scale “cloud” storage systems. We present an efficient wait-free algorithm that emulates multi-reader multi-writer storage from a set of po ..."
Cited by 10 (1 self)
Abstract: A key-value store (KVS) offers functions for storing and retrieving values associated with unique keys. KVSs have become the most popular way to access Internet-scale "cloud" storage systems. We present an efficient wait-free algorithm that emulates multi-reader multi-writer storage from a set of potentially faulty KVS replicas in an asynchronous environment. Our implementation serves an unbounded number of clients that use the storage concurrently. It tolerates crashes of a minority of the KVSs and crashes of any number of clients. Our algorithm minimizes the space overhead at the KVSs and comes in two variants providing regular and atomic semantics, respectively. Compared with prior solutions, it is inherently scalable and allows clients to write concurrently. Because of the limited interface of a KVS, textbook-style solutions for reliable storage either do not work or incur a prohibitively large storage overhead. Our algorithm maintains two copies of the stored value per KVS in the common case, and we show that this is indeed necessary. If there are concurrent write operations, the maximum space complexity of the algorithm grows in proportion to the point contention. A series of simulations explore the behavior of the algorithm, and benchmarks obtained with KVS cloud-storage providers demonstrate its practicality.
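The space bound can be pictured with a single-KVS toy version of a versioned-key scheme: the writer stores the value under a key encoding a fresh timestamp before garbage-collecting older versions (keeping two copies, matching the paper's common-case bound), and the reader simply fetches the newest key. The real algorithm runs this over a set of fault-prone KVS replicas with wait-free concurrency control, none of which is reproduced here.

    # Toy versioned-key store: write-new-then-prune keeps two copies around.
    kvs = {}                                  # stand-in for one cloud KVS bucket

    def write(ts, value, keep=2):
        kvs["v-%020d" % ts] = value           # new version lands first...
        for old in sorted(kvs)[:-keep]:       # ...then older versions are pruned
            del kvs[old]

    def read():
        return kvs[max(kvs)]                  # zero-padded keys sort by timestamp

    write(1, b"a"); write(2, b"b"); write(3, b"c")
    print(read(), sorted(kvs))                # b'c', and only two versions remain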
Scalable Byzantine Computation
"... After almost 30 years of research on Byzantine Agreement (BA), the problem continues to be relevant and to re-invent itself in new ways. This column discusses two new research directions that further push the scale of BA. It suggests new domains where BA can, and perhaps should, be deployed. First, ..."
Cited by 9 (1 self)
Abstract: After almost 30 years of research on Byzantine Agreement (BA), the problem continues to be relevant and to re-invent itself in new ways. This column discusses two new research directions that further push the scale of BA, and it suggests new domains where BA can, and perhaps should, be deployed. First, our main contribution, by Valerie King and Jared Saia, argues for running BA in settings with a large number of nodes (or processors). Valerie and Jared survey new BA protocols whose communication complexity scales with the number of participating processors; this, they argue, enables deployment in larger-scale domains for which BA was previously considered infeasible. The second contribution, by Marko Vukolić, considers another emerging domain for BA. It calls for wider-scale deployment of BA protocols, not among many processors, but rather over multiple cloud computing providers. The column ends with a short announcement about Morgan & Claypool's new monograph series on Distributed Computing Theory, edited by Nancy Lynch. Many thanks to Valerie, Jared, and Marko for sharing their insights! Call for contributions: I welcome suggestions for material to include in this column, including news, reviews, opinions, open problems, tutorials and surveys, either exposing the community to new and interesting topics, or providing new insight on well-studied topics by organizing them in new ways.
On Limitations of Using Cloud Storage for Data Replication
"... Abstract—Cloud storage services often provide a key-value store (KVS) functionality, an object-based interface for accessing a collection of unstructured data items or blobs. Every blob is associated with a key that serves as identifier to access the blob. In the simplest form, a key-value store pro ..."
Cited by 4 (3 self)
Abstract: Cloud storage services often provide a key-value store (KVS) functionality, an object-based interface for accessing a collection of unstructured data items or blobs. Every blob is associated with a key that serves as an identifier to access the blob. In its simplest form, a key-value store provides only methods for writing and reading an entire blob, for removing blobs, and for listing all defined keys. On the other hand, many existing schemes for replicating data with the goal of enhancing resilience (e.g., those based on quorum systems) associate logical timestamps with the stored values in order to distinguish multiple versions of the same data item. This paper uses the consensus number of a shared storage abstraction as a measure of its power to facilitate the implementation of data replication. It is demonstrated that a KVS is a very simple primitive, no different from read/write registers in this sense, and that a replica capable of the typical operations on timestamped data is fundamentally more powerful than a KVS. Hence, data replication schemes over storage providers with a KVS interface are inherently more difficult to realize than replication schemes over providers with richer interfaces.
Keywords: data storage; resilience; replication; quorum systems; wait-freedom; consensus number
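The distinction the paper draws can be seen by contrasting the two interfaces. A KVS offers only unconditional puts and gets, so, like a read/write register, it sits at the bottom of Herlihy's consensus hierarchy; a replica whose update is guarded by a timestamp comparison performs an atomic read-modify-write, the kind of step that places an object strictly above plain registers. The Python interfaces below are illustrative, not the paper's formal objects.

    # Register-like KVS: blind writes only.
    class KVS:
        def __init__(self): self.blobs = {}
        def put(self, key, blob): self.blobs[key] = blob      # unconditional
        def get(self, key): return self.blobs.get(key)
        def list(self): return list(self.blobs)
        def remove(self, key): self.blobs.pop(key, None)

    # Timestamped replica: the guarded update is an atomic compare-then-act.
    class TimestampedReplica:
        def __init__(self): self.ts, self.val = 0, None
        def store_if_newer(self, ts, val):
            if ts > self.ts:              # this conditional step is the extra power
                self.ts, self.val = ts, val
                return True
            return False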
CosTLO: Cost-Effective Redundancy for Lower Latency Variance on Cloud Storage Services
In NSDI, 2015
"... Abstract—We present CosTLO, a system that reduces the high latency variance associated with cloud storage services by augmenting GET/PUT requests issued by end-hosts with redundant requests, so that the earliest response can be con-sidered. To reduce the cost overhead imposed by redun-dancy, unlike ..."
Cited by 4 (0 self)
Abstract: We present CosTLO, a system that reduces the high latency variance associated with cloud storage services by augmenting GET/PUT requests issued by end-hosts with redundant requests, so that the earliest response can be considered. To reduce the cost overhead imposed by redundancy, unlike prior efforts that have used this approach, CosTLO combines multiple forms of redundancy. Since this results in a large number of configurations in which CosTLO can issue redundant requests, we conduct a comprehensive measurement study on S3 and Azure to identify the configurations that are viable in practice. Informed by this study, we design CosTLO to satisfy any application's goals for latency variance by 1) estimating the latency variance offered by any particular configuration, 2) efficiently searching the configuration space to select a cost-effective configuration among those that can offer the desired latency variance, and 3) preserving data consistency despite CosTLO's use of redundant requests. We show that, for the median PlanetLab node, CosTLO can halve the latency variance associated with fetching content from Amazon S3, with only a 25% increase in cost.
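The core trick, stripped of CosTLO's configuration machinery, fits in a few lines: issue several copies of a GET and return the earliest response. The three identical toy replicas below stand in for CosTLO's many redundancy options (duplicate requests to the same bucket, other regions, other providers).

    # Redundant GETs: first response wins, cutting tail latency.
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
    import random, time

    def redundant_get(fetch_fns, key):
        with ThreadPoolExecutor(max_workers=len(fetch_fns)) as pool:
            futures = [pool.submit(fn, key) for fn in fetch_fns]
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            return next(iter(done)).result()   # earliest response is used

    def slow_store(key):                       # toy replica with variable latency
        time.sleep(random.uniform(0.0, 0.2))
        return "value-of-" + key

    print(redundant_get([slow_store] * 3, "obj-7"))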