BibTeX
@MISC{Wang_,
author = {Boyang Wang and Baochun Li and Hui Li},
title = {},
year = {}
}
ABSTRACT: Cloud computing is an emerging technology that provides various services over the Internet. Users can remotely store their data in the cloud and enjoy on-demand, high-quality cloud applications without the burden of local storage and maintenance. However, users do not feel protected, because data stored in the cloud requires security and integrity guarantees. Data integrity verification is performed by a third-party auditor (TPA), who checks the integrity of the data periodically on behalf of the client. Many mechanisms allow the data owner as well as a public verifier to perform integrity checking without retrieving the entire data from the cloud; this is called public auditing. Having the TPA verify the integrity of shared data in several separate auditing tasks would be very inefficient, so a batch auditing mechanism is used. The mechanism also supports dynamic operations on data blocks, i.e., data update, delete, and append.

KEYWORDS: Cloud Computing, Privacy Preserving, Security, Integrity, Data Storage, TPA.

I. INTRODUCTION
With cloud computing, cloud service providers offer users access to shared resources at low cost. Cloud storage services let users share their data with other users in a group; data sharing is a standard feature in most cloud storage offerings such as Dropbox, iCloud and Google Drive. In cloud storage, the integrity of data is a serious concern, because data stored in the cloud can easily be lost or corrupted due to human errors and hardware/software failures. The traditional approach for checking data correctness is to retrieve the entire data from the cloud and then verify integrity by checking the correctness of signatures (e.g., RSA) or hash values (e.g., MD5) over the entire data. The main obstacle is the size of cloud data, which is large: downloading the entire cloud data just to verify whether it has been corrupted costs, or even wastes, large amounts of users' computation and communication resources.

There are two classes of basic schemes.

MAC-Based Solution: A MAC is used to authenticate the data. The user uploads the data blocks together with their MACs to the cloud server (CS) and provides the secret key SK to the TPA. The TPA randomly retrieves data blocks with their MACs and uses the secret key to check the correctness of the data stored on the cloud (a spot-checking sketch appears after this subsection). The following problems occur:
1. High computation and communication complexity.
2. For verification, the TPA requires knowledge of the data blocks themselves.
3. It places an additional online burden on users because of its limited usage and stateful verification.
4. The number of times a data file can be audited is limited because the secret keys are fixed.
5. It supports only static data, not dynamic data.
6. After all possible secret keys are exhausted, the user has to recompute the MACs, download all the data, and republish it on the CS.
7. The TPA needs to maintain and update state, which is very difficult.

HLA-Based Solution: It supports efficient public auditing without retrieving the data blocks. The authenticators can be aggregated and require only constant bandwidth. Public auditing allows a public verifier, as well as the data owner itself, to efficiently perform integrity checking without downloading the entire data from the cloud. In these mechanisms, data is divided into many small blocks, where the owner independently signs each block; during integrity checking, a random combination of the blocks, instead of the whole data, is retrieved (see the aggregation sketch below). A public verifier could be a data user who would like to utilize the owner's data through the cloud, or a third-party auditor (TPA) that provides expert integrity-checking services.
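The MAC-based spot checking described above can be summarized with a minimal sketch. The helper names, block size, and sample size below are illustrative assumptions rather than part of any cited scheme; the sketch only shows that the TPA needs the secret key and the raw blocks themselves in order to verify.

```python
# Minimal sketch of MAC-based spot checking (hypothetical helper names,
# not the exact protocol of any cited paper).
import hmac
import hashlib
import secrets

BLOCK_SIZE = 4096  # assumed block size for illustration

def split_blocks(data: bytes, size: int = BLOCK_SIZE):
    """Split the file into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def tag_blocks(blocks, sk: bytes):
    """Owner: compute one HMAC tag per block; blocks and tags go to the cloud server."""
    return [hmac.new(sk, b, hashlib.sha256).digest() for b in blocks]

def audit(cloud_blocks, cloud_tags, sk: bytes, sample_size: int = 10) -> bool:
    """TPA: retrieve a random sample of blocks with their tags and recheck the MACs.
    Note that the TPA needs the secret key and the block contents themselves."""
    n = len(cloud_blocks)
    for i in (secrets.randbelow(n) for _ in range(min(sample_size, n))):
        expected = hmac.new(sk, cloud_blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, cloud_tags[i]):
            return False  # corruption detected at block i
    return True

# Usage: the owner tags the file, the cloud stores blocks + tags, the TPA audits.
sk = secrets.token_bytes(32)
blocks = split_blocks(b"example file content" * 1000)
tags = tag_blocks(blocks, sk)
print(audit(blocks, tags, sk))  # True while the cloud is honest
```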
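The following sketch illustrates the aggregation idea behind homomorphic linear authenticators. It is deliberately simplified: real HLA constructions (e.g., BLS-based ones) rely on bilinear pairings and are publicly verifiable, whereas this toy variant is a secret-key linear authenticator over a prime field, so only a party holding alpha and key can verify. The point it demonstrates is that one aggregated pair (mu, sigma) authenticates a random linear combination of the challenged blocks without transmitting every block.

```python
# Simplified secret-key homomorphic linear authenticator (illustration only;
# publicly verifiable HLA schemes use bilinear pairings instead).
import hmac
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime modulus chosen just for this sketch

def prf(key: bytes, index: int) -> int:
    """Pseudorandom field element for a block index, derived with HMAC-SHA256."""
    d = hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % P

def tag_blocks(blocks, alpha: int, key: bytes):
    """Owner: sigma_i = alpha * m_i + PRF(key, i)  (mod P)."""
    return [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]

def prove(blocks, tags, challenge):
    """Server: aggregate the challenged blocks and tags into two field elements."""
    mu = sum(nu * blocks[i] for i, nu in challenge) % P
    sigma = sum(nu * tags[i] for i, nu in challenge) % P
    return mu, sigma

def verify(mu, sigma, challenge, alpha, key) -> bool:
    """Verifier: sigma should equal alpha * mu + sum(nu_i * PRF(key, i))."""
    expected = (alpha * mu + sum(nu * prf(key, i) for i, nu in challenge)) % P
    return sigma == expected

# Usage: blocks are field elements; the challenge samples a few indices
# with random coefficients, and the proof has constant size.
blocks = [secrets.randbelow(P) for _ in range(100)]
alpha, key = secrets.randbelow(P), secrets.token_bytes(32)
tags = tag_blocks(blocks, alpha, key)
challenge = [(i, secrets.randbelow(P)) for i in (3, 17, 42)]
print(verify(*prove(blocks, tags, challenge), challenge, alpha, key))  # True
```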
Existing public auditing mechanisms can verify the integrity of shared data, but they introduce a privacy issue: the leakage of identity privacy to public verifiers. It is difficult to preserve identity privacy from public verifiers during public auditing while still protecting confidential information. To solve this privacy issue on shared data, ORUTA is proposed. Oruta is a privacy-preserving public auditing mechanism. In Oruta, ring signatures are used to construct homomorphic authenticators, so a public verifier is able to verify the integrity of shared data without retrieving the entire data, while the identity of the signer on each block of shared data is kept private from the public verifier. Oruta also supports batch auditing: it performs multiple auditing tasks simultaneously and improves the efficiency of verification for multiple auditing tasks. Oruta stands for "One Ring to Rule Them All".

II. RELATED WORKS

Provable Data Possession at Untrusted Stores (2007): G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song introduce provable data possession (PDP), which allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. PDP generates probabilistic proofs of possession by sampling random sets of blocks from the server, which reduces I/O costs (the detection probability of this kind of sampling is illustrated in a sketch at the end of this subsection). The client keeps only a constant amount of metadata to verify the proof, and the challenge/response protocol minimizes network communication by transmitting a small, constant amount of data. PDP is well suited to public databases such as digital libraries, astronomy/medical/legal repositories, and archives. The drawback of PDP schemes is that they work only for static databases.

PORs: Proofs of Retrievability for Large Files (2007): A. Juels and B. S. Kaliski describe proofs of retrievability (PORs), which allow a server to convince a client that it can retrieve a file that was previously stored at the server. The POR scheme uses disguised blocks (called sentinels), hidden among the regular file blocks, in order to detect data modification by the server. The goal of a POR is to accomplish these checks without users having to download the files themselves, and it provides quality-of-service guarantees, meaning a file is retrievable within a certain time bound. The POR protocol encrypts F and randomly embeds a set of randomly valued check blocks called sentinels; the use of encryption renders the sentinels indistinguishable from the other file blocks. The verifier challenges the prover by specifying the positions of a collection of sentinels and asking the prover to return the associated sentinel values. If the prover has modified or deleted a substantial portion of F, then with high probability it will also have suppressed a number of sentinels (see the sentinel sketch below).

Compact Proofs of Retrievability (2008): Hovav Shacham and Brent Waters focus on a proof-of-retrievability system in which a data storage center convinces a verifier that it is actually storing all of a client's data. The central challenge is to build systems that are both efficient and provably secure, meaning it should be possible to extract the client's data from any prover that passes verification.
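The probabilistic guarantee of this kind of random spot checking can be made concrete. If t out of n blocks are corrupted and the verifier challenges c distinct blocks, corruption goes undetected only when every challenged block happens to be intact; the short computation below evaluates that probability (the concrete numbers are just an example).

```python
# Detection probability of random spot checking: if t of n blocks are corrupted
# and c distinct blocks are challenged, detection happens unless every challenged
# block falls among the n - t intact ones.
def detection_probability(n: int, t: int, c: int) -> float:
    p_miss = 1.0
    for i in range(c):
        p_miss *= (n - t - i) / (n - i)
    return 1.0 - p_miss

# With 10,000 blocks of which 1% are corrupted, challenging about 460 blocks
# already gives better than 99% detection probability.
print(round(detection_probability(10_000, 100, 460), 4))
```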
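The sentinel mechanism can be illustrated with a toy sketch. This is not the full Juels-Kaliski construction: the real scheme first encrypts and error-encodes the file so that sentinels are computationally indistinguishable from data blocks, while the sketch below only shows how embedded check blocks at remembered positions are later challenged. All names and sizes are illustrative assumptions.

```python
# Toy illustration of the sentinel idea behind PORs.
import secrets
import random

BLOCK = 32  # sentinel/block size in bytes, chosen for this sketch

def embed_sentinels(blocks, num_sentinels, seed):
    """Owner: append random sentinel blocks and permute; remember where they went."""
    sentinels = [secrets.token_bytes(BLOCK) for _ in range(num_sentinels)]
    combined = list(blocks) + sentinels
    rng = random.Random(seed)
    order = list(range(len(combined)))
    rng.shuffle(order)
    stored = [combined[i] for i in order]
    # map each sentinel's position in the stored file to its expected value
    positions = {order.index(len(blocks) + j): sentinels[j] for j in range(num_sentinels)}
    return stored, positions

def challenge(stored, positions, sample=5) -> bool:
    """Verifier: ask the server for a few sentinel positions and compare values."""
    for pos, value in random.sample(sorted(positions.items()), min(sample, len(positions))):
        if stored[pos] != value:  # server's response for that position
            return False
    return True

blocks = [secrets.token_bytes(BLOCK) for _ in range(100)]
stored, positions = embed_sentinels(blocks, num_sentinels=20, seed=7)
print(challenge(stored, positions))          # True: the server kept everything
stored[len(stored) // 2] = b"\x00" * BLOCK   # server tampers with one block
# A single tampered block may be missed, but deleting or modifying a substantial
# portion of the file suppresses many sentinels and is caught with high probability.
```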
Dynamic Provable Data Possession (2009): C. Erway, A. Kupcu, C. Papamanthou, and R. Tamassia developed dynamic provable data possession (DPDP), which extends the PDP model to support provable updates on stored data. Consider a file F consisting of n blocks; an update is defined as inserting a new block, modifying an existing block, or deleting any block. An update operation describes the most general form of modification a client may wish to perform on a file. The DPDP solution is based on a variant of authenticated dictionaries, where rank information is used to organize dictionary entries. It supports efficient authenticated operations on files at the block level, such as authenticated insert and delete. The resulting provable storage system enables efficient proofs over a whole file system, allowing verification by different users at the same time without having to download the whole data [6].

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing (2010): C. Wang, Q. Wang, K. Ren, and W. Lou describe a privacy-preserving public auditing system for data storage security in cloud computing, where the TPA can perform storage auditing without demanding a local copy of the data. A homomorphic authenticator and a random masking technique are used to guarantee that the TPA does not learn any knowledge about the data content stored on the cloud server during the auditing process. This not only eliminates the burden of auditing from the cloud user but also alleviates the user's fear of their outsourced data being leaked. Considering that the TPA may concurrently handle multiple audit sessions from different users for their outsourced data files, the privacy-preserving public auditing protocol can be extended to a multi-user setting, where the TPA performs multiple auditing tasks in a batch manner, i.e., simultaneously.

Aggregate and Verifiably Encrypted Signatures from Bilinear Maps (2003): D. Boneh, C. Gentry, B. Lynn, and H. Shacham introduce aggregate signatures, which are useful for reducing the size of certificate chains by aggregating all signatures in the chain and for reducing message size in secure routing protocols such as SBGP. The scheme also provides verifiably encrypted signatures, which enable a verifier to test that a given ciphertext C is the encryption of a signature on a given message; verifiably encrypted signatures are used in contract-signing protocols. It can also be used to extend the short signature scheme to give simple ring signatures.

Ensuring Data Storage Security in Cloud Computing (2009): To ensure the correctness of users' data in cloud data storage, an effective and flexible distributed scheme with explicit dynamic data support, including block update, delete, and append, is proposed by C. Wang, Q. Wang, K. Ren, and W. Lou. It relies on erasure-correcting codes in the file distribution preparation to provide redundant parity vectors and guarantee data dependability. By utilizing the homomorphic token with distributed verification of erasure-coded data, it achieves the integration of storage correctness insurance and data error localization during storage correctness verification across distributed servers, and it guarantees the simultaneous identification of misbehaving servers (a toy parity/localization sketch follows).
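As a rough illustration of the two ideas in that scheme, redundancy across servers and error localization, the sketch below replaces the Reed-Solomon erasure code with a single XOR parity share and the homomorphic tokens with per-server HMAC tokens. It is only a toy model under those simplifying assumptions, not the cited construction.

```python
# Toy sketch: distribute shares with one XOR parity share, keep a short token per
# server, and use the tokens to pinpoint which server returned corrupted data.
import hmac
import hashlib
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def distribute(shares):
    """Owner: store k data shares plus one XOR parity share across k+1 servers."""
    parity = shares[0]
    for s in shares[1:]:
        parity = xor_bytes(parity, s)
    return shares + [parity]

def precompute_tokens(server_shares, key: bytes):
    """Owner: keep one short verification token per server."""
    return [hmac.new(key, s, hashlib.sha256).digest() for s in server_shares]

def localize_errors(server_shares, tokens, key: bytes):
    """Verifier: recompute each server's token to identify misbehaving servers."""
    bad = []
    for idx, (share, tok) in enumerate(zip(server_shares, tokens)):
        if not hmac.compare_digest(hmac.new(key, share, hashlib.sha256).digest(), tok):
            bad.append(idx)
    return bad

key = secrets.token_bytes(32)
shares = [secrets.token_bytes(64) for _ in range(4)]   # 4 data shares
stored = distribute(shares)                            # + 1 parity share
tokens = precompute_tokens(stored, key)
stored[2] = secrets.token_bytes(64)                    # server 2 misbehaves
print(localize_errors(stored, tokens, key))            # [2]
# With the parity share intact, the corrupted share can then be rebuilt by
# XOR-ing all the remaining shares.
```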
LT Codes-Based Secure and Reliable Cloud Storage Service (2012): N. Cao, S. Yu, Z. Yang, W. Lou, and Y. T. Hou explore the problem of secure and reliable cloud storage with efficiency considerations for both data repair and data retrieval, and design an LT codes-based cloud storage service (LTCS). By utilizing the fast Belief Propagation decoding algorithm, LTCS provides efficient data retrieval for data users and releases the data owner from the burden of being online by enabling public data integrity checking and employing exact repair.

Proofs of Ownership in Remote Storage Systems (2011): S. Halevi, D. Harnik, B. Pinkas, and A. Shulman-Peleg identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrarily large files of other users based on a very small hash signature of those files. An attacker who knows the hash signature of a file can convince the storage service that it owns that file, so the server lets the attacker download the entire file. To overcome such attacks, proofs of ownership (PoWs) are introduced, in which a client efficiently proves to a server that it actually holds the file, rather than just some short information about it (see the challenge sketch at the end of this section).

Secure and Efficient Proof of Storage with Deduplication (2012): Q. Zheng and S. Xu introduce proof of storage with deduplication, or POSD, to fulfil data integrity and deduplication simultaneously. The POSD scheme is proven secure in the random oracle model based on the Computational Diffie-Hellman assumption.

Oblivious Outsourced Storage with Delegation (2011): M. Franz, P. Williams, B. Carbunar, S. Katzenbeisser, and R. Sion consider multiple clients who want to share data on a server while hiding all access patterns. Outsourcing private data to untrusted servers poses an important challenge, and Oblivious RAM (ORAM) techniques are the solution to this problem. Data owners can delegate rights to new external clients, enabling them to privately access portions of the outsourced data served by a curious server. ORAM allows for delegated read or write access while ensuring strong guarantees for the privacy of the outsourced data: the server does not learn anything about client access patterns, while clients do not learn anything more than their delegated rights permit.

Efficient and Private Access to Outsourced Data (2011): For data outsourcing, S. D. C. di Vimercati, S. Foresti, S. Paraboschi, G. Pelosi, and P. Samarati present an indexing technique, the shuffle index, that proves to be efficient while ensuring content, access, and pattern confidentiality. The shuffle index has two main advantages: first, the underlying structure is the B+ tree, which is used in relational DBMSs to support the efficient execution of queries; second, it allows multiple indexes, defined on distinct search keys, over the same collection of data.

Proofs of Retrievability via Hardness Amplification (2009): Yevgeniy Dodis, Salil Vadhan, and Daniel Wichs develop PORs as an important tool for semi-trusted online archives. In a POR, unlike a proof of knowledge (POK), neither the prover nor the verifier needs to have knowledge of F. Existing cryptographic techniques help users ensure the privacy and integrity of the files they retrieve. The goal of a POR is to accomplish these checks without users having to download the files themselves, and a POR can provide quality-of-service guarantees, i.e., it can show that a file is retrievable within a certain time bound.
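The proof-of-ownership idea can be sketched as a simple challenge over randomly chosen block positions bound to a fresh nonce. The actual constructions of Halevi et al. build Merkle trees over an encoded file; the version below is a simplified stand-in whose names and parameters are assumptions for illustration only.

```python
# Toy proof-of-ownership check: a client that only knows the file's hash cannot
# answer nonce-bound challenges over random block positions.
import hashlib
import secrets

BLOCK = 4096  # assumed block size for this sketch

def blocks_of(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def server_challenge(n_blocks: int, sample: int = 5):
    """Server: send a fresh nonce plus a few random block indices."""
    nonce = secrets.token_bytes(16)
    idx = sorted(secrets.randbelow(n_blocks) for _ in range(min(sample, n_blocks)))
    return nonce, idx

def client_response(data: bytes, nonce: bytes, idx):
    """Client: can answer only if it actually holds the file contents."""
    bl = blocks_of(data)
    return [hashlib.sha256(nonce + bl[i]).digest() for i in idx]

def server_verify(data: bytes, nonce: bytes, idx, response) -> bool:
    """Server (which already stores the file thanks to deduplication) rechecks."""
    bl = blocks_of(data)
    return all(hashlib.sha256(nonce + bl[i]).digest() == r for i, r in zip(idx, response))

file_data = secrets.token_bytes(10 * BLOCK)
nonce, idx = server_challenge(len(blocks_of(file_data)))
print(server_verify(file_data, nonce, idx, client_response(file_data, nonce, idx)))  # True
# An attacker who knows only hashlib.sha256(file_data) cannot produce these
# nonce-bound block hashes, so the hash-only deduplication attack no longer works.
```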