Results 1 - 10 of 31
A Low-bandwidth Network File System, 2001
Abstract - Cited by 394 (3 self)
This paper presents LBFS, a network file system designed for low bandwidth networks. LBFS exploits similarities between files or versions of the same file to save bandwidth. It avoids sending data over the network when the same data can already be found in the server's file system or the client's cache. Using this technique, LBFS achieves up to two orders of magnitude reduction in bandwidth utilization on common workloads, compared to traditional network file systems.
The evolution of Coda, 2002
Abstract - Cited by 85 (20 self)
Failure-resilient, scalable, and secure read-write access to shared information by mobile and static users over wireless and wired networks is a fundamental computing challenge. In this article, we describe how the Coda file system has evolved to meet this challenge through the development of mechanisms for server replication, disconnected operation, adaptive use of weak connectivity, isolation-only transactions, translucent caching, and opportunistic exploitation of hardware surrogates. For each mechanism, the article explains how usage experience with it led to the insights for another mechanism. It also shows how Coda has been influenced by the work of other researchers and by industry. The article closes with a discussion of the technical and nontechnical lessons that can be learned from the evolution of the system.
Opportunistic Use of Content Addressable Storage for Distributed File Systems - In Proceedings of the 2003 USENIX Annual Technical Conference, 2003
Abstract - Cited by 68 (13 self)
Motivated by the prospect of readily available Content Addressable Storage (CAS), we introduce the concept of file recipes. A file's recipe is a first-class file system object listing content hashes that describe the data blocks composing the file. File recipes provide applications with instructions for reconstructing the original file from available CAS data blocks. We describe one such application of recipes, the CASPER distributed file system. A CASPER client opportunistically fetches blocks from nearby CAS providers to improve its performance when the connection to a file server traverses a low-bandwidth path. We use measurements of our prototype to evaluate its performance under varying network conditions. Our results demonstrate significant improvements in execution times of applications that use a network file system. We conclude by describing fuzzy block matching, a promising technique for using approximately matching blocks on CAS providers to reconstitute the exact desired contents of a file at a client.
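A file recipe, as described above, is just the ordered list of content hashes for the file's blocks; its value is that any untrusted nearby CAS provider can service block fetches, because each block is verified against the recipe before use. The sketch below, with fixed-size blocks and function names chosen for illustration (CASPER's actual block policy and interfaces may differ), shows the reconstruction path with server fallback.

```python
import hashlib

BLOCK = 4096  # fixed-size blocks purely for illustration

def make_recipe(data):
    """A file's recipe: the ordered list of its block hashes."""
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def reconstruct(recipe, cas, fetch_from_server):
    """Rebuild a file from its recipe.

    Blocks offered by a nearby CAS provider are untrusted, so each is
    verified against the recipe's hash; anything missing or corrupt
    falls back to the (authoritative) file server.
    """
    parts = []
    for want in recipe:
        block = cas.get(want)
        if block is None or hashlib.sha1(block).hexdigest() != want:
            block = fetch_from_server(want)
        parts.append(block)
    return b"".join(parts)
```

The verification step is what makes opportunism safe: a corrupt or malicious CAS block simply fails its hash check and is refetched from the server, so the client's result is bit-identical either way.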
A nine year study of file system and storage benchmarking - ACM Transactions on Storage, 2008
Abstract - Cited by 55 (8 self)
Benchmarking is critical when evaluating performance, but is especially difficult for file and storage systems. Complex interactions between I/O devices, caches, kernel daemons, and other OS components result in behavior that is rather difficult to analyze. Moreover, systems have different features and optimizations, so no single benchmark is always suitable. The large variety of workloads that these systems experience in the real world also adds to this difficulty. In this article we survey 415 file system and storage benchmarks from 106 recent papers. We found that most popular benchmarks are flawed and many research papers do not provide a clear indication of true performance. We provide guidelines that we hope will improve future performance evaluations. To show how some widely used benchmarks can conceal or overemphasize overheads, we conducted a set of experiments. As a specific example, slowing down read operations on ext2 by a factor of 32 resulted in only a 2–5% wall-clock slowdown in a popular compile benchmark. Finally, we discuss future work to improve file system and storage benchmarking.
Storage Tradeoffs in a Collaborative Backup Service for Mobile Devices, 2006
Abstract - Cited by 12 (3 self)
Mobile devices are increasingly relied on but are used in contexts that put them at risk of physical damage, loss or theft. We consider a fault-tolerance approach that exploits spontaneous interactions to implement a collaborative backup service. We define the constraints implied by the mobile environment, analyze how they translate into the storage layer of such a backup system and examine various design options. The paper concludes with a presentation of our prototype implementation of the storage layer, an evaluation of the impact of several compression methods, and directions for future work.
Collaboration and multimedia authoring on mobile devices - In Proc. of the First Intl. Conf. on MobiSys, 2003
Abstract - Cited by 12 (1 self)
Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein.
Caching trust rather than content - Operating Systems Review, 2000
Abstract - Cited by 11 (5 self)
Caching, one of the oldest ideas in computer science, often improves performance and sometimes improves availability [1, 3]. Previous uses of caching have focused on data content. It is the presence of a local copy of data that reduces access latency and masks server or network failures. This position paper puts forth the idea that it can sometimes be useful to merely cache knowledge sufficient to recognize valid data. In other words, we do not have a local copy of a data item, but possess a substitute that allows us to verify the content of that item if it is offered to us by an untrusted source. We refer to this concept as caching trust. Mobile computing is a champion application domain for this concept. Wearable and handheld computers are constantly under pressure to be smaller and lighter. However, the potential volume of data that is accessible to such devices over a wireless network keeps growing. Something has to give. In this case, it is the assumption that all data of potential interest can be hoarded on the mobile client [1, 2, 6]. In other words, such clients have to be prepared to cope with cache misses during normal use. If they are able to cache trust, then any untrusted site in the fixed infrastructure can be used to stage data for servicing cache misses: one does not have to go back to a distant server, nor does one have to compromise security. The following scenario explores this in more detail. 2. Example Scenario An engineer with a wearable computer has to visit a distant site for troubleshooting. Because of limited client cache
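The mechanism the position paper gestures at is small enough to sketch: the client caches only a digest per item of interest (a few dozen bytes instead of the content), and any copy later offered by an untrusted staging site is accepted only if it matches. A minimal sketch, with a hypothetical class name and SHA-256 standing in for whatever digest the authors had in mind:

```python
import hashlib
import hmac

class TrustCache:
    """Cache hashes ('trust') instead of content: a tiny footprint on a
    wearable client, yet any untrusted site in the fixed infrastructure
    can stage the actual bytes to service a cache miss."""

    def __init__(self):
        self._digests = {}            # item name -> expected SHA-256

    def remember(self, name, data):
        """Record an item's digest while connected to the trusted server."""
        self._digests[name] = hashlib.sha256(data).digest()

    def verify(self, name, data):
        """Check a copy offered by an untrusted source against the
        cached digest; constant-time compare avoids timing leaks."""
        expected = self._digests.get(name)
        return expected is not None and hmac.compare_digest(
            expected, hashlib.sha256(data).digest())
```

In the troubleshooting scenario above, the engineer's wearable would `remember` the digests of the relevant manuals before leaving, then `verify` copies fetched from any machine at the remote site, gaining the latency benefit of nearby data without trusting the machine that supplied it.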
Low-Bandwidth VM Migration via Opportunistic Replay
Abstract - Cited by 9 (1 self)
Virtual machine (VM) migration has been proposed as a building block for mobile computing. An important challenge for VM migration is to optimize the transfer of large amounts of disk and memory state. We propose a solution based on the opportunistic replay of user interactions with applications at the GUI level. Whereas this approach results in very small replay logs that economize network utilization, replay of user interactions on a VM at the migration target site can result in divergent VM state. Cryptographic hashing techniques are used to identify and transmit only the differences. We discuss the implementation challenges of this approach, and present encouraging results from an early prototype that show savings of up to 80.5% of bytes transferred.
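The hashing step described above can be sketched concretely: after replaying the GUI log at the target, both sides compute per-block hashes of their VM state, and only divergent blocks cross the network. This is an illustrative simplification (block size, hash choice, and the assumption of equal-size fixed images are mine, not the paper's):

```python
import hashlib

BLOCK = 4096  # comparison granularity; the real system's unit may differ

def block_hashes(state):
    """Per-block SHA-256 digests of a (fixed-size) state image."""
    return [hashlib.sha256(state[i:i + BLOCK]).digest()
            for i in range(0, len(state), BLOCK)]

def divergent_blocks(source_state, replayed_state):
    """Compare the true VM state with the state produced by replay at
    the target; return only the blocks where replay diverged."""
    src = block_hashes(source_state)
    tgt = block_hashes(replayed_state)
    return {i: source_state[i * BLOCK:(i + 1) * BLOCK]
            for i, h in enumerate(src) if tgt[i] != h}

def apply_patch(replayed_state, patch):
    """Overwrite the divergent blocks to converge on the source state."""
    buf = bytearray(replayed_state)
    for i, block in patch.items():
        buf[i * BLOCK:i * BLOCK + len(block)] = block
    return bytes(buf)
```

When replay reproduces most of the state correctly, the patch is a small fraction of the image, which is where the reported bandwidth savings come from; only the hashes and the divergent blocks need to be transferred.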
Middleware Support for Reconciling Client Updates and Data Transcoding - In International Conference on Mobile Systems, Applications, and Services (MobiSys), 2004
Abstract - Cited by 6 (0 self)
In mobile Internet applications, data can be transcoded, updated, and transferred across heterogeneous clients. The problem then arises where updates made in the context of an initial transcoding result in content too stringently transcoded for subsequent clients, thereby causing loss of semantic value. We solve this problem by suggesting that the updates themselves can be transformed so that they can be applied directly to the original data instead of to the transcoded data; this approach allows the data to preserve as much semantic value as possible across all heterogeneous clients without unnecessary transcoding artifacts. We define reconciliation rules that can govern the interaction between client updates and transcoding, demonstrate a complete middleware architecture that supports our methodology, and provide two case studies using content-transferring applications. We show that our resulting middleware system executes our reconciliation approach with acceptable latency (under 5 seconds for 200 kbytes of layered content), good scalability, and well-organised modularity.
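The "transform the update, not the data" idea admits a tiny illustration. Suppose a client edits a half-resolution rendition of an image by selecting a crop rectangle; a reconciliation rule maps that edit back into the original's coordinate space so the crop is applied to the full-fidelity data. This example (function name, update representation, and the single rule are all hypothetical; the paper's reconciliation rules cover many more update/transcoding pairs) shows the shape of such a rule:

```python
def rescale_update(update, scale):
    """Reconciliation rule for a geometric edit: map a (x, y, w, h)
    rectangle expressed in the coordinates of a rendition transcoded
    at `scale` back to the original image's coordinate space, so the
    edit applies to the full-fidelity data rather than to the
    degraded copy."""
    return tuple(round(v / scale) for v in update)

# A crop chosen on a half-resolution phone rendition...
crop_on_phone = (10, 20, 100, 50)
# ...becomes this crop on the untranscoded original:
crop_on_original = rescale_update(crop_on_phone, 0.5)
```

Applying the rescaled crop to the original and then re-transcoding for each subsequent client preserves full fidelity everywhere, whereas applying the crop to the half-resolution copy would lock every later client into that degraded version.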