Results 1 - 10 of 41
I/O Deduplication: Utilizing Content Similarity to Improve I/O Performance
Cited by 48 (5 self)
Duplication of data in storage systems is becoming increasingly common. We introduce I/O Deduplication, a storage optimization that utilizes content similarity for improving I/O performance by eliminating I/O operations and reducing the mechanical delays during I/O operations. I/O Deduplication consists of three main techniques: content-based caching, dynamic replica retrieval, and selective duplication. Each of these techniques is motivated by our observations with I/O workload traces obtained from actively-used production storage systems, all of which revealed surprisingly high levels of content similarity for both stored and accessed data. Evaluation of a prototype implementation showed an overall improvement in disk I/O performance of 28 to 47% across these workloads. A further breakdown showed that each of the three techniques contributed significantly to the overall performance improvement.
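The content-based caching idea in this abstract can be sketched as a cache keyed by a hash of block contents rather than by block number, so logical blocks that hold identical bytes share one cached copy and duplicate reads avoid disk I/O. The class and method names below are hypothetical illustrations, not the paper's actual implementation:

```python
import hashlib

class ContentAddressedCache:
    """Minimal sketch of content-based caching: entries are keyed by a hash
    of block content, so logical blocks holding identical bytes share one
    cached copy and duplicate reads avoid disk I/O."""

    def __init__(self):
        self.block_to_digest = {}  # logical block number -> content digest
        self.digest_to_data = {}   # content digest -> cached block contents
        self.hits = 0
        self.misses = 0

    def write(self, block_no, data):
        # Writes reveal block contents, so index the block by content hash.
        digest = hashlib.sha256(data).hexdigest()
        self.block_to_digest[block_no] = digest
        self.digest_to_data[digest] = data

    def read(self, block_no, disk_read):
        digest = self.block_to_digest.get(block_no)
        if digest is not None and digest in self.digest_to_data:
            self.hits += 1            # served from the shared cached copy
            return self.digest_to_data[digest]
        # Content not known yet: pay the disk I/O, then index by content.
        data = disk_read(block_no)
        self.misses += 1
        digest = hashlib.sha256(data).hexdigest()
        self.block_to_digest[block_no] = digest
        self.digest_to_data.setdefault(digest, data)
        return data
```

With this structure, writing identical contents to two different blocks leaves one cached copy, and reads of either block are served without touching the disk.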
On content-centric router design and implications
2010
Cited by 39 (4 self)
In this paper, we investigate a sample line-speed content-centric router’s design, its resources, and its usage scenarios. We specifically take a closer look at one of the suggested functionalities for these routers, the content store. The design is targeted at pull-based environments, where content can be pulled from the network by any interested entity. We discuss the interaction between pull-based protocols and the content-centric router. We also provide some basic feasibility metrics and discuss some applicability aspects for such routers.
Breaking up is hard to do: Security and functionality in a commodity hypervisor
In Proc. ACM Symposium on Operating Systems Principles, 2011
Cited by 36 (0 self)
Cloud computing uses virtualization to lease small slices of large-scale datacenter facilities to individual paying customers. These multi-tenant environments, on which numerous large and popular web-based applications run today, are founded on the belief that the virtualization platform is sufficiently secure to prevent breaches of isolation between different users who are co-located on the same host. Hypervisors are believed to be trustworthy in this role because of their small size and narrow interfaces. We observe that despite the modest footprint of the hypervisor itself, these platforms have a large aggregate trusted computing base (TCB) that includes a monolithic control VM with numerous interfaces exposed to VMs. We present Xoar, a modified version of Xen that retrofits the modularity and isolation principles used in microkernels onto a mature virtualization platform. Xoar breaks the control VM into single-purpose components called service VMs. We show that this componentized abstraction brings a number of benefits: sharing of service components by guests is configurable and auditable, making exposure to risk explicit, and access to the hypervisor is restricted to the least privilege required for each component. Microrebooting components at configurable frequencies reduces the temporal attack surface of individual components. Our approach incurs little performance overhead, and does not require functionality to be sacrificed or components to be rewritten from scratch.
Live Gang Migration of Virtual Machines
Cited by 22 (1 self)
This paper addresses the problem of simultaneously migrating a group of co-located and live virtual machines (VMs), i.e., VMs executing on the same physical machine. We refer to such a mass simultaneous migration of active VMs as live gang migration. Cluster administrators may often need to perform live gang migration for load balancing, system maintenance, or power savings. Application performance requirements may dictate that the total migration time, network traffic overhead, and service downtime be kept minimal when migrating multiple VMs. State-of-the-art live migration techniques optimize the migration of a single VM. In this paper, we optimize the simultaneous live migration of multiple co-located VMs. We present the design, implementation, and evaluation of a de-duplication-based approach to perform concurrent live migration of co-located VMs. Our approach transmits memory content that is identical across VMs only once during migration to significantly reduce both the total migration time and network traffic. Using the QEMU/KVM platform, we detail a proof-of-concept prototype implementation of two types of de-duplication strategies (at page level and sub-page level) and a differential compression approach to exploit content similarity across VMs. Evaluations over Gigabit Ethernet with various types of VM workloads demonstrate that our prototype for live gang migration can achieve significant reductions in both network traffic and total migration time.
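The page-level de-duplication strategy described above can be sketched as follows: hash each memory page, send each unique page payload exactly once, and carry every occurrence as a small (vm, page number, digest) reference. The function below is a hypothetical illustration, not the authors' QEMU/KVM prototype:

```python
import hashlib

def deduplicate_pages(vms):
    """Sketch of page-level de-duplication for gang migration.
    `vms` maps a VM name to its list of memory pages (bytes).
    Returns the unique page payloads (each transferred once) and the
    per-page references used to reconstruct every VM at the target."""
    unique_pages = {}  # digest -> page contents, sent in full exactly once
    references = []    # (vm, page_no, digest) records, one per page
    for vm, pages in vms.items():
        for page_no, page in enumerate(pages):
            digest = hashlib.sha256(page).hexdigest()
            unique_pages.setdefault(digest, page)
            references.append((vm, page_no, digest))
    return unique_pages, references
```

Pages identical across co-located VMs (e.g. zero pages or shared kernel pages) then contribute their payload to the transfer only once, which is the source of the traffic reduction the abstract reports.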
Whispers in the Hyper-space: High-speed Covert Channel Attacks in the Cloud
Cited by 16 (1 self)
Information security and privacy in general are major concerns that impede enterprise adoption of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a “potential risk”. In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment and uncover their major insufficiencies and difficulties. We then overcome these obstacles by (1) redesigning a pure timing-based data transmission scheme, and (2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol, and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experiments show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
Opportunistic Flooding to Improve TCP Transmit Performance in Virtualized Clouds
2011
Cited by 15 (7 self)
Virtualization is a key technology that powers cloud computing platforms such as Amazon EC2. Virtual machine (VM) consolidation, where multiple VMs share a physical host, has seen rapid adoption in practice, with increasingly large numbers of VMs per machine and per CPU core. Our investigations, however, suggest that the increasing degree of VM consolidation has serious negative effects on the VMs’ TCP transport performance. As multiple VMs share a given CPU, the scheduling latencies, which can be on the order of tens of milliseconds, substantially increase the typically sub-millisecond round-trip times (RTTs) for TCP connections in a datacenter, causing significant degradation in throughput. In this paper, we propose a lightweight solution called vFlood that (a) allows a TCP sender VM to opportunistically flood the driver domain in the same host, and (b) offloads the VM’s TCP congestion control function to the driver domain in order to mask the effects of VM consolidation. Our evaluation of a vFlood prototype on Xen suggests that vFlood substantially improves TCP transmit throughput with minimal per-packet CPU overhead. Further, our application-level evaluation using Apache Olio, a web 2.0 cloud application, indicates a 33% improvement in the number of operations per second.
Shrinker: Improving live migration of virtual clusters over wans with distributed data deduplication and content-based addressing
In Proceedings of the 17th International Conference on Parallel Computing (Euro-Par), 2011
Cited by 13 (0 self)
Live virtual machine migration is a powerful feature of virtualization technologies. It enables efficient load balancing, reduces energy consumption through dynamic consolidation, and makes infrastructure maintenance transparent to users. While live migration is available across wide area networks with state-of-the-art systems, it remains expensive to use because of the large amounts of data to transfer, especially when migrating virtual clusters rather than single virtual machine instances. As evidenced by previous research, virtual machines running identical or similar operating systems have significant portions of their memory and storage containing identical data. We propose Shrinker, a live virtual machine migration system leveraging this common data to improve live virtual cluster migration between data centers interconnected by wide area networks. Shrinker detects memory pages and disk blocks duplicated in a virtual cluster to avoid sending the same content multiple times over WAN links. Virtual machine data is retrieved at the destination site with distributed content-based addressing. We implemented a prototype of Shrinker in the KVM hypervisor and present a performance evaluation in a distributed environment. Experiments show that it reduces both total data transferred and total migration time.
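The distributed content-based addressing described above can be sketched as a hash-keyed lookup that falls back to the WAN only when the content is not already present at the destination site. The names `local_store` and `wan_fetch` below are hypothetical stand-ins for the system's actual components, not Shrinker's real API:

```python
import hashlib

def fetch_page(digest, local_store, wan_fetch):
    """Sketch of content-based addressing at the migration destination.
    `local_store` is a dict mapping content digest -> bytes already present
    at the destination site; `wan_fetch` is a callable that retrieves the
    content over the expensive WAN link. Content found locally is never
    re-transferred."""
    data = local_store.get(digest)
    if data is None:
        data = wan_fetch(digest)       # expensive wide-area transfer
        local_store[digest] = data     # reusable by later VMs in the cluster
    # Content addressing makes the result verifiable against its own hash.
    assert hashlib.sha256(data).hexdigest() == digest, "content/hash mismatch"
    return data
```

Because pages and blocks duplicated across a virtual cluster map to the same digest, only the first VM to need a given content pays the WAN cost; subsequent lookups are served locally.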
Fast and space-efficient virtual machine checkpointing
In Proceedings of the 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE ’11, 2011
Cited by 12 (3 self)
Checkpointing, i.e., recording the volatile state of a virtual machine (VM) running as a guest in a virtual machine monitor (VMM) for later restoration, includes storing the memory available to the VM. Typically, a full image of the VM’s memory along with processor and device states are recorded. With guest memory sizes of up to several gigabytes, the size of the checkpoint images becomes more and more of a concern. In this work we present a technique for fast and space-efficient checkpointing of virtual machines. In contrast to existing methods, our technique eliminates redundant data and stores only a subset of the VM’s memory pages. Our technique transparently tracks I/O operations of the guest to external storage and maintains a list of memory pages whose contents are duplicated on non-volatile storage. At a checkpoint, these pages are excluded from the checkpoint.
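The checkpointing technique described above can be sketched as follows. The paper tracks guest I/O to learn which memory pages are duplicated on disk; the sketch below stands that tracking in with a precomputed set of content digests, so the names and structure are illustrative only:

```python
import hashlib

def build_checkpoint(memory_pages, on_disk_digests):
    """Sketch of space-efficient checkpointing: pages whose content is known
    to be duplicated on non-volatile storage (represented here by a set of
    content digests) are excluded from the checkpoint image and recorded
    only as references, to be re-read from disk on restore."""
    image = {}     # page_no -> page contents stored inside the checkpoint
    excluded = {}  # page_no -> digest of the duplicate on-disk copy
    for page_no, page in enumerate(memory_pages):
        digest = hashlib.sha256(page).hexdigest()
        if digest in on_disk_digests:
            excluded[page_no] = digest   # restorable from external storage
        else:
            image[page_no] = page        # must be saved in the image itself
    return image, excluded
```

The checkpoint size then shrinks by exactly the volume of guest memory that mirrors data already persisted on disk, which is the saving the abstract targets.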
Kaleidoscope: Cloud Micro-elasticity via VM State Coloring
In Proceedings of the Sixth Conference on Computer Systems, EuroSys ’11, 2011
Cited by 12 (2 self)
We introduce cloud micro-elasticity, a new model for cloud Virtual Machine (VM) allocation and management. Current cloud users over-provision long-lived VMs with large memory footprints to better absorb load spikes and to conserve performance-sensitive caches. Instead, we achieve elasticity by swiftly cloning VMs into many transient, short-lived, fractional workers to multiplex physical resources at a much finer granularity. The memory of a micro-elastic clone is a logical replica of the parent VM state, including caches, yet its footprint is proportional to the workload, and often a fraction of the nominal maximum. We enable micro-elasticity through a novel technique dubbed VM state coloring, which classifies VM memory into sets of semantically related regions and optimizes the propagation, allocation, and deduplication of these regions. Using coloring, we build Kaleidoscope and empirically demonstrate its ability to create micro-elastic cloned servers. We model the impact of micro-elasticity on a demand dataset from AT&T’s cloud, and show that fine-grained multiplexing yields infrastructure reductions of 30% relative to state-of-the-art techniques for managing elastic clouds.
CloudNet: A Platform for Optimized WAN Migration of Virtual Machines
Cited by 9 (0 self)
Cloud computing platforms are growing from clusters of machines within a data center to networks of data centers with resources spread across the globe. Virtual machine migration within the LAN has changed the scale of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration to likewise transform the scope of provisioning from a single data center to multiple data centers spread across the country or around the world. In this paper we propose a cloud computing platform linked with a VPN-based network infrastructure that provides seamless connectivity between enterprise and data center sites, as well as support for live WAN migration of virtual machines. We describe a set of optimizations that minimize the cost of transferring persistent storage and moving virtual machine memory during migrations over low-bandwidth, high-latency Internet links. Our evaluation on both a local testbed and across two real data centers demonstrates that these improvements can reduce total migration and pause time by over 30%. During simultaneous migrations of four VMs between Texas and Illinois, CloudNet’s optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 20GB, a 57% reduction.