Results 1 - 9 of 9
Live Gang Migration of Virtual Machines
"... This paper addresses the problem of simultaneously migrating a group of co-located and live virtual machines (VMs), i.e, VMs executing on the same physical machine. We refer to such a mass simultaneous migration of active VMs as live gang migration. Cluster administrators may often need to perform l ..."
Cited by 22 (1 self)
This paper addresses the problem of simultaneously migrating a group of co-located and live virtual machines (VMs), i.e., VMs executing on the same physical machine. We refer to such a mass simultaneous migration of active VMs as live gang migration. Cluster administrators may often need to perform live gang migration for load balancing, system maintenance, or power savings. Application performance requirements may dictate that the total migration time, network traffic overhead, and service downtime be kept minimal when migrating multiple VMs. State-of-the-art live migration techniques optimize the migration of a single VM. In this paper, we optimize the simultaneous live migration of multiple co-located VMs. We present the design, implementation, and evaluation of a de-duplication based approach to perform concurrent live migration of co-located VMs. Our approach transmits memory content that is identical across VMs only once during migration to significantly reduce both the total migration time and network traffic. Using the QEMU/KVM platform, we detail a proof-of-concept prototype implementation of two types of de-duplication strategies (at page level and sub-page level) and a differential compression approach to exploit content similarity across VMs. Evaluations over Gigabit Ethernet with various types of VM workloads demonstrate that our prototype for live gang migration can achieve significant reductions in both network traffic and total migration time.
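As a rough illustration of the page-level de-duplication idea this abstract describes (memory content identical across co-located VMs is transmitted only once), the following sketch fingerprints guest pages and collects each distinct page a single time. The function name, data layout, and SHA-1 fingerprinting are illustrative assumptions, not the paper's implementation.

```python
import hashlib

def migrate_gang(vm_memories):
    """Illustrative page-level de-duplication for gang migration.

    vm_memories: dict mapping VM id -> list of page-sized byte strings.
    Returns (unique_pages, page_maps): the content actually transmitted
    and, per VM, the fingerprint each page slot refers to.
    """
    unique_pages = {}   # fingerprint -> page content (sent over the wire once)
    page_maps = {}      # VM id -> list of fingerprints (small metadata per VM)

    for vm, pages in vm_memories.items():
        fingerprints = []
        for page in pages:
            fp = hashlib.sha1(page).hexdigest()   # content fingerprint
            # Transmit the page body only the first time this content is seen,
            # whether the duplicate is within one VM or across co-located VMs.
            if fp not in unique_pages:
                unique_pages[fp] = page
            fingerprints.append(fp)
        page_maps[vm] = fingerprints

    return unique_pages, page_maps

if __name__ == "__main__":
    # Two VMs sharing a zero page and a common library page.
    zero, lib, a, b = b"\x00" * 4096, b"libc" * 1024, b"A" * 4096, b"B" * 4096
    vms = {"vm1": [zero, lib, a], "vm2": [zero, lib, b]}
    unique, _ = migrate_gang(vms)
    print(f"pages held by VMs: 6, pages transmitted: {len(unique)}")  # -> 4
```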
Reducing Electricity Cost Through Virtual Machine Placement in High Performance Computing Clouds
"... Cloud service providers operate multiple geographically distributed data centers. These data centers consume huge amounts of energy, which translate into high operating costs. Interestingly, the geographical distribution of the data centers provides many opportunities for cost savings. For example, ..."
Cited by 19 (1 self)
Cloud service providers operate multiple geographically distributed data centers. These data centers consume huge amounts of energy, which translate into high operating costs. Interestingly, the geographical distribution of the data centers provides many opportunities for cost savings. For example, the electricity prices and outside temperatures may differ widely across the data centers. This diversity suggests that intelligently placing load may lead to large cost savings. However, aggressively directing load to the cheapest data center may render its cooling infrastructure unable to adjust in time to prevent server overheating. In this paper, we study the impact of load placement policies on cooling and maximum data center temperatures. Based on this study, we propose dynamic load distribution policies that consider all electricity-related costs as well as transient cooling effects. Our evaluation studies the ability of different cooling strategies to handle load spikes, compares the behaviors of our dynamic cost-aware policies to cost-unaware and static policies, and explores the effects of many parameter settings. Among other interesting results, we demonstrate that (1) our policies can provide large cost savings, (2) load migration enables savings in many scenarios, and (3) all electricity-related costs must be considered at the same time for higher and consistent cost savings.
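A toy sketch of the kind of cost-aware, cooling-aware placement heuristic this abstract argues for: load goes to the cheapest data center that still has cooling headroom, rather than to the cheapest one unconditionally. The field names, the simple headroom model, and the greedy policy are assumptions for illustration only, not the paper's actual policies.

```python
def place_load(requests_kwh, centers):
    """Toy cost-aware load placement in the spirit of the abstract above.

    centers: list of dicts with assumed fields:
      'price'    - electricity price ($/kWh)
      'headroom' - load (kWh) the cooling system can absorb before the
                   center risks exceeding its temperature limit
    Greedily sends each load to the cheapest center that still has cooling
    headroom, instead of dumping everything on the cheapest one.
    """
    placement = []
    for load in requests_kwh:
        candidates = [c for c in centers if c["headroom"] >= load]
        if not candidates:
            raise RuntimeError("no data center can cool this load right now")
        best = min(candidates, key=lambda c: c["price"])
        best["headroom"] -= load          # transient cooling effect, crudely modeled
        placement.append((load, best["name"]))
    return placement

if __name__ == "__main__":
    dcs = [
        {"name": "dc-east", "price": 0.06, "headroom": 50.0},
        {"name": "dc-west", "price": 0.11, "headroom": 200.0},
    ]
    print(place_load([30.0, 30.0, 30.0], dcs))
    # The first request goes to cheap dc-east; later requests spill to dc-west
    # once dc-east's cooling headroom is exhausted.
```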
Information-Acquisition-as-a-Service for Cyber-Physical Cloud Computing
"... Data center cloud computing distinguishes computational services such as database transactions and data storage from computational resources such as server farms and disk arrays. Cloud computing enables a software-as-a-service business model where clients may only pay for the service they really nee ..."
Cited by 11 (5 self)
Data center cloud computing distinguishes computational services such as database transactions and data storage from computational resources such as server farms and disk arrays. Cloud computing enables a software-as-a-service business model where clients may only pay for the service they really need and providers may fully utilize the resources they actually have. The key enabling technology for cloud computing is virtualization. Recent developments, including our own work on virtualization technology for embedded systems, show that service-oriented computing through virtualization may also have tremendous potential on mobile sensor networks, where the emphasis is on information acquisition rather than computation and storage. We propose to study the notion of information-acquisition-as-a-service of mobile sensor networks, instead of server farms, for cyber-physical cloud computing. In particular, we discuss the potential capabilities and design challenges of software abstractions and systems infrastructure for performing information acquisition missions using virtualized versions of aerial vehicles deployed on a fleet of high-performance model helicopters.
The Design and Evolution of Live Storage Migration in VMware ESX
"... Live migration enables a running virtual machine to move between two physical hosts with no perceptible interruption in service. This allows customers to avoid costly downtimes associated with hardware maintenance and upgrades, and facilitates automated load-balancing. Consequently, it has become a ..."
Cited by 10 (1 self)
Live migration enables a running virtual machine to move between two physical hosts with no perceptible interruption in service. This allows customers to avoid costly downtimes associated with hardware maintenance and upgrades, and facilitates automated load-balancing. Consequently, it has become a critical feature of enterprise class virtual infrastructure. In the past, live migration only moved the memory and device state of a VM, limiting migration to hosts with identical shared storage. Live storage migration overcomes this limitation by enabling the movement of virtual disks across storage elements, thus enabling greater VM mobility, zero downtime maintenance and upgrades of storage elements, and automatic storage load-balancing. We describe the evolution of live storage migration in VMware ESX through three separate architectures, and explore the performance, complexity and functionality trade-offs of each.
Elastic application container: A lightweight approach for cloud resource provisioning
, 2012
"... Abstract—Virtual machine (VM) based virtual infrastructure has been adopted widely in cloud computing environment for elastic resource provisioning. Performing resource management using VMs, however, is a heavyweight task. In practice, we have identified two scenarios where VM based resource managem ..."
Cited by 2 (0 self)
Virtual machine (VM) based virtual infrastructure has been widely adopted in cloud computing environments for elastic resource provisioning. Performing resource management using VMs, however, is a heavyweight task. In practice, we have identified two scenarios where VM-based resource management is less feasible and less resource-efficient. In this paper, we propose a lightweight resource management model called the Elastic Application Container (EAC). An EAC is a virtual resource unit for delivering better resource efficiency and more scalable cloud applications. We describe the EAC system architecture and components, and also present an algorithm for EAC resource provisioning. We also describe an implementation of the EAC-oriented platform to support multi-tenant cloud use. To evaluate our approach and implementation, we conducted experiments and collected performance data comparing VM-based and EAC-based resource management with regard to their feasibility and resource efficiency. The experiment results show that our proposed EAC-based resource management approach outperforms the VM-based approach in terms of feasibility and resource efficiency.
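The abstract does not spell out the provisioning algorithm, so the sketch below is only a hypothetical illustration of the container-versus-VM trade-off it describes: container-sized requests are packed onto existing VMs, and the heavyweight step of booting a new VM happens only when no VM can absorb the demand. All names and the capacity model are assumptions, not the paper's algorithm.

```python
class Host:
    """A VM-sized resource pool that hosts many lightweight containers."""
    def __init__(self, name, cpu):
        self.name, self.free_cpu = name, cpu

def provision_container(hosts, demand_cpu, vm_cpu=4.0):
    """Hypothetical EAC-style provisioning: place a container-sized request
    on an existing VM with spare capacity; only boot a new VM (the
    heavyweight step) when no VM can absorb the demand."""
    for h in hosts:
        if h.free_cpu >= demand_cpu:
            h.free_cpu -= demand_cpu
            return h
    new_vm = Host(f"vm-{len(hosts)}", vm_cpu - demand_cpu)
    hosts.append(new_vm)
    return new_vm

if __name__ == "__main__":
    pool = [Host("vm-0", 1.5)]
    for cpu in (0.5, 0.5, 1.0):   # three small container requests
        h = provision_container(pool, cpu)
        print(f"placed {cpu} CPU on {h.name}")
    # Only the last request forces a new VM; the first two reuse vm-0.
```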
XvMotion: Unified Virtual Machine Migration over Long Distance
- In Proc. of USENIX ATC
, 2014
"... USENIX. ..."
(Show Context)
Inter-rack Live Migration of Multiple Virtual Machines
"... Within datacenters, often multiple virtual machines (VMs) need to be live migrated simultaneously for various reasons such as maintenance, power savings, and load balancing. Such mass simultaneous live migration of multiple VMs can trigger large data transfers across the core network links and switc ..."
Cited by 1 (0 self)
Within datacenters, multiple virtual machines (VMs) often need to be live migrated simultaneously for various reasons such as maintenance, power savings, and load balancing. Such mass simultaneous live migration of multiple VMs can trigger large data transfers across the core network links and switches, and negatively affect the cluster-wide performance of network-bound applications. In this paper, we present a distributed system for inter-rack live migration (IRLM), i.e., parallel live migration of multiple VMs across racks. The key performance objective of IRLM is to reduce the traffic load on the core network links during mass VM migration through distributed deduplication of VMs' memory images. We present an initial prototype of IRLM that migrates multiple QEMU/KVM VMs within a Gigabit Ethernet cluster with 10GigE core links. We also present a preliminary evaluation on a small testbed with 6 hosts per rack and 4 VMs per host. Our evaluations show that, compared to the default live migration technique in QEMU/KVM, IRLM reduces the network traffic on core links by up to 44% and the total migration time by up to 26%. We also demonstrate that network-bound applications experience a smaller degradation during migration using IRLM.
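To make the idea of deduplication across a rack concrete, the sketch below elects exactly one sending host per distinct page content, so a page held by several hosts crosses the core links only once. It is a centralized toy of the idea; the election scheme, names, and SHA-1 fingerprints are assumptions for illustration, not IRLM's actual distributed protocol.

```python
import hashlib
from collections import defaultdict

def elect_senders(rack_hosts):
    """Toy rack-level de-duplication for inter-rack migration.

    rack_hosts: dict host -> list of memory pages to migrate out of the rack.
    For each distinct page content, exactly one of the hosts holding it is
    elected to transmit it, so identical pages on different hosts cross the
    core links only once.
    """
    holders = defaultdict(set)          # fingerprint -> hosts holding that page
    content = {}                        # fingerprint -> page bytes
    for host, pages in rack_hosts.items():
        for page in pages:
            fp = hashlib.sha1(page).hexdigest()
            holders[fp].add(host)
            content[fp] = page
    # Deterministic election: the lexicographically smallest holder sends.
    return {fp: (min(hs), content[fp]) for fp, hs in holders.items()}

if __name__ == "__main__":
    zero = b"\x00" * 4096
    rack = {"h1": [zero, b"A" * 4096], "h2": [zero, b"B" * 4096]}
    sent = elect_senders(rack)
    print(f"pages held: 4, pages crossing core links: {len(sent)}")  # -> 3
```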
Cost-Aware Virtual Machine Placement in Cloud Computing Systems
"... Abstract—Cloud service providers operate multiple geographically distributed data centers. These data centers consume huge amounts of energy, which translate into high operating costs. Interestingly, the geographical distribution of the data centers provides many opportunities for cost savings. For ..."
Cloud service providers operate multiple geographically distributed data centers. These data centers consume huge amounts of energy, which translate into high operating costs. Interestingly, the geographical distribution of the data centers provides many opportunities for cost savings. For example, the electricity prices and outside temperatures may differ widely across the data centers. This diversity suggests that intelligently placing load across the data centers may lead to large cost savings. However, aggressively directing load to the cheapest data center may render its cooling infrastructure unable to adjust in time to prevent server overheating. In this paper, we study the impact of load placement policies on cooling and maximum data center temperatures. Based on this study, we propose load placement policies that consider not only all electricity-related costs but also transient cooling effects. Our evaluation studies the ability of different cooling strategies to handle load spikes, demonstrates the behavior of our policies as compared to their simpler counterparts, and explores the effect of many parameter settings. Among other interesting results, we demonstrate that (1) our policies can provide large cost savings, (2) load migration enables savings in many scenarios, and (3) all electricity-related costs must be considered at the same time for higher and consistent cost savings.
EPIC: Platform-as-a-Service Model for Cloud Networking
, February 2011
"... Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interp ..."
Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as the ability to ensure security, performance guarantees, or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called EPIC. Customers can leverage EPIC to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. EPIC primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making EPIC highly efficient. We evaluate an OpenFlow-based prototype of EPIC and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.
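As a purely hypothetical illustration of the kind of customer-facing policy such a system could expose, the sketch below models a virtual network with isolation, custom addressing, a bandwidth guarantee, and an ordered middlebox chain, then flattens it into a per-segment description. This is not EPIC's actual interface; every name and field here is an assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualNetwork:
    """Hypothetical tenant-facing network policy (not EPIC's real API)."""
    name: str
    subnet: str                       # custom addressing for the tenant
    isolated: bool = True             # isolation from other tenants' traffic
    middleboxes: List[str] = field(default_factory=list)  # traversal order
    bandwidth_mbps: int = 0           # 0 = best effort, otherwise a guarantee

def render(policy: VirtualNetwork) -> str:
    """Flatten a policy into the sort of per-segment description a
    programmable network element could act on (purely illustrative)."""
    chain = " -> ".join(["client", *policy.middleboxes, "app-tier"])
    return (f"net {policy.name}: subnet {policy.subnet}, "
            f"isolated={policy.isolated}, "
            f"guarantee={policy.bandwidth_mbps} Mbps, path: {chain}")

if __name__ == "__main__":
    web = VirtualNetwork("web-frontend", "10.1.0.0/24",
                         middleboxes=["firewall", "load-balancer"],
                         bandwidth_mbps=500)
    print(render(web))
```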