Results 1 - 10 of 34
Satori: Enlightened Page Sharing
- In Proceedings of the USENIX Annual Technical Conference, 2009
Abstract - Cited by 41 (0 self)
We introduce Satori, an efficient and effective system for sharing memory in virtualised systems. Satori uses enlightenments in guest operating systems to detect sharing opportunities and manage the surplus memory that results from sharing. Our approach has three key benefits over existing systems: it is better able to detect short-lived sharing opportunities, it is efficient and incurs negligible overhead, and it maintains performance isolation between virtual machines. We present Satori in terms of hypervisor-agnostic design decisions, and also discuss our implementation for the Xen virtual machine monitor. In our evaluation, we show that Satori quickly exploits up to 94% of the maximum possible sharing with insignificant performance overhead. Furthermore, we demonstrate workloads where the additional memory improves macrobenchmark performance by a factor of two.
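Satori's enlightenment-based detection is specific to its design, but the underlying page-sharing idea can be illustrated generically. The sketch below is a simplification of content-based sharing, not Satori's actual mechanism, and all names in it are ours: identical pages across VMs are mapped to a single canonical copy keyed by content hash.

```python
import hashlib

def share_pages(vms):
    """Toy content-based page sharing: for each VM's pages, keep one
    canonical copy per distinct content and count duplicates shared.

    vms: dict mapping a VM name to a list of page contents (bytes).
    Returns (store, mapping, saved): canonical pages keyed by digest,
    a (vm, page index) -> digest map, and the number of pages shared.
    """
    store, mapping, saved = {}, {}, 0
    for vm, pages in vms.items():
        for i, page in enumerate(pages):
            digest = hashlib.sha256(page).hexdigest()
            if digest in store:
                saved += 1                 # duplicate: share the existing copy
            else:
                store[digest] = page       # first copy becomes canonical
            mapping[(vm, i)] = digest
    return store, mapping, saved
```

Real systems share at page granularity with copy-on-write protection so a guest write to a shared page transparently breaks the sharing; that machinery is omitted here.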
A distributed self-learning approach for elastic provisioning of virtualized cloud resources
- In Proc. IEEE/ACM Int'l Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), 2011
Abstract - Cited by 15 (10 self)
Although cloud computing has gained considerable popularity recently, some key impediments to enterprise adoption remain, and cloud management is among the top challenges. The ability to partition hardware resources into virtual machine (VM) instances on the fly offers users an elastic computing environment, but the extra layer of resource virtualization complicates effective cloud management. Time-varying user demand, complicated interplay between co-hosted VMs, and the arbitrary deployment of multi-tier applications make it difficult for administrators to plan good VM configurations. In this paper, we propose a distributed learning mechanism that facilitates self-adaptive virtual machine resource provisioning. We treat cloud resource allocation as a distributed learning task in which each VM, acting as a highly autonomous agent, submits resource requests according to its own benefit; the mechanism evaluates the requests and replies with feedback. At the heart of the VM-side learning engine is a reinforcement learning algorithm with a highly efficient representation of experiences. We prototype the mechanism and the distributed learning algorithm in the iBalloon system. Experiment results on a Xen-based cloud testbed demonstrate the effectiveness of iBalloon: the distributed VM agents reach near-optimal configuration decisions within 7 iteration steps at no more than 5% performance cost. Most importantly, iBalloon shows good scalability on resource allocation, scaling to 128 correlated VMs.
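The per-VM learning loop described in the abstract can be illustrated with a minimal Q-learning agent. This is our own sketch under simplified assumptions (a coarse memory-utilization state, three request actions, and a scalar feedback reward), not iBalloon's actual algorithm or its experience representation:

```python
import random

class VMAgent:
    """Minimal Q-learning agent: a VM requests to shrink, hold, or grow
    its memory allocation and learns from a scalar feedback reward.
    State, actions, and reward shape are illustrative assumptions."""

    ACTIONS = (-1, 0, 1)  # shrink, hold, grow (in fixed allocation steps)

    def __init__(self, levels=10, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in range(levels) for a in self.ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:                    # explore
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def learn(self, state, action, reward, next_state):
        # standard one-step Q-learning update toward the TD target
        best_next = max(self.q[(next_state, a)] for a in self.ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a simulated environment where reward penalizes distance from the demanded allocation, the greedy policy learns to step toward the demand level from either direction.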
Fast and space-efficient virtual machine checkpointing
- In Proceedings of the 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '11), 2011
Abstract - Cited by 12 (3 self)
Checkpointing, i.e., recording the volatile state of a virtual machine (VM) running as a guest in a virtual machine monitor (VMM) for later restoration, includes storing the memory available to the VM. Typically, a full image of the VM's memory along with processor and device states is recorded. With guest memory sizes of up to several gigabytes, the size of the checkpoint images becomes more and more of a concern. In this work we present a technique for fast and space-efficient checkpointing of virtual machines. In contrast to existing methods, our technique eliminates redundant data and stores only a subset of the VM's memory pages. It transparently tracks I/O operations of the guest to external storage and maintains a list of memory pages whose contents are duplicated on non-volatile storage. At a checkpoint, these pages are excluded from the checkpoint
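The exclusion idea can be sketched in a few lines. The toy model below is our illustration, not the paper's implementation; the class, method names, and page/block identifiers are hypothetical. It tracks which memory pages still duplicate an on-disk block and omits them from the checkpoint image, recording only a page-to-block reference for restoration:

```python
class Checkpointer:
    """Sketch: pages loaded from disk and unmodified since need not be
    stored in the checkpoint; a (page -> disk block) reference suffices."""

    def __init__(self, num_pages):
        self.memory = [b"\x00"] * num_pages   # toy page contents
        self.disk_backed = {}                 # page index -> disk block id

    def guest_disk_read(self, page, block, data):
        # hypervisor-visible I/O: the page now duplicates non-volatile storage
        self.memory[page] = data
        self.disk_backed[page] = block

    def guest_write(self, page, data):
        # guest dirtied the page: its contents no longer match the disk copy
        self.memory[page] = data
        self.disk_backed.pop(page, None)

    def checkpoint(self):
        # store only pages without an up-to-date on-disk duplicate,
        # plus the references needed to refill the excluded pages on restore
        image = {p: self.memory[p] for p in range(len(self.memory))
                 if p not in self.disk_backed}
        return image, dict(self.disk_backed)
```

On restore, the excluded pages would be refilled by reading the referenced blocks back from storage.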
OSv—optimizing the operating system for virtual machines
- In Proc. USENIX Annual Technical Conference (ATC) (Philadelphia, PA, June 2014), USENIX Association
Abstract - Cited by 9 (0 self)
Virtual machines in the cloud typically run existing general-purpose operating systems such as Linux. We notice that the cloud's hypervisor already provides some features, such as isolation and hardware abstraction, which are duplicated by traditional operating systems, and that this duplication comes at a cost. We present the design and implementation of OSv, a new guest operating system designed specifically for running a single application on a virtual machine in the cloud. It addresses the duplication issues by using a low-overhead library-OS-like design. It runs existing applications written for Linux, as well as new applications written for OSv. We demonstrate that OSv is able to efficiently run a variety of existing applications. We demonstrate its sub-second boot time, small OS image, and how it makes more memory available to the application. For unmodified network-intensive applications, we demonstrate up to a 25% increase in throughput and a 47% decrease in latency. By using non-POSIX network APIs, we can further improve performance and demonstrate a 290% increase in Memcached throughput.
Applications know best: Performance-driven memory overcommit with ginkgo
- In IEEE Conf. on Cloud Computing Technology and Science (CloudCom), 2011
Abstract - Cited by 7 (0 self)
Memory overcommitment enables cloud providers to host more virtual machines on a single physical server, exploiting spare CPU and I/O capacity when physical memory becomes the bottleneck for virtual machine deployment. However, overcommitting memory can also cause noticeable application performance degradation. We present Ginkgo, a policy framework for overcommitting memory in an informed and automated fashion. By directly correlating application-level performance to memory, Ginkgo automates the redistribution of scarce memory across all virtual machines, satisfying performance and capacity constraints. Ginkgo also achieves memory gains for traditionally fixed-size Java applications by coordinating the redistribution of available memory with the activities of the Java Virtual Machine heap. When compared to a non-overcommitted system, Ginkgo runs the DayTrader 2.0 and SPECWeb 2009 benchmarks with the same number of virtual machines while saving up to 73% (50% omitting free space) of a physical server's memory and keeping application performance degradation within 7%.
Improving data center resource management, deployment, and availability with virtualization
- Thesis, 2009
Abstract - Cited by 6 (0 self)
The increasing demand for storage and computation has driven the growth of large data centers: the massive server farms that run many of today's Internet and business applications. A data center can comprise many thousands of servers and can use as much energy as a small city. The massive amounts of computation power required to drive these systems result in many challenging and interesting distributed systems and resource management problems. In this thesis I investigate challenges related to data centers, with a particular emphasis on how new virtualization technologies can be used to simplify deployment, improve resource efficiency, and reduce the cost of reliability. I first study problems related to the initial capacity planning required when deploying applications into a virtualized data center. I demonstrate how models of virtualization overheads can be utilized to accurately predict the resource needs of virtualized applications, allowing them to be smoothly transitioned into a data center. I next study how memory similarity can be used to guide placement when
Application Level Ballooning for Efficient Server Consolidation
Abstract - Cited by 5 (0 self)
Systems software like databases and language runtimes typically manage memory themselves to exploit application knowledge unavailable to the OS. Traditionally deployed on dedicated machines, they are designed to be statically configured with memory sufficient for peak load. In virtualization scenarios (cloud computing, server consolidation), however, static peak provisioning of RAM to applications dramatically reduces the efficiency and cost-saving benefits of virtualization. Unfortunately, existing memory “ballooning” techniques used to dynamically reallocate physical memory between VMs badly impact the performance of applications which manage their own memory. We address this problem by extending ballooning to applications (here, a database engine and a Java runtime) so that memory can be efficiently and effectively moved between virtualized instances as the demands of each change over time. The result is significantly lower memory requirements to provide the same performance guarantees to a collocated set of VMs running such applications, with minimal overhead and no intrusive changes to application code.
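A minimal sketch of the application-level ballooning idea, assuming the application exposes release/reclaim callbacks for memory it manages itself, such as a database buffer pool (the callback names and MB granularity are our assumptions, not the paper's API):

```python
class AppBalloon:
    """Sketch of an application-level balloon: instead of the OS paging
    application memory blindly, the balloon asks the application to give
    up or take back memory it manages itself."""

    def __init__(self, app_release, app_reclaim, managed_mb):
        self.app_release = app_release    # callback: app frees n MB
        self.app_reclaim = app_reclaim    # callback: app may reuse n MB
        self.managed_mb = managed_mb      # total app-managed memory
        self.ballooned_mb = 0             # currently taken from the app

    def inflate(self, mb):
        # take memory away from the app so the hypervisor can reassign it;
        # capped so the app is never asked for more than it manages
        mb = min(mb, self.managed_mb - self.ballooned_mb)
        self.app_release(mb)
        self.ballooned_mb += mb
        return mb

    def deflate(self, mb):
        # return memory to the app as its demand grows again
        mb = min(mb, self.ballooned_mb)
        self.app_reclaim(mb)
        self.ballooned_mb -= mb
        return mb
```

The point of the design is that the application, not the OS, decides which of its own buffers to surrender, so a shrink costs far less than having hot pages swapped out underneath it.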
A quantitative analysis of performance of shared service systems with multiple resource contention
Abstract - Cited by 3 (2 self)
IT service providers employ server virtualization as a basic building block of their platforms to increase cost effectiveness. The economic benefits of server virtualization come from higher resource utilization and reduced maintenance and operational costs, including energy consumption. However, those benefits require efficient assignment of virtual servers or jobs to a limited number of physical hosts. The primary measures of an assignment's efficiency are that all system resources are utilized effectively and that the performance of each virtual machine is consistent with the desired performance bounds. While satisfying such SLA/performance bounds is essential for many classes of applications, the interference among the virtual machines makes such assurance admittedly difficult. This is primarily
Rethink the virtual machine template
- In ACM SIGPLAN Notices
Abstract - Cited by 3 (0 self)
Server virtualization technology facilitates the creation of an elastic computing infrastructure on demand. Cloud applications like server-based computing and virtual desktops are sensitive to startup latency and issue impromptu requests for VM creation in a real-time manner. Conventional template-based VM creation is a time-consuming process and lacks flexibility for the deployment of stateful VMs. In this paper, we present an abstraction of the VM substrate to represent generic VM instances in miniature. Unlike templates, which are stored as image files on disk, VM substrates are docked in memory in a designated VM pool. They can be activated into stateful VMs without machine booting or application initialization. The abstraction leverages an array of techniques, including VM miniaturization, generalization, cloning and migration, storage copy-on-write, and on-the-fly resource configuration, for rapid deployment of VMs and VM clusters on demand. We implement a prototype on a Xen platform and show that a server with a typical configuration of terabyte-scale disk and gigabyte-scale memory can accommodate more substrates in memory than templates on disk, and that stateful VMs can be created from the same or different substrates and deployed onto the same or different physical hosts in a cluster without causing any configuration conflicts. Experimental results show that general-purpose VMs or a VM cluster for parallel computing can be deployed in a few seconds. We demonstrate the usage of VM substrates in a mobile gaming application.
Mortar: filling the gaps in data center memory
- In Proceedings of the 10th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, 2014
Abstract - Cited by 3 (2 self)
Data center servers are typically overprovisioned, leaving spare memory and CPU capacity idle to handle unpredictable workload bursts by the virtual machines running on them. While this allows for fast hotspot mitigation, it is also wasteful. Unfortunately, making use of spare capacity without impacting active applications is particularly difficult for memory, since memory typically must be allocated in coarse chunks over long timescales. In this work we propose repurposing the poorly utilized memory in a data center as a volatile data store managed by the hypervisor. We present two uses for our Mortar framework: as a cache for prefetching disk blocks, and as an application-level distributed cache that follows the memcached protocol. Both prototypes use the framework to ask the hypervisor to store useful but recoverable data within its free memory pool. This allows the hypervisor to control eviction policies and prioritize access to the cache. We demonstrate the benefits of our prototypes using realistic web applications and disk benchmarks, as well as memory traces gathered from live servers in our university's IT department. By expanding and contracting the data store size based on the free memory available, Mortar improves the average response time of a web application by up to 35% compared to a fixed-size memcached deployment, and improves overall video streaming performance by 45% through prefetching.
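Mortar's hypervisor-managed store can be approximated by a cache whose capacity tracks the free-memory pool. The sketch below is our simplification (a plain LRU cache with an adjustable capacity; Mortar itself lets the hypervisor choose the eviction policy, and the class and method names are ours):

```python
from collections import OrderedDict

class ElasticCache:
    """LRU cache whose capacity follows the host's free memory, so entries
    can be evicted whenever that memory is needed back. Because the data
    is recoverable (cache contents), eviction only costs a refetch."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # oldest entry first

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)  # newest at the end
        self._evict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh LRU position
            return self.entries[key]
        return None                         # miss: caller refetches the data

    def resize(self, new_capacity):
        # contract (evicting LRU entries) when the host reclaims memory,
        # or expand when more free memory becomes available
        self.capacity = new_capacity
        self._evict()

    def _evict(self):
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # drop least recently used
```

The key property is that `resize` can shrink the store at any time without correctness consequences for the applications using it.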