Results 1 - 10 of 15
Declarative Automated Cloud Resource Orchestration
"... As cloud computing becomes widely deployed, one of the challenges faced involves the ability to orchestrate a highly complex set of subsystems (compute, storage, network resources) that span large geographic areas serving diverse clients. To ease this process, we present COPE (Cloud Orchestration Po ..."
Abstract - Cited by 13 (8 self)
As cloud computing becomes widely deployed, one of the challenges faced involves the ability to orchestrate a highly complex set of subsystems (compute, storage, network resources) that span large geographic areas serving diverse clients. To ease this process, we present COPE (Cloud Orchestration Policy Engine), a distributed platform that allows cloud providers to perform declarative automated cloud resource orchestration. In COPE, cloud providers specify system-wide constraints and goals using COPElog, a declarative policy language geared towards specifying distributed constraint optimizations. COPE takes policy specifications and cloud system states as input, and then optimizes compute, storage and network resource allocations within the cloud such that provider operational objectives and customer SLAs can be better met. We describe our proposed integration with a cloud orchestration platform, and present initial evaluation results that demonstrate the viability of COPE using production traces from a large hosting company in the US. We further discuss an orchestration scenario that involves geographically distributed data centers, and conclude with the ongoing status of our work.
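The abstract describes COPE's job informally; as a point of reference only, the toy Python sketch below shows the kind of constrained placement decision COPE automates (minimize provider cost subject to per-data-center capacity). It is not COPElog, and the data centers, capacities, and costs are invented for illustration.

# Minimal sketch (not COPElog) of the kind of constraint optimization COPE
# automates: assign VM requests to data centers so that capacity constraints
# hold and total operating cost is minimized. All values are hypothetical.
from itertools import product

datacenters = {"dc-east": {"capacity": 3, "cost": 1.0},
               "dc-west": {"capacity": 2, "cost": 1.5}}
vm_requests = ["vm1", "vm2", "vm3", "vm4"]

def feasible(assignment):
    """Provider-side constraint: no data center exceeds its capacity."""
    load = {dc: 0 for dc in datacenters}
    for dc in assignment:
        load[dc] += 1
    return all(load[dc] <= datacenters[dc]["capacity"] for dc in datacenters)

def cost(assignment):
    """Provider objective: total cost of all placements."""
    return sum(datacenters[dc]["cost"] for dc in assignment)

# Exhaustive search stands in for the distributed constraint solver.
best = min((a for a in product(datacenters, repeat=len(vm_requests)) if feasible(a)),
           key=cost)
print(dict(zip(vm_requests, best)), cost(best))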
vTurbo: Accelerating Virtual Machine I/O Processing Using Designated Turbo-Sliced Core
"... In a virtual machine (VM) consolidation environment, it has been observed that CPU sharing among multiple VMs will lead to I/O processing latency because of the CPU access latency experienced by each VM. In this paper, we present vTurbo, a system that accelerates I/O processing for VMs by offloading ..."
Abstract - Cited by 11 (2 self)
In a virtual machine (VM) consolidation environment, it has been observed that CPU sharing among multiple VMs will lead to I/O processing latency because of the CPU access latency experienced by each VM. In this paper, we present vTurbo, a system that accelerates I/O processing for VMs by offloading I/O processing to a designated core. More specifically, the designated core, called the turbo core, runs with a much smaller time slice (e.g., 0.1 ms) than the cores shared by production VMs. Most of the I/O IRQs for the production VMs are delegated to the turbo core for more timely processing, hence accelerating the I/O processing for the production VMs. Our experiments show that vTurbo significantly improves the VMs' network and disk I/O throughput, which consequently translates into application-level performance improvement.
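A rough way to see why the turbo core's tiny slice matters: under round-robin sharing, an I/O interrupt destined for a descheduled VM can wait for every other VM's slice before being handled. The sketch below is only that arithmetic; the 30 ms "regular" slice is an assumed value, while the 0.1 ms figure comes from the abstract.

# Illustrative arithmetic behind vTurbo's design: worst-case wait before an
# I/O IRQ is handled when N VMs share a core round-robin with a given time
# slice. Values are hypothetical, not measurements from the paper.
def worst_case_irq_wait_ms(num_vms: int, slice_ms: float) -> float:
    """An IRQ for a descheduled VM may wait for all other VMs' slices."""
    return (num_vms - 1) * slice_ms

print(worst_case_irq_wait_ms(4, 30.0))   # regular core, assumed 30 ms slices -> 90 ms
print(worst_case_irq_wait_ms(4, 0.1))    # turbo core with 0.1 ms slices      -> 0.3 ms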
vSlicer: Latency-Aware Virtual Machine Scheduling via Differentiated-Frequency CPU Slicing
, 2012
"... Recent advances in virtualization technologies have made it feasible to host multiple virtual machines (VMs) in the same physical host and even the same CPU core, with fair share of the physical resources among the VMs. However, as more VMs share the same core/CPU, the CPU access latency experienced ..."
Abstract - Cited by 11 (3 self)
Recent advances in virtualization technologies have made it feasible to host multiple virtual machines (VMs) in the same physical host, and even on the same CPU core, with fair sharing of the physical resources among the VMs. However, as more VMs share the same core/CPU, the CPU access latency experienced by each VM increases substantially, which translates into longer I/O processing latency perceived by I/O-bound applications. To mitigate such impact while retaining the benefit of CPU sharing, we introduce a new class of VMs called latency-sensitive VMs (LSVMs), which achieve better performance for I/O-bound applications while maintaining the same resource share (and thus cost) as other CPU-sharing VMs. LSVMs are enabled by vSlicer, a hypervisor-level technique that schedules each LSVM more frequently but with a smaller micro time slice. vSlicer enables more timely processing of I/O events by LSVMs, without violating the CPU share fairness among all sharing VMs. Our evaluation of a vSlicer prototype in Xen shows that vSlicer substantially reduces network packet round-trip times and jitter and improves application-level performance. For example, vSlicer doubles both the connection rate and request processing throughput of an Apache web server; reduces a VoIP server's upstream jitter by 62%; and shortens the execution times of Intel MPI benchmark programs by half or more.
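The following toy calculation (not from the paper) illustrates the trade-off vSlicer exploits: splitting the same CPU entitlement into more, smaller slices leaves the share unchanged but shrinks the worst-case gap between a latency-sensitive VM's scheduling turns. The period, share, and slice counts are hypothetical.

# Sketch of the vSlicer idea in numbers: same CPU share, smaller and more
# frequent slices, hence shorter waits between scheduling opportunities.
def max_wait_ms(period_ms: float, share: float, runs_per_period: int) -> float:
    """Worst-case wait between a VM's turns when its entitlement
    (period * share) is split into runs_per_period slices spread evenly."""
    my_time = period_ms * share
    return (period_ms - my_time) / runs_per_period

print(max_wait_ms(120.0, 0.25, 1))  # regular VM: one 30 ms slice   -> 90 ms wait
print(max_wait_ms(120.0, 0.25, 5))  # LSVM: five 6 ms micro-slices  -> 18 ms wait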
The TCP Outcast Problem: Exposing Unfairness in Data Center Networks
, 2011
"... In this paper, we observe that bandwidth sharing via TCP in commodity data center networks organized in multi-rooted tree topologies can lead to severe unfairness under common traffic patterns, which we term as the TCP Outcast problem. When many flows and a few flows arrive at two ports of a switch ..."
Abstract - Cited by 7 (0 self)
In this paper, we observe that bandwidth sharing via TCP in commodity data center networks organized in multi-rooted tree topologies can lead to severe unfairness under common traffic patterns, which we term the TCP Outcast problem. When a large set of flows and a small set of flows arrive at two input ports of a switch destined for a common output port, the small set of flows loses a significant share of its throughput (sometimes by almost an order of magnitude). The Outcast problem occurs mainly in the drop-tail queues that commodity switches use. Using careful analysis, we discover that drop-tail queues exhibit a phenomenon known as port blackout, in which a series of packets from one port is dropped. Port blackout affects the smaller set of flows more significantly, as they lose more consecutive packets, leading to TCP timeouts. We demonstrate the existence of the TCP Outcast problem on a data center network testbed built from real hardware, under different scenarios. We then evaluate different solutions, such as RED, SFQ, TCP pacing, and a new solution called equal-length routing, to mitigate the Outcast problem.
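To make the port-blackout effect concrete, here is a deliberately simplified single-queue simulation (not the paper's testbed): a drop-tail queue shared by a busy input port and a quiet one, where the busy port's back-to-back arrivals repeatedly claim the slot freed by the output, so the quiet port's packets are dropped at a disproportionate rate. Queue length and arrival pattern are invented.

# Toy drop-tail queue illustrating the unfairness behind the TCP Outcast
# problem. Each step: the output drains one packet, then two packets arrive
# from the busy port followed by one from the quiet port, so the busy port's
# packets reach the freed slot first and the quiet port's packet is tail-dropped.
from collections import deque

QUEUE_LEN = 4
queue = deque()
sent = {"many-flow port": 0, "few-flow port": 0}
dropped = {"many-flow port": 0, "few-flow port": 0}

for _ in range(1000):
    if queue:
        queue.popleft()                       # output link drains one packet
    for src in ("many-flow port", "many-flow port", "few-flow port"):
        sent[src] += 1
        if len(queue) < QUEUE_LEN:
            queue.append(src)                 # admitted to the queue
        else:
            dropped[src] += 1                 # tail drop

for src in sent:
    print(src, "loss rate:", round(dropped[src] / sent[src], 3))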
Detecting Co-Residency with Active Traffic Analysis Techniques
"... Virtualization is the cornerstone of the developing third party compute industry, allowing cloud providers to instantiate multiple virtual machines (VMs) on a single set of physical resources. Customers utilize cloud resources alongside unknown and untrusted parties, creating the co-resident threat ..."
Abstract - Cited by 5 (0 self)
Virtualization is the cornerstone of the developing third-party compute industry, allowing cloud providers to instantiate multiple virtual machines (VMs) on a single set of physical resources. Customers utilize cloud resources alongside unknown and untrusted parties, creating the co-resident threat: unless perfect isolation is provided by the virtual hypervisor, there exists the possibility of unauthorized access to sensitive customer information through the exploitation of covert side channels. This paper presents co-resident watermarking, a traffic analysis attack that allows a malicious co-resident VM to inject a watermark signature into the network flow of a target instance. This watermark can be used to exfiltrate and broadcast co-residency data from the physical machine, compromising isolation without reliance on internal side channels. As a result, our approach is difficult to defend against without costly underutilization of the physical machine. We evaluate co-resident watermarking under a large variety of conditions, system loads and hardware configurations, from a local lab environment to production cloud environments (Futuregrid and the University of Oregon's ACISS). We demonstrate the ability to initiate a covert channel of 4 bits per second, and we can confirm co-residency with a target VM instance in less than 10 seconds. We also show that passive load measurement of the target and subsequent behavior profiling is possible with this attack. Our investigation demonstrates the need for careful design of hardware to be used in the cloud.
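The sketch below is a toy model, not the authors' attack code: it only illustrates the signaling idea behind co-resident watermarking, in which the attacker alternates load according to a bit pattern and the watermark is recovered from dips in the target's observed throughput. Throughput, dip magnitude, and noise values are made up.

# Toy model of watermark injection/recovery via shared-hardware contention.
# If the attacker is co-resident, the target's throughput dips during "1" bits.
import random
random.seed(1)

watermark = [1, 0, 1, 1, 0, 0, 1, 0]          # signature the attacker embeds
BASE_MBPS, DIP_MBPS, NOISE = 100.0, 8.0, 1.5  # hypothetical values

def observed_throughput(bit, co_resident):
    dip = DIP_MBPS if (co_resident and bit) else 0.0
    return BASE_MBPS - dip + random.gauss(0.0, NOISE)

def decode(samples):
    """Recover bits by thresholding each window against the overall mean."""
    mean = sum(samples) / len(samples)
    return [1 if s < mean else 0 for s in samples]

samples = [observed_throughput(b, co_resident=True) for b in watermark]
print("recovered:", decode(samples), "matches:", decode(samples) == watermark)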
vPipe: One Pipe to Connect Them All!
"... Many enterprises use the cloud to host applications such as web services, big data analytics and storage. One common characteristic among these applications is that, they involve significant I/O activities, moving data from a source to a sink, often without even any intermediate processing. However, ..."
Abstract - Cited by 1 (1 self)
Many enterprises use the cloud to host applications such as web services, big data analytics and storage. One common characteristic among these applications is that they involve significant I/O activity, moving data from a source to a sink, often without any intermediate processing. However, cloud environments tend to be virtualized in nature, with tenants obtaining virtual machines (VMs) that often share CPUs. Virtualization introduces significant overhead for I/O activity, as data needs to be moved across several protection boundaries. CPU sharing introduces further delays into the overall I/O processing data flow. In this paper, we propose a simple abstraction called vPipe to mitigate these problems. vPipe introduces a simple “pipe” that can connect data sources and sinks, which can be either files or TCP sockets, at the virtual machine monitor (VMM) layer. Shortcutting the I/O at the VMM layer achieves significant CPU savings and avoids the scheduling latencies that degrade I/O throughput. Our evaluation of a vPipe prototype on Xen shows that vPipe can improve file transfer throughput significantly while reducing overall CPU utilization.
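vPipe's shortcut lives inside the VMM, so it cannot be reproduced from guest user space; purely as an analogy, the snippet below uses the kernel's sendfile(2) (via Python's os.sendfile) to move file data to a socket without lifting it into the application, the same "keep the payload out of the app" idea one layer down. The path and port are hypothetical, and the code is POSIX-only.

# Analogy only: this is not vPipe. sendfile(2) lets the kernel copy file data
# straight to a socket so the application never touches the payload, which is
# the spirit of vPipe's source-to-sink "pipe", but at the OS rather than VMM layer.
import os
import socket

def serve_file_once(path: str, port: int = 8000) -> None:
    with socket.create_server(("0.0.0.0", port)) as srv:
        conn, _ = srv.accept()
        with conn, open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            sent = 0
            while sent < size:
                # Kernel moves file -> socket directly, bypassing the app buffer.
                sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)

# serve_file_once("/tmp/example.bin")   # hypothetical file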
vPipe: Piped I/O Offloading for Efficient Data Movement in Virtualized Clouds
In ACM SOCC, 2014
"... Virtualization introduces a significant amount of overhead for I/O intensive applications running inside virtual ma-chines (VMs). Such overhead is caused by two main sources: (1) device virtualization and (2) VM scheduling. Device virtualization causes significant CPU overhead as I/O data need to be ..."
Abstract - Cited by 1 (1 self)
Virtualization introduces a significant amount of overhead for I/O-intensive applications running inside virtual machines (VMs). Such overhead is caused by two main sources: (1) device virtualization and (2) VM scheduling. Device virtualization causes significant CPU overhead as I/O data need to be moved across several protection boundaries. VM scheduling introduces delays into the overall I/O processing path due to the wait time of VMs' virtual CPUs in the run queue. We observe that such overhead particularly affects many applications involving piped I/O data movement, such as web servers, streaming servers, big data analytics, and storage, because the data has to be transferred first into the application from the source I/O device and then back to the sink I/O device, incurring the virtualization overhead twice. In this paper, we propose vPipe, a programmable framework to mitigate this problem for a wide range of applications running in virtualized clouds. vPipe enables direct “piping” of application I/O data from source to sink devices, either files or TCP sockets, at the virtual machine monitor (VMM) level. By doing so, vPipe avoids both device virtualization overhead and VM scheduling delays, resulting in improved I/O throughput and application performance as well as significant CPU savings.
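For contrast with the previous sketch, the snippet below spells out the baseline data path this abstract criticizes: a guest-level relay that reads each chunk from the source into the application and writes it back out to the sink, crossing the device-virtualization boundary twice. It only illustrates what vPipe removes, not vPipe itself; the file names are hypothetical.

# The conventional piped-I/O path inside a guest: every chunk is lifted into
# the application and pushed back out, paying the virtualization cost twice.
# vPipe's contribution is to perform this loop below the guest, at the VMM.
def relay(source, sink, chunk_size: int = 64 * 1024) -> int:
    """Copy source to sink through the application; returns bytes moved."""
    total = 0
    while True:
        chunk = source.read(chunk_size)        # device -> guest kernel -> app
        if not chunk:
            return total
        sink.write(chunk)                      # app -> guest kernel -> device
        total += len(chunk)

# Example: relay(open("in.dat", "rb"), open("out.dat", "wb"))  # hypothetical files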
On detecting co-resident cloud instances using network flow watermarking techniques
, 2013
"... ..."